| problem_id (stringlengths 18-22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64, 271-4.1k) | num_tokens_diff (int64, 47-1.02k) |
|---|---|---|---|---|---|---|---|---|
gh_patches_debug_3568 | rasdani/github-patches | git_diff | nf-core__tools-1441 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
All modules with the same prefix being found with nf-core modules update
### Description of the bug
Not sure whether this affects any other `nf-core modules` commands but if I want to update `minia` in my pipeline then it is correctly updated but another module with the same prefix `miniasm` is being installed too as you can see at the end of the console output below. I suspect there is a regex or glob that needs to be updated somewhere in the tools codebase to only find and update the required module.
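The suspected prefix problem can be reproduced with plain string matching; the following is a minimal, hypothetical sketch (the paths are illustrative, not taken from a real checkout):
```python
module = "minia"
paths = ["modules/minia/main.nf", "modules/miniasm/main.nf"]

# A bare prefix check matches every module that merely starts with the name.
print([p for p in paths if p.startswith(f"modules/{module}")])
# -> ['modules/minia/main.nf', 'modules/miniasm/main.nf']

# Requiring the trailing slash restricts the match to the requested module only.
print([p for p in paths if p.startswith(f"modules/{module}/")])
# -> ['modules/minia/main.nf']
```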
### Command used and terminal output
```console
$ git clone https://github.com/nf-core/viralrecon.git
Cloning into 'viralrecon'...
remote: Enumerating objects: 10345, done.
remote: Counting objects: 100% (100/100), done.
remote: Compressing objects: 100% (88/88), done.
remote: Total 10345 (delta 22), reused 42 (delta 4), pack-reused 10245
Receiving objects: 100% (10345/10345), 7.96 MiB | 5.16 MiB/s, done.
Resolving deltas: 100% (6536/6536), done.
$ cd viralrecon
$ nf-core modules update minia
,--./,-.
___ __ __ __ ___ /,-._.--~\
|\ | |__ __ / ` / \ |__) |__ } {
| \| | \__, \__/ | \ |___ \`-._,-`-,
`._,._,'
nf-core/tools version 2.2
INFO Updating 'nf-core/modules/minia' update.py:239
INFO Downloaded 4 files to ./modules/nf-core/modules/minia modules_command.py:273
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
(use "git add/rm <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: modules.json
deleted: modules/nf-core/modules/minia/functions.nf
modified: modules/nf-core/modules/minia/main.nf
modified: modules/nf-core/modules/minia/meta.yml
Untracked files:
(use "git add <file>..." to include in what will be committed)
modules/nf-core/modules/miniasm/
no changes added to commit (use "git add" and/or "git commit -a")
```
### System information
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nf_core/modules/modules_repo.py`
Content:
```
1 import os
2 import requests
3 import base64
4 import sys
5 import logging
6 import nf_core.utils
7
8 log = logging.getLogger(__name__)
9
10
11 class ModulesRepo(object):
12 """
13 An object to store details about the repository being used for modules.
14
15 Used by the `nf-core modules` top-level command with -r and -b flags,
16 so that this can be used in the same way by all sub-commands.
17 """
18
19 def __init__(self, repo="nf-core/modules", branch=None):
20 self.name = repo
21 self.branch = branch
22
23 # Don't bother fetching default branch if we're using nf-core
24 if not self.branch and self.name == "nf-core/modules":
25 self.branch = "master"
26
27 # Verify that the repo seems to be correctly configured
28 if self.name != "nf-core/modules" or self.branch:
29
30 # Get the default branch if not set
31 if not self.branch:
32 self.get_default_branch()
33
34 try:
35 self.verify_modules_repo()
36 except LookupError:
37 raise
38
39 self.owner, self.repo = self.name.split("/")
40 self.modules_file_tree = {}
41 self.modules_avail_module_names = []
42
43 def get_default_branch(self):
44 """Get the default branch for a GitHub repo"""
45 api_url = f"https://api.github.com/repos/{self.name}"
46 response = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())
47 if response.status_code == 200:
48 self.branch = response.json()["default_branch"]
49 log.debug(f"Found default branch to be '{self.branch}'")
50 else:
51 raise LookupError(f"Could not find repository '{self.name}' on GitHub")
52
53 def verify_modules_repo(self):
54
55 # Check if name seems to be well formed
56 if self.name.count("/") != 1:
57 raise LookupError(f"Repository name '{self.name}' should be of the format '<github_user_name>/<repo_name>'")
58
59 # Check if repository exist
60 api_url = f"https://api.github.com/repos/{self.name}/branches"
61 response = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())
62 if response.status_code == 200:
63 branches = [branch["name"] for branch in response.json()]
64 if self.branch not in branches:
65 raise LookupError(f"Branch '{self.branch}' not found in '{self.name}'")
66 else:
67 raise LookupError(f"Repository '{self.name}' is not available on GitHub")
68
69 api_url = f"https://api.github.com/repos/{self.name}/contents?ref={self.branch}"
70 response = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())
71 if response.status_code == 200:
72 dir_names = [entry["name"] for entry in response.json() if entry["type"] == "dir"]
73 if "modules" not in dir_names:
74 err_str = f"Repository '{self.name}' ({self.branch}) does not contain a 'modules/' directory"
75 if "software" in dir_names:
76 err_str += ".\nAs of version 2.0, the 'software/' directory should be renamed to 'modules/'"
77 raise LookupError(err_str)
78 else:
79 raise LookupError(f"Unable to fetch repository information from '{self.name}' ({self.branch})")
80
81 def get_modules_file_tree(self):
82 """
83 Fetch the file list from the repo, using the GitHub API
84
85 Sets self.modules_file_tree
86 self.modules_avail_module_names
87 """
88 api_url = "https://api.github.com/repos/{}/git/trees/{}?recursive=1".format(self.name, self.branch)
89 r = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())
90 if r.status_code == 404:
91 raise LookupError("Repository / branch not found: {} ({})\n{}".format(self.name, self.branch, api_url))
92 elif r.status_code != 200:
93 raise LookupError(
94 "Could not fetch {} ({}) tree: {}\n{}".format(self.name, self.branch, r.status_code, api_url)
95 )
96
97 result = r.json()
98 assert result["truncated"] == False
99
100 self.modules_file_tree = result["tree"]
101 for f in result["tree"]:
102 if f["path"].startswith(f"modules/") and f["path"].endswith("/main.nf") and "/test/" not in f["path"]:
103 # remove modules/ and /main.nf
104 self.modules_avail_module_names.append(f["path"].replace("modules/", "").replace("/main.nf", ""))
105 if len(self.modules_avail_module_names) == 0:
106 raise LookupError(f"Found no modules in '{self.name}'")
107
108 def get_module_file_urls(self, module, commit=""):
109 """Fetch list of URLs for a specific module
110
111 Takes the name of a module and iterates over the GitHub repo file tree.
112 Loops over items that are prefixed with the path 'modules/<module_name>' and ignores
113 anything that's not a blob. Also ignores the test/ subfolder.
114
115 Returns a dictionary with keys as filenames and values as GitHub API URLs.
116 These can be used to then download file contents.
117
118 Args:
119 module (string): Name of module for which to fetch a set of URLs
120
121 Returns:
122 dict: Set of files and associated URLs as follows:
123
124 {
125 'modules/fastqc/main.nf': 'https://api.github.com/repos/nf-core/modules/git/blobs/65ba598119206a2b851b86a9b5880b5476e263c3',
126 'modules/fastqc/meta.yml': 'https://api.github.com/repos/nf-core/modules/git/blobs/0d5afc23ba44d44a805c35902febc0a382b17651'
127 }
128 """
129 results = {}
130 for f in self.modules_file_tree:
131 if not f["path"].startswith("modules/{}".format(module)):
132 continue
133 if f["type"] != "blob":
134 continue
135 if "/test/" in f["path"]:
136 continue
137 results[f["path"]] = f["url"]
138 if commit != "":
139 for path in results:
140 results[path] = f"https://api.github.com/repos/{self.name}/contents/{path}?ref={commit}"
141 return results
142
143 def download_gh_file(self, dl_filename, api_url):
144 """Download a file from GitHub using the GitHub API
145
146 Args:
147 dl_filename (string): Path to save file to
148 api_url (string): GitHub API URL for file
149
150 Raises:
151 If a problem, raises an error
152 """
153
154 # Make target directory if it doesn't already exist
155 dl_directory = os.path.dirname(dl_filename)
156 if not os.path.exists(dl_directory):
157 os.makedirs(dl_directory)
158
159 # Call the GitHub API
160 r = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())
161 if r.status_code != 200:
162 raise LookupError("Could not fetch {} file: {}\n {}".format(self.name, r.status_code, api_url))
163 result = r.json()
164 file_contents = base64.b64decode(result["content"])
165
166 # Write the file contents
167 with open(dl_filename, "wb") as fh:
168 fh.write(file_contents)
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nf_core/modules/modules_repo.py b/nf_core/modules/modules_repo.py
--- a/nf_core/modules/modules_repo.py
+++ b/nf_core/modules/modules_repo.py
@@ -128,7 +128,7 @@
"""
results = {}
for f in self.modules_file_tree:
- if not f["path"].startswith("modules/{}".format(module)):
+ if not f["path"].startswith("modules/{}/".format(module)):
continue
if f["type"] != "blob":
continue
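A regression test along the following lines could guard this one-character fix. This is only a sketch: the fake tree entries are illustrative, and `__new__` is used to skip the GitHub API calls made in `ModulesRepo.__init__`.
```python
from nf_core.modules.modules_repo import ModulesRepo


def test_get_module_file_urls_ignores_modules_sharing_a_prefix():
    repo = ModulesRepo.__new__(ModulesRepo)  # bypass __init__ (network calls)
    # Minimal fake GitHub tree containing two modules that share the 'minia' prefix.
    repo.modules_file_tree = [
        {"path": "modules/minia/main.nf", "type": "blob", "url": "u1"},
        {"path": "modules/minia/meta.yml", "type": "blob", "url": "u2"},
        {"path": "modules/miniasm/main.nf", "type": "blob", "url": "u3"},
    ]
    urls = repo.get_module_file_urls("minia")
    assert set(urls) == {"modules/minia/main.nf", "modules/minia/meta.yml"}
```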
| {"golden_diff": "diff --git a/nf_core/modules/modules_repo.py b/nf_core/modules/modules_repo.py\n--- a/nf_core/modules/modules_repo.py\n+++ b/nf_core/modules/modules_repo.py\n@@ -128,7 +128,7 @@\n \"\"\"\n results = {}\n for f in self.modules_file_tree:\n- if not f[\"path\"].startswith(\"modules/{}\".format(module)):\n+ if not f[\"path\"].startswith(\"modules/{}/\".format(module)):\n continue\n if f[\"type\"] != \"blob\":\n continue\n", "issue": "All modules with the same prefix being found with nf-core modules update\n### Description of the bug\n\nNot sure whether this affects any other `nf-core modules` commands but if I want to update `minia` in my pipeline then it is correctly updated but another module with the same prefix `miniasm` is being installed too as you can see at the end of the console output below. I suspect there is a regex or glob that needs to be updated somewhere in the tools codebase to only find and update the required module.\n\n### Command used and terminal output\n\n```console\n$ git clone https://github.com/nf-core/viralrecon.git \r\nCloning into 'viralrecon'...\r\nremote: Enumerating objects: 10345, done.\r\nremote: Counting objects: 100% (100/100), done.\r\nremote: Compressing objects: 100% (88/88), done.\r\nremote: Total 10345 (delta 22), reused 42 (delta 4), pack-reused 10245\r\nReceiving objects: 100% (10345/10345), 7.96 MiB | 5.16 MiB/s, done.\r\nResolving deltas: 100% (6536/6536), done.\r\n\r\n$ cd viralrecon \r\n\r\n$ nf-core modules update minia\r\n\r\n ,--./,-.\r\n ___ __ __ __ ___ /,-._.--~\\\r\n |\\ | |__ __ / ` / \\ |__) |__ } {\r\n | \\| | \\__, \\__/ | \\ |___ \\`-._,-`-,\r\n `._,._,'\r\n\r\n nf-core/tools version 2.2\r\n\r\n\r\n\r\nINFO Updating 'nf-core/modules/minia' update.py:239\r\nINFO Downloaded 4 files to ./modules/nf-core/modules/minia modules_command.py:273\r\n\r\n$ git status \r\nOn branch master\r\nYour branch is up-to-date with 'origin/master'.\r\n\r\nChanges not staged for commit:\r\n (use \"git add/rm <file>...\" to update what will be committed)\r\n (use \"git restore <file>...\" to discard changes in working directory)\r\n\tmodified: modules.json\r\n\tdeleted: modules/nf-core/modules/minia/functions.nf\r\n\tmodified: modules/nf-core/modules/minia/main.nf\r\n\tmodified: modules/nf-core/modules/minia/meta.yml\r\n\r\nUntracked files:\r\n (use \"git add <file>...\" to include in what will be committed)\r\n\tmodules/nf-core/modules/miniasm/\r\n\r\nno changes added to commit (use \"git add\" and/or \"git commit -a\")\n```\n\n\n### System information\n\n_No response_\n", "before_files": [{"content": "import os\nimport requests\nimport base64\nimport sys\nimport logging\nimport nf_core.utils\n\nlog = logging.getLogger(__name__)\n\n\nclass ModulesRepo(object):\n \"\"\"\n An object to store details about the repository being used for modules.\n\n Used by the `nf-core modules` top-level command with -r and -b flags,\n so that this can be used in the same way by all sub-commands.\n \"\"\"\n\n def __init__(self, repo=\"nf-core/modules\", branch=None):\n self.name = repo\n self.branch = branch\n\n # Don't bother fetching default branch if we're using nf-core\n if not self.branch and self.name == \"nf-core/modules\":\n self.branch = \"master\"\n\n # Verify that the repo seems to be correctly configured\n if self.name != \"nf-core/modules\" or self.branch:\n\n # Get the default branch if not set\n if not self.branch:\n self.get_default_branch()\n\n try:\n self.verify_modules_repo()\n except LookupError:\n raise\n\n self.owner, self.repo = 
self.name.split(\"/\")\n self.modules_file_tree = {}\n self.modules_avail_module_names = []\n\n def get_default_branch(self):\n \"\"\"Get the default branch for a GitHub repo\"\"\"\n api_url = f\"https://api.github.com/repos/{self.name}\"\n response = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())\n if response.status_code == 200:\n self.branch = response.json()[\"default_branch\"]\n log.debug(f\"Found default branch to be '{self.branch}'\")\n else:\n raise LookupError(f\"Could not find repository '{self.name}' on GitHub\")\n\n def verify_modules_repo(self):\n\n # Check if name seems to be well formed\n if self.name.count(\"/\") != 1:\n raise LookupError(f\"Repository name '{self.name}' should be of the format '<github_user_name>/<repo_name>'\")\n\n # Check if repository exist\n api_url = f\"https://api.github.com/repos/{self.name}/branches\"\n response = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())\n if response.status_code == 200:\n branches = [branch[\"name\"] for branch in response.json()]\n if self.branch not in branches:\n raise LookupError(f\"Branch '{self.branch}' not found in '{self.name}'\")\n else:\n raise LookupError(f\"Repository '{self.name}' is not available on GitHub\")\n\n api_url = f\"https://api.github.com/repos/{self.name}/contents?ref={self.branch}\"\n response = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())\n if response.status_code == 200:\n dir_names = [entry[\"name\"] for entry in response.json() if entry[\"type\"] == \"dir\"]\n if \"modules\" not in dir_names:\n err_str = f\"Repository '{self.name}' ({self.branch}) does not contain a 'modules/' directory\"\n if \"software\" in dir_names:\n err_str += \".\\nAs of version 2.0, the 'software/' directory should be renamed to 'modules/'\"\n raise LookupError(err_str)\n else:\n raise LookupError(f\"Unable to fetch repository information from '{self.name}' ({self.branch})\")\n\n def get_modules_file_tree(self):\n \"\"\"\n Fetch the file list from the repo, using the GitHub API\n\n Sets self.modules_file_tree\n self.modules_avail_module_names\n \"\"\"\n api_url = \"https://api.github.com/repos/{}/git/trees/{}?recursive=1\".format(self.name, self.branch)\n r = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())\n if r.status_code == 404:\n raise LookupError(\"Repository / branch not found: {} ({})\\n{}\".format(self.name, self.branch, api_url))\n elif r.status_code != 200:\n raise LookupError(\n \"Could not fetch {} ({}) tree: {}\\n{}\".format(self.name, self.branch, r.status_code, api_url)\n )\n\n result = r.json()\n assert result[\"truncated\"] == False\n\n self.modules_file_tree = result[\"tree\"]\n for f in result[\"tree\"]:\n if f[\"path\"].startswith(f\"modules/\") and f[\"path\"].endswith(\"/main.nf\") and \"/test/\" not in f[\"path\"]:\n # remove modules/ and /main.nf\n self.modules_avail_module_names.append(f[\"path\"].replace(\"modules/\", \"\").replace(\"/main.nf\", \"\"))\n if len(self.modules_avail_module_names) == 0:\n raise LookupError(f\"Found no modules in '{self.name}'\")\n\n def get_module_file_urls(self, module, commit=\"\"):\n \"\"\"Fetch list of URLs for a specific module\n\n Takes the name of a module and iterates over the GitHub repo file tree.\n Loops over items that are prefixed with the path 'modules/<module_name>' and ignores\n anything that's not a blob. 
Also ignores the test/ subfolder.\n\n Returns a dictionary with keys as filenames and values as GitHub API URLs.\n These can be used to then download file contents.\n\n Args:\n module (string): Name of module for which to fetch a set of URLs\n\n Returns:\n dict: Set of files and associated URLs as follows:\n\n {\n 'modules/fastqc/main.nf': 'https://api.github.com/repos/nf-core/modules/git/blobs/65ba598119206a2b851b86a9b5880b5476e263c3',\n 'modules/fastqc/meta.yml': 'https://api.github.com/repos/nf-core/modules/git/blobs/0d5afc23ba44d44a805c35902febc0a382b17651'\n }\n \"\"\"\n results = {}\n for f in self.modules_file_tree:\n if not f[\"path\"].startswith(\"modules/{}\".format(module)):\n continue\n if f[\"type\"] != \"blob\":\n continue\n if \"/test/\" in f[\"path\"]:\n continue\n results[f[\"path\"]] = f[\"url\"]\n if commit != \"\":\n for path in results:\n results[path] = f\"https://api.github.com/repos/{self.name}/contents/{path}?ref={commit}\"\n return results\n\n def download_gh_file(self, dl_filename, api_url):\n \"\"\"Download a file from GitHub using the GitHub API\n\n Args:\n dl_filename (string): Path to save file to\n api_url (string): GitHub API URL for file\n\n Raises:\n If a problem, raises an error\n \"\"\"\n\n # Make target directory if it doesn't already exist\n dl_directory = os.path.dirname(dl_filename)\n if not os.path.exists(dl_directory):\n os.makedirs(dl_directory)\n\n # Call the GitHub API\n r = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())\n if r.status_code != 200:\n raise LookupError(\"Could not fetch {} file: {}\\n {}\".format(self.name, r.status_code, api_url))\n result = r.json()\n file_contents = base64.b64decode(result[\"content\"])\n\n # Write the file contents\n with open(dl_filename, \"wb\") as fh:\n fh.write(file_contents)\n", "path": "nf_core/modules/modules_repo.py"}], "after_files": [{"content": "import os\nimport requests\nimport base64\nimport sys\nimport logging\nimport nf_core.utils\n\nlog = logging.getLogger(__name__)\n\n\nclass ModulesRepo(object):\n \"\"\"\n An object to store details about the repository being used for modules.\n\n Used by the `nf-core modules` top-level command with -r and -b flags,\n so that this can be used in the same way by all sub-commands.\n \"\"\"\n\n def __init__(self, repo=\"nf-core/modules\", branch=None):\n self.name = repo\n self.branch = branch\n\n # Don't bother fetching default branch if we're using nf-core\n if not self.branch and self.name == \"nf-core/modules\":\n self.branch = \"master\"\n\n # Verify that the repo seems to be correctly configured\n if self.name != \"nf-core/modules\" or self.branch:\n\n # Get the default branch if not set\n if not self.branch:\n self.get_default_branch()\n\n try:\n self.verify_modules_repo()\n except LookupError:\n raise\n\n self.owner, self.repo = self.name.split(\"/\")\n self.modules_file_tree = {}\n self.modules_avail_module_names = []\n\n def get_default_branch(self):\n \"\"\"Get the default branch for a GitHub repo\"\"\"\n api_url = f\"https://api.github.com/repos/{self.name}\"\n response = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())\n if response.status_code == 200:\n self.branch = response.json()[\"default_branch\"]\n log.debug(f\"Found default branch to be '{self.branch}'\")\n else:\n raise LookupError(f\"Could not find repository '{self.name}' on GitHub\")\n\n def verify_modules_repo(self):\n\n # Check if name seems to be well formed\n if self.name.count(\"/\") != 1:\n raise LookupError(f\"Repository name '{self.name}' 
should be of the format '<github_user_name>/<repo_name>'\")\n\n # Check if repository exist\n api_url = f\"https://api.github.com/repos/{self.name}/branches\"\n response = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())\n if response.status_code == 200:\n branches = [branch[\"name\"] for branch in response.json()]\n if self.branch not in branches:\n raise LookupError(f\"Branch '{self.branch}' not found in '{self.name}'\")\n else:\n raise LookupError(f\"Repository '{self.name}' is not available on GitHub\")\n\n api_url = f\"https://api.github.com/repos/{self.name}/contents?ref={self.branch}\"\n response = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())\n if response.status_code == 200:\n dir_names = [entry[\"name\"] for entry in response.json() if entry[\"type\"] == \"dir\"]\n if \"modules\" not in dir_names:\n err_str = f\"Repository '{self.name}' ({self.branch}) does not contain a 'modules/' directory\"\n if \"software\" in dir_names:\n err_str += \".\\nAs of version 2.0, the 'software/' directory should be renamed to 'modules/'\"\n raise LookupError(err_str)\n else:\n raise LookupError(f\"Unable to fetch repository information from '{self.name}' ({self.branch})\")\n\n def get_modules_file_tree(self):\n \"\"\"\n Fetch the file list from the repo, using the GitHub API\n\n Sets self.modules_file_tree\n self.modules_avail_module_names\n \"\"\"\n api_url = \"https://api.github.com/repos/{}/git/trees/{}?recursive=1\".format(self.name, self.branch)\n r = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())\n if r.status_code == 404:\n raise LookupError(\"Repository / branch not found: {} ({})\\n{}\".format(self.name, self.branch, api_url))\n elif r.status_code != 200:\n raise LookupError(\n \"Could not fetch {} ({}) tree: {}\\n{}\".format(self.name, self.branch, r.status_code, api_url)\n )\n\n result = r.json()\n assert result[\"truncated\"] == False\n\n self.modules_file_tree = result[\"tree\"]\n for f in result[\"tree\"]:\n if f[\"path\"].startswith(f\"modules/\") and f[\"path\"].endswith(\"/main.nf\") and \"/test/\" not in f[\"path\"]:\n # remove modules/ and /main.nf\n self.modules_avail_module_names.append(f[\"path\"].replace(\"modules/\", \"\").replace(\"/main.nf\", \"\"))\n if len(self.modules_avail_module_names) == 0:\n raise LookupError(f\"Found no modules in '{self.name}'\")\n\n def get_module_file_urls(self, module, commit=\"\"):\n \"\"\"Fetch list of URLs for a specific module\n\n Takes the name of a module and iterates over the GitHub repo file tree.\n Loops over items that are prefixed with the path 'modules/<module_name>' and ignores\n anything that's not a blob. 
Also ignores the test/ subfolder.\n\n Returns a dictionary with keys as filenames and values as GitHub API URLs.\n These can be used to then download file contents.\n\n Args:\n module (string): Name of module for which to fetch a set of URLs\n\n Returns:\n dict: Set of files and associated URLs as follows:\n\n {\n 'modules/fastqc/main.nf': 'https://api.github.com/repos/nf-core/modules/git/blobs/65ba598119206a2b851b86a9b5880b5476e263c3',\n 'modules/fastqc/meta.yml': 'https://api.github.com/repos/nf-core/modules/git/blobs/0d5afc23ba44d44a805c35902febc0a382b17651'\n }\n \"\"\"\n results = {}\n for f in self.modules_file_tree:\n if not f[\"path\"].startswith(\"modules/{}/\".format(module)):\n continue\n if f[\"type\"] != \"blob\":\n continue\n if \"/test/\" in f[\"path\"]:\n continue\n results[f[\"path\"]] = f[\"url\"]\n if commit != \"\":\n for path in results:\n results[path] = f\"https://api.github.com/repos/{self.name}/contents/{path}?ref={commit}\"\n return results\n\n def download_gh_file(self, dl_filename, api_url):\n \"\"\"Download a file from GitHub using the GitHub API\n\n Args:\n dl_filename (string): Path to save file to\n api_url (string): GitHub API URL for file\n\n Raises:\n If a problem, raises an error\n \"\"\"\n\n # Make target directory if it doesn't already exist\n dl_directory = os.path.dirname(dl_filename)\n if not os.path.exists(dl_directory):\n os.makedirs(dl_directory)\n\n # Call the GitHub API\n r = requests.get(api_url, auth=nf_core.utils.github_api_auto_auth())\n if r.status_code != 200:\n raise LookupError(\"Could not fetch {} file: {}\\n {}\".format(self.name, r.status_code, api_url))\n result = r.json()\n file_contents = base64.b64decode(result[\"content\"])\n\n # Write the file contents\n with open(dl_filename, \"wb\") as fh:\n fh.write(file_contents)\n", "path": "nf_core/modules/modules_repo.py"}]} | 2,883 | 117 |
gh_patches_debug_42801 | rasdani/github-patches | git_diff | facebookresearch__CompilerGym-446 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Possible error in how to generate unbiased benchmarks
## 📚 Documentation
(Possible error, possibly me doing it wrong)
https://facebookresearch.github.io/CompilerGym/compiler_gym/datasets.html
The code for `random_benchmark` says:
```python
rng = np.random.default_rng()
finite_datasets = [d for d in env.datasets if len(d) != math.inf]
dataset = rng.choice(
finite_datasets,
p=[len(d) for d in finite_datasets]
)
dataset.random_benchmark(random_state=rng)
```
But, I think that if you call `len` on a generator, it will fail with error:
`TypeError: 'float' object cannot be interpreted as an integer`
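That `TypeError` comes from Python itself: `len()` requires `__len__` to return an integer, so any object that reports an infinite size as a float cannot go through `len()`. A hypothetical stand-in class (not CompilerGym code) reproduces it:

```python
import math


class InfiniteDataset:
    """Stand-in for a dataset whose benchmark count is unbounded."""

    size = math.inf

    def __len__(self):
        return self.size  # returns a float


len(InfiniteDataset())
# TypeError: 'float' object cannot be interpreted as an integer
```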
Suggest changing it to:
`d.size != math.inf`
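Put together, a corrected version of the documented snippet might look like the sketch below. It assumes `env` is an already-constructed CompilerGym environment and that `Dataset.size` holds the benchmark count; it also normalizes the weights, since `numpy.random.Generator.choice` requires `p` to sum to 1 (something the original snippet also misses), and selects an index rather than passing `Dataset` objects to NumPy:

```python
import math

import numpy as np

rng = np.random.default_rng()
finite_datasets = [d for d in env.datasets if d.size != math.inf]  # d.size, not len(d)
sizes = np.array([d.size for d in finite_datasets], dtype=float)
idx = rng.choice(len(finite_datasets), p=sizes / sizes.sum())  # p must sum to 1
benchmark = finite_datasets[idx].random_benchmark(random_state=rng)
```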
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `compiler_gym/datasets/datasets.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5 from collections import deque
6 from typing import Dict, Iterable, Optional, Set, TypeVar
7
8 import numpy as np
9
10 from compiler_gym.datasets.benchmark import Benchmark
11 from compiler_gym.datasets.dataset import Dataset
12 from compiler_gym.datasets.uri import BENCHMARK_URI_RE, resolve_uri_protocol
13
14 T = TypeVar("T")
15
16
17 def round_robin_iterables(iters: Iterable[Iterable[T]]) -> Iterable[T]:
18 """Yield from the given iterators in round robin order."""
19 # Use a queue of iterators to iterate over. Repeatedly pop an iterator from
20 # the queue, yield the next value from it, then put it at the back of the
21 # queue. The iterator is discarded once exhausted.
22 iters = deque(iters)
23 while len(iters) > 1:
24 it = iters.popleft()
25 try:
26 yield next(it)
27 iters.append(it)
28 except StopIteration:
29 pass
30 # Once we have only a single iterator left, return it directly rather
31 # continuing with the round robin.
32 if len(iters) == 1:
33 yield from iters.popleft()
34
35
36 class Datasets:
37 """A collection of datasets.
38
39 This class provides a dictionary-like interface for indexing and iterating
40 over multiple :class:`Dataset <compiler_gym.datasets.Dataset>` objects.
41 Select a dataset by URI using:
42
43 >>> env.datasets["benchmark://cbench-v1"]
44
45 Check whether a dataset exists using:
46
47 >>> "benchmark://cbench-v1" in env.datasets
48 True
49
50 Or iterate over the datasets using:
51
52 >>> for dataset in env.datasets:
53 ... print(dataset.name)
54 benchmark://cbench-v1
55 benchmark://github-v0
56 benchmark://npb-v0
57
58 To select a benchmark from the datasets, use :meth:`benchmark()`:
59
60 >>> env.datasets.benchmark("benchmark://a-v0/a")
61
62 Use the :meth:`benchmarks()` method to iterate over every benchmark in the
63 datasets in a stable round robin order:
64
65 >>> for benchmark in env.datasets.benchmarks():
66 ... print(benchmark)
67 benchmark://cbench-v1/1
68 benchmark://github-v0/1
69 benchmark://npb-v0/1
70 benchmark://cbench-v1/2
71 ...
72
73 If you want to exclude a dataset, delete it:
74
75 >>> del env.datasets["benchmark://b-v0"]
76 """
77
78 def __init__(
79 self,
80 datasets: Iterable[Dataset],
81 ):
82 self._datasets: Dict[str, Dataset] = {d.name: d for d in datasets}
83 self._visible_datasets: Set[str] = set(
84 name for name, dataset in self._datasets.items() if not dataset.deprecated
85 )
86
87 def datasets(self, with_deprecated: bool = False) -> Iterable[Dataset]:
88 """Enumerate the datasets.
89
90 Dataset order is consistent across runs.
91
92 :param with_deprecated: If :code:`True`, include datasets that have been
93 marked as deprecated.
94
95 :return: An iterable sequence of :meth:`Dataset
96 <compiler_gym.datasets.Dataset>` instances.
97 """
98 datasets = self._datasets.values()
99 if not with_deprecated:
100 datasets = (d for d in datasets if not d.deprecated)
101 yield from sorted(datasets, key=lambda d: (d.sort_order, d.name))
102
103 def __iter__(self) -> Iterable[Dataset]:
104 """Iterate over the datasets.
105
106 Dataset order is consistent across runs.
107
108 Equivalent to :meth:`datasets.datasets()
109 <compiler_gym.datasets.Dataset.datasets>`, but without the ability to
110 iterate over the deprecated datasets.
111
112 If the number of benchmarks in any of the datasets is infinite
113 (:code:`len(dataset) == math.inf`), the iterable returned by this method
114 will continue indefinitely.
115
116 :return: An iterable sequence of :meth:`Dataset
117 <compiler_gym.datasets.Dataset>` instances.
118 """
119 return self.datasets()
120
121 def dataset(self, dataset: str) -> Dataset:
122 """Get a dataset.
123
124 Return the corresponding :meth:`Dataset
125 <compiler_gym.datasets.Dataset>`. Name lookup will succeed whether or
126 not the dataset is deprecated.
127
128 :param dataset: A dataset name.
129
130 :return: A :meth:`Dataset <compiler_gym.datasets.Dataset>` instance.
131
132 :raises LookupError: If :code:`dataset` is not found.
133 """
134 dataset_name = resolve_uri_protocol(dataset)
135
136 if dataset_name not in self._datasets:
137 raise LookupError(f"Dataset not found: {dataset_name}")
138
139 return self._datasets[dataset_name]
140
141 def __getitem__(self, dataset: str) -> Dataset:
142 """Lookup a dataset.
143
144 :param dataset: A dataset name.
145
146 :return: A :meth:`Dataset <compiler_gym.datasets.Dataset>` instance.
147
148 :raises LookupError: If :code:`dataset` is not found.
149 """
150 return self.dataset(dataset)
151
152 def __setitem__(self, key: str, dataset: Dataset):
153 """Add a dataset to the collection.
154
155 :param key: The name of the dataset.
156 :param dataset: The dataset to add.
157 """
158 dataset_name = resolve_uri_protocol(key)
159
160 self._datasets[dataset_name] = dataset
161 if not dataset.deprecated:
162 self._visible_datasets.add(dataset_name)
163
164 def __delitem__(self, dataset: str):
165 """Remove a dataset from the collection.
166
167 This does not affect any underlying storage used by dataset. See
168 :meth:`uninstall() <compiler_gym.datasets.Datasets.uninstall>` to clean
169 up.
170
171 :param dataset: The name of a dataset.
172
173 :return: :code:`True` if the dataset was removed, :code:`False` if it
174 was already removed.
175 """
176 dataset_name = resolve_uri_protocol(dataset)
177 if dataset_name in self._visible_datasets:
178 self._visible_datasets.remove(dataset_name)
179 del self._datasets[dataset_name]
180
181 def __contains__(self, dataset: str) -> bool:
182 """Returns whether the dataset is contained."""
183 try:
184 self.dataset(dataset)
185 return True
186 except LookupError:
187 return False
188
189 def benchmarks(self, with_deprecated: bool = False) -> Iterable[Benchmark]:
190 """Enumerate the (possibly infinite) benchmarks lazily.
191
192 Benchmarks order is consistent across runs. One benchmark from each
193 dataset is returned in round robin order until all datasets have been
194 fully enumerated. The order of :meth:`benchmarks()
195 <compiler_gym.datasets.Datasets.benchmarks>` and :meth:`benchmark_uris()
196 <compiler_gym.datasets.Datasets.benchmark_uris>` is the same.
197
198 If the number of benchmarks in any of the datasets is infinite
199 (:code:`len(dataset) == math.inf`), the iterable returned by this method
200 will continue indefinitely.
201
202 :param with_deprecated: If :code:`True`, include benchmarks from
203 datasets that have been marked deprecated.
204
205 :return: An iterable sequence of :class:`Benchmark
206 <compiler_gym.datasets.Benchmark>` instances.
207 """
208 return round_robin_iterables(
209 (d.benchmarks() for d in self.datasets(with_deprecated=with_deprecated))
210 )
211
212 def benchmark_uris(self, with_deprecated: bool = False) -> Iterable[str]:
213 """Enumerate the (possibly infinite) benchmark URIs.
214
215 Benchmark URI order is consistent across runs. URIs from datasets are
216 returned in round robin order. The order of :meth:`benchmarks()
217 <compiler_gym.datasets.Datasets.benchmarks>` and :meth:`benchmark_uris()
218 <compiler_gym.datasets.Datasets.benchmark_uris>` is the same.
219
220 If the number of benchmarks in any of the datasets is infinite
221 (:code:`len(dataset) == math.inf`), the iterable returned by this method
222 will continue indefinitely.
223
224 :param with_deprecated: If :code:`True`, include benchmarks from
225 datasets that have been marked deprecated.
226
227 :return: An iterable sequence of benchmark URI strings.
228 """
229 return round_robin_iterables(
230 (d.benchmark_uris() for d in self.datasets(with_deprecated=with_deprecated))
231 )
232
233 def benchmark(self, uri: str) -> Benchmark:
234 """Select a benchmark.
235
236 Returns the corresponding :class:`Benchmark
237 <compiler_gym.datasets.Benchmark>`, regardless of whether the containing
238 dataset is installed or deprecated.
239
240 :param uri: The URI of the benchmark to return.
241
242 :return: A :class:`Benchmark <compiler_gym.datasets.Benchmark>`
243 instance.
244 """
245 uri = resolve_uri_protocol(uri)
246
247 match = BENCHMARK_URI_RE.match(uri)
248 if not match:
249 raise ValueError(f"Invalid benchmark URI: '{uri}'")
250
251 dataset_name = match.group("dataset")
252 dataset = self._datasets[dataset_name]
253
254 return dataset.benchmark(uri)
255
256 def random_benchmark(
257 self, random_state: Optional[np.random.Generator] = None
258 ) -> Benchmark:
259 """Select a benchmark randomly.
260
261 First, a dataset is selected uniformly randomly using
262 :code:`random_state.choice(list(datasets))`. The
263 :meth:`random_benchmark()
264 <compiler_gym.datasets.Dataset.random_benchmark>` method of that dataset
265 is then called to select a benchmark.
266
267 Note that the distribution of benchmarks selected by this method is not
268 biased by the size of each dataset, since datasets are selected
269 uniformly. This means that datasets with a small number of benchmarks
270 will be overrepresented compared to datasets with many benchmarks. To
271 correct for this bias, use the number of benchmarks in each dataset as
272 a weight for the random selection:
273
274 >>> rng = np.random.default_rng()
275 >>> finite_datasets = [d for d in env.datasets if len(d) != math.inf]
276 >>> dataset = rng.choice(
277 finite_datasets,
278 p=[len(d) for d in finite_datasets]
279 )
280 >>> dataset.random_benchmark(random_state=rng)
281
282 :param random_state: A random number generator. If not provided, a
283 default :code:`np.random.default_rng()` is used.
284
285 :return: A :class:`Benchmark <compiler_gym.datasets.Benchmark>`
286 instance.
287 """
288 random_state = random_state or np.random.default_rng()
289 dataset = random_state.choice(list(self._visible_datasets))
290 return self[dataset].random_benchmark(random_state=random_state)
291
292 @property
293 def size(self) -> int:
294 return len(self._visible_datasets)
295
296 def __len__(self) -> int:
297 """The number of datasets in the collection."""
298 return self.size
299
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/compiler_gym/datasets/datasets.py b/compiler_gym/datasets/datasets.py
--- a/compiler_gym/datasets/datasets.py
+++ b/compiler_gym/datasets/datasets.py
@@ -3,7 +3,7 @@
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from collections import deque
-from typing import Dict, Iterable, Optional, Set, TypeVar
+from typing import Dict, Iterable, List, Optional, Set, TypeVar
import numpy as np
@@ -254,39 +254,64 @@
return dataset.benchmark(uri)
def random_benchmark(
- self, random_state: Optional[np.random.Generator] = None
+ self,
+ random_state: Optional[np.random.Generator] = None,
+ weighted: bool = False,
+ weights: Optional[Dict[str, float]] = None,
) -> Benchmark:
"""Select a benchmark randomly.
- First, a dataset is selected uniformly randomly using
- :code:`random_state.choice(list(datasets))`. The
+ First, a dataset is selected randomly using
+ :code:`random_state.choice(list(datasets))`. Then the
:meth:`random_benchmark()
- <compiler_gym.datasets.Dataset.random_benchmark>` method of that dataset
- is then called to select a benchmark.
-
- Note that the distribution of benchmarks selected by this method is not
- biased by the size of each dataset, since datasets are selected
- uniformly. This means that datasets with a small number of benchmarks
- will be overrepresented compared to datasets with many benchmarks. To
- correct for this bias, use the number of benchmarks in each dataset as
- a weight for the random selection:
-
- >>> rng = np.random.default_rng()
- >>> finite_datasets = [d for d in env.datasets if len(d) != math.inf]
- >>> dataset = rng.choice(
- finite_datasets,
- p=[len(d) for d in finite_datasets]
- )
- >>> dataset.random_benchmark(random_state=rng)
+ <compiler_gym.datasets.Dataset.random_benchmark>` method of the chosen
+ dataset is called to select a benchmark.
+
+ By default datasets are selected uniformly randomly. This means that
+ datasets with a small number of benchmarks will be overrepresented
+ compared to datasets with many benchmarks. To correct for this bias pass
+ the argument :code:`weighted=True`, which weights the dataset choice by
+ the number of benchmarks in each dataset, equivalent to:
+
+ >>> random.choices(datasets, weights=[len(p) for p in datasets])
+
+ Weighting the choice of datasets by their size means that datasets with
+ infinite sizes (such as random program generators) will be excluded from
+ sampling as their size is :code:`0`. To override the weights of datasets
+ pass a :code:`weights` mapping:
+
+ >>> env.datasets.random_benchmark(weighted=True, weights={
+ "benchmark://dataset-v0": 10,
+ "benchmark://another-dataset-v0": 555,
+ })
:param random_state: A random number generator. If not provided, a
default :code:`np.random.default_rng()` is used.
+ :param weighted: If set, weight the choice of dataset by the number of
+ benchmarks in each dataset, or the value specified in the
+ :code:`weights` mapping.
+
+ :param weights: An optional mapping from dataset URI to the weight to
+ use when :code:`weighted=True`. This overrides the default value of
+ using the dataset size.
+
:return: A :class:`Benchmark <compiler_gym.datasets.Benchmark>`
instance.
"""
random_state = random_state or np.random.default_rng()
- dataset = random_state.choice(list(self._visible_datasets))
+ datasets: List[str] = list(self._visible_datasets)
+ # Assume weighted=True if weights dictionary is specified.
+ weighted = weighted or weights
+
+ if weighted:
+ weights: Dict[str, float] = weights or {}
+ w: List[float] = np.array(
+ [weights.get(d, self[d].size) for d in datasets], dtype=float
+ )
+ dataset = random_state.choice(datasets, p=w / w.sum())
+ else:
+ dataset = random_state.choice(datasets)
return self[dataset].random_benchmark(random_state=random_state)
@property
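Based on the patch above, callers can then opt into size-weighted sampling directly instead of hand-rolling the snippet from the docs. A usage sketch, assuming `env` is an initialized CompilerGym environment:
```python
import numpy as np

rng = np.random.default_rng(0)

# Default: datasets are chosen uniformly, as before.
benchmark = env.datasets.random_benchmark(random_state=rng)

# Weight the dataset choice by benchmark count; per the docstring, datasets of
# infinite size drop out because their size is reported as 0 under this weighting.
benchmark = env.datasets.random_benchmark(random_state=rng, weighted=True)

# Supplying explicit weights implies weighted=True.
benchmark = env.datasets.random_benchmark(
    random_state=rng,
    weights={"benchmark://cbench-v1": 10, "benchmark://github-v0": 1},
)
```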
| {"golden_diff": "diff --git a/compiler_gym/datasets/datasets.py b/compiler_gym/datasets/datasets.py\n--- a/compiler_gym/datasets/datasets.py\n+++ b/compiler_gym/datasets/datasets.py\n@@ -3,7 +3,7 @@\n # This source code is licensed under the MIT license found in the\n # LICENSE file in the root directory of this source tree.\n from collections import deque\n-from typing import Dict, Iterable, Optional, Set, TypeVar\n+from typing import Dict, Iterable, List, Optional, Set, TypeVar\n \n import numpy as np\n \n@@ -254,39 +254,64 @@\n return dataset.benchmark(uri)\n \n def random_benchmark(\n- self, random_state: Optional[np.random.Generator] = None\n+ self,\n+ random_state: Optional[np.random.Generator] = None,\n+ weighted: bool = False,\n+ weights: Optional[Dict[str, float]] = None,\n ) -> Benchmark:\n \"\"\"Select a benchmark randomly.\n \n- First, a dataset is selected uniformly randomly using\n- :code:`random_state.choice(list(datasets))`. The\n+ First, a dataset is selected randomly using\n+ :code:`random_state.choice(list(datasets))`. Then the\n :meth:`random_benchmark()\n- <compiler_gym.datasets.Dataset.random_benchmark>` method of that dataset\n- is then called to select a benchmark.\n-\n- Note that the distribution of benchmarks selected by this method is not\n- biased by the size of each dataset, since datasets are selected\n- uniformly. This means that datasets with a small number of benchmarks\n- will be overrepresented compared to datasets with many benchmarks. To\n- correct for this bias, use the number of benchmarks in each dataset as\n- a weight for the random selection:\n-\n- >>> rng = np.random.default_rng()\n- >>> finite_datasets = [d for d in env.datasets if len(d) != math.inf]\n- >>> dataset = rng.choice(\n- finite_datasets,\n- p=[len(d) for d in finite_datasets]\n- )\n- >>> dataset.random_benchmark(random_state=rng)\n+ <compiler_gym.datasets.Dataset.random_benchmark>` method of the chosen\n+ dataset is called to select a benchmark.\n+\n+ By default datasets are selected uniformly randomly. This means that\n+ datasets with a small number of benchmarks will be overrepresented\n+ compared to datasets with many benchmarks. To correct for this bias pass\n+ the argument :code:`weighted=True`, which weights the dataset choice by\n+ the number of benchmarks in each dataset, equivalent to:\n+\n+ >>> random.choices(datasets, weights=[len(p) for p in datasets])\n+\n+ Weighting the choice of datasets by their size means that datasets with\n+ infinite sizes (such as random program generators) will be excluded from\n+ sampling as their size is :code:`0`. To override the weights of datasets\n+ pass a :code:`weights` mapping:\n+\n+ >>> env.datasets.random_benchmark(weighted=True, weights={\n+ \"benchmark://dataset-v0\": 10,\n+ \"benchmark://another-dataset-v0\": 555,\n+ })\n \n :param random_state: A random number generator. If not provided, a\n default :code:`np.random.default_rng()` is used.\n \n+ :param weighted: If set, weight the choice of dataset by the number of\n+ benchmarks in each dataset, or the value specified in the\n+ :code:`weights` mapping.\n+\n+ :param weights: An optional mapping from dataset URI to the weight to\n+ use when :code:`weighted=True`. 
This overrides the default value of\n+ using the dataset size.\n+\n :return: A :class:`Benchmark <compiler_gym.datasets.Benchmark>`\n instance.\n \"\"\"\n random_state = random_state or np.random.default_rng()\n- dataset = random_state.choice(list(self._visible_datasets))\n+ datasets: List[str] = list(self._visible_datasets)\n+ # Assume weighted=True if weights dictionary is specified.\n+ weighted = weighted or weights\n+\n+ if weighted:\n+ weights: Dict[str, float] = weights or {}\n+ w: List[float] = np.array(\n+ [weights.get(d, self[d].size) for d in datasets], dtype=float\n+ )\n+ dataset = random_state.choice(datasets, p=w / w.sum())\n+ else:\n+ dataset = random_state.choice(datasets)\n return self[dataset].random_benchmark(random_state=random_state)\n \n @property\n", "issue": "Possible error in how to generate unbiased benchmarks\n## \ud83d\udcda Documentation\r\n(Possible error, possibly me doing it wrong)\r\nhttps://facebookresearch.github.io/CompilerGym/compiler_gym/datasets.html\r\n\r\nThe code for `random_benchmark` says:\r\n\r\n```python\r\nrng = np.random.default_rng()\r\nfinite_datasets = [d for d in env.datasets if len(d) != math.inf]\r\ndataset = rng.choice(\r\n finite_datasets,\r\n p=[len(d) for d in finite_datasets]\r\n)\r\ndataset.random_benchmark(random_state=rng)\r\n```\r\n\r\nBut, I think that if you call `len` on a generator, it will fail with error:\r\n`TypeError: 'float' object cannot be interpreted as an integer`\r\n\r\nSuggest changing it to:\r\n`d.size != math.inf`\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\nfrom collections import deque\nfrom typing import Dict, Iterable, Optional, Set, TypeVar\n\nimport numpy as np\n\nfrom compiler_gym.datasets.benchmark import Benchmark\nfrom compiler_gym.datasets.dataset import Dataset\nfrom compiler_gym.datasets.uri import BENCHMARK_URI_RE, resolve_uri_protocol\n\nT = TypeVar(\"T\")\n\n\ndef round_robin_iterables(iters: Iterable[Iterable[T]]) -> Iterable[T]:\n \"\"\"Yield from the given iterators in round robin order.\"\"\"\n # Use a queue of iterators to iterate over. Repeatedly pop an iterator from\n # the queue, yield the next value from it, then put it at the back of the\n # queue. The iterator is discarded once exhausted.\n iters = deque(iters)\n while len(iters) > 1:\n it = iters.popleft()\n try:\n yield next(it)\n iters.append(it)\n except StopIteration:\n pass\n # Once we have only a single iterator left, return it directly rather\n # continuing with the round robin.\n if len(iters) == 1:\n yield from iters.popleft()\n\n\nclass Datasets:\n \"\"\"A collection of datasets.\n\n This class provides a dictionary-like interface for indexing and iterating\n over multiple :class:`Dataset <compiler_gym.datasets.Dataset>` objects.\n Select a dataset by URI using:\n\n >>> env.datasets[\"benchmark://cbench-v1\"]\n\n Check whether a dataset exists using:\n\n >>> \"benchmark://cbench-v1\" in env.datasets\n True\n\n Or iterate over the datasets using:\n\n >>> for dataset in env.datasets:\n ... 
print(dataset.name)\n benchmark://cbench-v1\n benchmark://github-v0\n benchmark://npb-v0\n\n To select a benchmark from the datasets, use :meth:`benchmark()`:\n\n >>> env.datasets.benchmark(\"benchmark://a-v0/a\")\n\n Use the :meth:`benchmarks()` method to iterate over every benchmark in the\n datasets in a stable round robin order:\n\n >>> for benchmark in env.datasets.benchmarks():\n ... print(benchmark)\n benchmark://cbench-v1/1\n benchmark://github-v0/1\n benchmark://npb-v0/1\n benchmark://cbench-v1/2\n ...\n\n If you want to exclude a dataset, delete it:\n\n >>> del env.datasets[\"benchmark://b-v0\"]\n \"\"\"\n\n def __init__(\n self,\n datasets: Iterable[Dataset],\n ):\n self._datasets: Dict[str, Dataset] = {d.name: d for d in datasets}\n self._visible_datasets: Set[str] = set(\n name for name, dataset in self._datasets.items() if not dataset.deprecated\n )\n\n def datasets(self, with_deprecated: bool = False) -> Iterable[Dataset]:\n \"\"\"Enumerate the datasets.\n\n Dataset order is consistent across runs.\n\n :param with_deprecated: If :code:`True`, include datasets that have been\n marked as deprecated.\n\n :return: An iterable sequence of :meth:`Dataset\n <compiler_gym.datasets.Dataset>` instances.\n \"\"\"\n datasets = self._datasets.values()\n if not with_deprecated:\n datasets = (d for d in datasets if not d.deprecated)\n yield from sorted(datasets, key=lambda d: (d.sort_order, d.name))\n\n def __iter__(self) -> Iterable[Dataset]:\n \"\"\"Iterate over the datasets.\n\n Dataset order is consistent across runs.\n\n Equivalent to :meth:`datasets.datasets()\n <compiler_gym.datasets.Dataset.datasets>`, but without the ability to\n iterate over the deprecated datasets.\n\n If the number of benchmarks in any of the datasets is infinite\n (:code:`len(dataset) == math.inf`), the iterable returned by this method\n will continue indefinitely.\n\n :return: An iterable sequence of :meth:`Dataset\n <compiler_gym.datasets.Dataset>` instances.\n \"\"\"\n return self.datasets()\n\n def dataset(self, dataset: str) -> Dataset:\n \"\"\"Get a dataset.\n\n Return the corresponding :meth:`Dataset\n <compiler_gym.datasets.Dataset>`. Name lookup will succeed whether or\n not the dataset is deprecated.\n\n :param dataset: A dataset name.\n\n :return: A :meth:`Dataset <compiler_gym.datasets.Dataset>` instance.\n\n :raises LookupError: If :code:`dataset` is not found.\n \"\"\"\n dataset_name = resolve_uri_protocol(dataset)\n\n if dataset_name not in self._datasets:\n raise LookupError(f\"Dataset not found: {dataset_name}\")\n\n return self._datasets[dataset_name]\n\n def __getitem__(self, dataset: str) -> Dataset:\n \"\"\"Lookup a dataset.\n\n :param dataset: A dataset name.\n\n :return: A :meth:`Dataset <compiler_gym.datasets.Dataset>` instance.\n\n :raises LookupError: If :code:`dataset` is not found.\n \"\"\"\n return self.dataset(dataset)\n\n def __setitem__(self, key: str, dataset: Dataset):\n \"\"\"Add a dataset to the collection.\n\n :param key: The name of the dataset.\n :param dataset: The dataset to add.\n \"\"\"\n dataset_name = resolve_uri_protocol(key)\n\n self._datasets[dataset_name] = dataset\n if not dataset.deprecated:\n self._visible_datasets.add(dataset_name)\n\n def __delitem__(self, dataset: str):\n \"\"\"Remove a dataset from the collection.\n\n This does not affect any underlying storage used by dataset. 
See\n :meth:`uninstall() <compiler_gym.datasets.Datasets.uninstall>` to clean\n up.\n\n :param dataset: The name of a dataset.\n\n :return: :code:`True` if the dataset was removed, :code:`False` if it\n was already removed.\n \"\"\"\n dataset_name = resolve_uri_protocol(dataset)\n if dataset_name in self._visible_datasets:\n self._visible_datasets.remove(dataset_name)\n del self._datasets[dataset_name]\n\n def __contains__(self, dataset: str) -> bool:\n \"\"\"Returns whether the dataset is contained.\"\"\"\n try:\n self.dataset(dataset)\n return True\n except LookupError:\n return False\n\n def benchmarks(self, with_deprecated: bool = False) -> Iterable[Benchmark]:\n \"\"\"Enumerate the (possibly infinite) benchmarks lazily.\n\n Benchmarks order is consistent across runs. One benchmark from each\n dataset is returned in round robin order until all datasets have been\n fully enumerated. The order of :meth:`benchmarks()\n <compiler_gym.datasets.Datasets.benchmarks>` and :meth:`benchmark_uris()\n <compiler_gym.datasets.Datasets.benchmark_uris>` is the same.\n\n If the number of benchmarks in any of the datasets is infinite\n (:code:`len(dataset) == math.inf`), the iterable returned by this method\n will continue indefinitely.\n\n :param with_deprecated: If :code:`True`, include benchmarks from\n datasets that have been marked deprecated.\n\n :return: An iterable sequence of :class:`Benchmark\n <compiler_gym.datasets.Benchmark>` instances.\n \"\"\"\n return round_robin_iterables(\n (d.benchmarks() for d in self.datasets(with_deprecated=with_deprecated))\n )\n\n def benchmark_uris(self, with_deprecated: bool = False) -> Iterable[str]:\n \"\"\"Enumerate the (possibly infinite) benchmark URIs.\n\n Benchmark URI order is consistent across runs. URIs from datasets are\n returned in round robin order. The order of :meth:`benchmarks()\n <compiler_gym.datasets.Datasets.benchmarks>` and :meth:`benchmark_uris()\n <compiler_gym.datasets.Datasets.benchmark_uris>` is the same.\n\n If the number of benchmarks in any of the datasets is infinite\n (:code:`len(dataset) == math.inf`), the iterable returned by this method\n will continue indefinitely.\n\n :param with_deprecated: If :code:`True`, include benchmarks from\n datasets that have been marked deprecated.\n\n :return: An iterable sequence of benchmark URI strings.\n \"\"\"\n return round_robin_iterables(\n (d.benchmark_uris() for d in self.datasets(with_deprecated=with_deprecated))\n )\n\n def benchmark(self, uri: str) -> Benchmark:\n \"\"\"Select a benchmark.\n\n Returns the corresponding :class:`Benchmark\n <compiler_gym.datasets.Benchmark>`, regardless of whether the containing\n dataset is installed or deprecated.\n\n :param uri: The URI of the benchmark to return.\n\n :return: A :class:`Benchmark <compiler_gym.datasets.Benchmark>`\n instance.\n \"\"\"\n uri = resolve_uri_protocol(uri)\n\n match = BENCHMARK_URI_RE.match(uri)\n if not match:\n raise ValueError(f\"Invalid benchmark URI: '{uri}'\")\n\n dataset_name = match.group(\"dataset\")\n dataset = self._datasets[dataset_name]\n\n return dataset.benchmark(uri)\n\n def random_benchmark(\n self, random_state: Optional[np.random.Generator] = None\n ) -> Benchmark:\n \"\"\"Select a benchmark randomly.\n\n First, a dataset is selected uniformly randomly using\n :code:`random_state.choice(list(datasets))`. 
The\n :meth:`random_benchmark()\n <compiler_gym.datasets.Dataset.random_benchmark>` method of that dataset\n is then called to select a benchmark.\n\n Note that the distribution of benchmarks selected by this method is not\n biased by the size of each dataset, since datasets are selected\n uniformly. This means that datasets with a small number of benchmarks\n will be overrepresented compared to datasets with many benchmarks. To\n correct for this bias, use the number of benchmarks in each dataset as\n a weight for the random selection:\n\n >>> rng = np.random.default_rng()\n >>> finite_datasets = [d for d in env.datasets if len(d) != math.inf]\n >>> dataset = rng.choice(\n finite_datasets,\n p=[len(d) for d in finite_datasets]\n )\n >>> dataset.random_benchmark(random_state=rng)\n\n :param random_state: A random number generator. If not provided, a\n default :code:`np.random.default_rng()` is used.\n\n :return: A :class:`Benchmark <compiler_gym.datasets.Benchmark>`\n instance.\n \"\"\"\n random_state = random_state or np.random.default_rng()\n dataset = random_state.choice(list(self._visible_datasets))\n return self[dataset].random_benchmark(random_state=random_state)\n\n @property\n def size(self) -> int:\n return len(self._visible_datasets)\n\n def __len__(self) -> int:\n \"\"\"The number of datasets in the collection.\"\"\"\n return self.size\n", "path": "compiler_gym/datasets/datasets.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\nfrom collections import deque\nfrom typing import Dict, Iterable, List, Optional, Set, TypeVar\n\nimport numpy as np\n\nfrom compiler_gym.datasets.benchmark import Benchmark\nfrom compiler_gym.datasets.dataset import Dataset\nfrom compiler_gym.datasets.uri import BENCHMARK_URI_RE, resolve_uri_protocol\n\nT = TypeVar(\"T\")\n\n\ndef round_robin_iterables(iters: Iterable[Iterable[T]]) -> Iterable[T]:\n \"\"\"Yield from the given iterators in round robin order.\"\"\"\n # Use a queue of iterators to iterate over. Repeatedly pop an iterator from\n # the queue, yield the next value from it, then put it at the back of the\n # queue. The iterator is discarded once exhausted.\n iters = deque(iters)\n while len(iters) > 1:\n it = iters.popleft()\n try:\n yield next(it)\n iters.append(it)\n except StopIteration:\n pass\n # Once we have only a single iterator left, return it directly rather\n # continuing with the round robin.\n if len(iters) == 1:\n yield from iters.popleft()\n\n\nclass Datasets:\n \"\"\"A collection of datasets.\n\n This class provides a dictionary-like interface for indexing and iterating\n over multiple :class:`Dataset <compiler_gym.datasets.Dataset>` objects.\n Select a dataset by URI using:\n\n >>> env.datasets[\"benchmark://cbench-v1\"]\n\n Check whether a dataset exists using:\n\n >>> \"benchmark://cbench-v1\" in env.datasets\n True\n\n Or iterate over the datasets using:\n\n >>> for dataset in env.datasets:\n ... print(dataset.name)\n benchmark://cbench-v1\n benchmark://github-v0\n benchmark://npb-v0\n\n To select a benchmark from the datasets, use :meth:`benchmark()`:\n\n >>> env.datasets.benchmark(\"benchmark://a-v0/a\")\n\n Use the :meth:`benchmarks()` method to iterate over every benchmark in the\n datasets in a stable round robin order:\n\n >>> for benchmark in env.datasets.benchmarks():\n ... 
print(benchmark)\n benchmark://cbench-v1/1\n benchmark://github-v0/1\n benchmark://npb-v0/1\n benchmark://cbench-v1/2\n ...\n\n If you want to exclude a dataset, delete it:\n\n >>> del env.datasets[\"benchmark://b-v0\"]\n \"\"\"\n\n def __init__(\n self,\n datasets: Iterable[Dataset],\n ):\n self._datasets: Dict[str, Dataset] = {d.name: d for d in datasets}\n self._visible_datasets: Set[str] = set(\n name for name, dataset in self._datasets.items() if not dataset.deprecated\n )\n\n def datasets(self, with_deprecated: bool = False) -> Iterable[Dataset]:\n \"\"\"Enumerate the datasets.\n\n Dataset order is consistent across runs.\n\n :param with_deprecated: If :code:`True`, include datasets that have been\n marked as deprecated.\n\n :return: An iterable sequence of :meth:`Dataset\n <compiler_gym.datasets.Dataset>` instances.\n \"\"\"\n datasets = self._datasets.values()\n if not with_deprecated:\n datasets = (d for d in datasets if not d.deprecated)\n yield from sorted(datasets, key=lambda d: (d.sort_order, d.name))\n\n def __iter__(self) -> Iterable[Dataset]:\n \"\"\"Iterate over the datasets.\n\n Dataset order is consistent across runs.\n\n Equivalent to :meth:`datasets.datasets()\n <compiler_gym.datasets.Dataset.datasets>`, but without the ability to\n iterate over the deprecated datasets.\n\n If the number of benchmarks in any of the datasets is infinite\n (:code:`len(dataset) == math.inf`), the iterable returned by this method\n will continue indefinitely.\n\n :return: An iterable sequence of :meth:`Dataset\n <compiler_gym.datasets.Dataset>` instances.\n \"\"\"\n return self.datasets()\n\n def dataset(self, dataset: str) -> Dataset:\n \"\"\"Get a dataset.\n\n Return the corresponding :meth:`Dataset\n <compiler_gym.datasets.Dataset>`. Name lookup will succeed whether or\n not the dataset is deprecated.\n\n :param dataset: A dataset name.\n\n :return: A :meth:`Dataset <compiler_gym.datasets.Dataset>` instance.\n\n :raises LookupError: If :code:`dataset` is not found.\n \"\"\"\n dataset_name = resolve_uri_protocol(dataset)\n\n if dataset_name not in self._datasets:\n raise LookupError(f\"Dataset not found: {dataset_name}\")\n\n return self._datasets[dataset_name]\n\n def __getitem__(self, dataset: str) -> Dataset:\n \"\"\"Lookup a dataset.\n\n :param dataset: A dataset name.\n\n :return: A :meth:`Dataset <compiler_gym.datasets.Dataset>` instance.\n\n :raises LookupError: If :code:`dataset` is not found.\n \"\"\"\n return self.dataset(dataset)\n\n def __setitem__(self, key: str, dataset: Dataset):\n \"\"\"Add a dataset to the collection.\n\n :param key: The name of the dataset.\n :param dataset: The dataset to add.\n \"\"\"\n dataset_name = resolve_uri_protocol(key)\n\n self._datasets[dataset_name] = dataset\n if not dataset.deprecated:\n self._visible_datasets.add(dataset_name)\n\n def __delitem__(self, dataset: str):\n \"\"\"Remove a dataset from the collection.\n\n This does not affect any underlying storage used by dataset. 
See\n :meth:`uninstall() <compiler_gym.datasets.Datasets.uninstall>` to clean\n up.\n\n :param dataset: The name of a dataset.\n\n :return: :code:`True` if the dataset was removed, :code:`False` if it\n was already removed.\n \"\"\"\n dataset_name = resolve_uri_protocol(dataset)\n if dataset_name in self._visible_datasets:\n self._visible_datasets.remove(dataset_name)\n del self._datasets[dataset_name]\n\n def __contains__(self, dataset: str) -> bool:\n \"\"\"Returns whether the dataset is contained.\"\"\"\n try:\n self.dataset(dataset)\n return True\n except LookupError:\n return False\n\n def benchmarks(self, with_deprecated: bool = False) -> Iterable[Benchmark]:\n \"\"\"Enumerate the (possibly infinite) benchmarks lazily.\n\n Benchmarks order is consistent across runs. One benchmark from each\n dataset is returned in round robin order until all datasets have been\n fully enumerated. The order of :meth:`benchmarks()\n <compiler_gym.datasets.Datasets.benchmarks>` and :meth:`benchmark_uris()\n <compiler_gym.datasets.Datasets.benchmark_uris>` is the same.\n\n If the number of benchmarks in any of the datasets is infinite\n (:code:`len(dataset) == math.inf`), the iterable returned by this method\n will continue indefinitely.\n\n :param with_deprecated: If :code:`True`, include benchmarks from\n datasets that have been marked deprecated.\n\n :return: An iterable sequence of :class:`Benchmark\n <compiler_gym.datasets.Benchmark>` instances.\n \"\"\"\n return round_robin_iterables(\n (d.benchmarks() for d in self.datasets(with_deprecated=with_deprecated))\n )\n\n def benchmark_uris(self, with_deprecated: bool = False) -> Iterable[str]:\n \"\"\"Enumerate the (possibly infinite) benchmark URIs.\n\n Benchmark URI order is consistent across runs. URIs from datasets are\n returned in round robin order. The order of :meth:`benchmarks()\n <compiler_gym.datasets.Datasets.benchmarks>` and :meth:`benchmark_uris()\n <compiler_gym.datasets.Datasets.benchmark_uris>` is the same.\n\n If the number of benchmarks in any of the datasets is infinite\n (:code:`len(dataset) == math.inf`), the iterable returned by this method\n will continue indefinitely.\n\n :param with_deprecated: If :code:`True`, include benchmarks from\n datasets that have been marked deprecated.\n\n :return: An iterable sequence of benchmark URI strings.\n \"\"\"\n return round_robin_iterables(\n (d.benchmark_uris() for d in self.datasets(with_deprecated=with_deprecated))\n )\n\n def benchmark(self, uri: str) -> Benchmark:\n \"\"\"Select a benchmark.\n\n Returns the corresponding :class:`Benchmark\n <compiler_gym.datasets.Benchmark>`, regardless of whether the containing\n dataset is installed or deprecated.\n\n :param uri: The URI of the benchmark to return.\n\n :return: A :class:`Benchmark <compiler_gym.datasets.Benchmark>`\n instance.\n \"\"\"\n uri = resolve_uri_protocol(uri)\n\n match = BENCHMARK_URI_RE.match(uri)\n if not match:\n raise ValueError(f\"Invalid benchmark URI: '{uri}'\")\n\n dataset_name = match.group(\"dataset\")\n dataset = self._datasets[dataset_name]\n\n return dataset.benchmark(uri)\n\n def random_benchmark(\n self,\n random_state: Optional[np.random.Generator] = None,\n weighted: bool = False,\n weights: Optional[Dict[str, float]] = None,\n ) -> Benchmark:\n \"\"\"Select a benchmark randomly.\n\n First, a dataset is selected randomly using\n :code:`random_state.choice(list(datasets))`. 
Then the\n :meth:`random_benchmark()\n <compiler_gym.datasets.Dataset.random_benchmark>` method of the chosen\n dataset is called to select a benchmark.\n\n By default datasets are selected uniformly randomly. This means that\n datasets with a small number of benchmarks will be overrepresented\n compared to datasets with many benchmarks. To correct for this bias pass\n the argument :code:`weighted=True`, which weights the dataset choice by\n the number of benchmarks in each dataset, equivalent to:\n\n >>> random.choices(datasets, weights=[len(p) for p in datasets])\n\n Weighting the choice of datasets by their size means that datasets with\n infinite sizes (such as random program generators) will be excluded from\n sampling as their size is :code:`0`. To override the weights of datasets\n pass a :code:`weights` mapping:\n\n >>> env.datasets.random_benchmark(weighted=True, weights={\n \"benchmark://dataset-v0\": 10,\n \"benchmark://another-dataset-v0\": 555,\n })\n\n :param random_state: A random number generator. If not provided, a\n default :code:`np.random.default_rng()` is used.\n\n :param weighted: If set, weight the choice of dataset by the number of\n benchmarks in each dataset, or the value specified in the\n :code:`weights` mapping.\n\n :param weights: An optional mapping from dataset URI to the weight to\n use when :code:`weighted=True`. This overrides the default value of\n using the dataset size.\n\n :return: A :class:`Benchmark <compiler_gym.datasets.Benchmark>`\n instance.\n \"\"\"\n random_state = random_state or np.random.default_rng()\n datasets: List[str] = list(self._visible_datasets)\n # Assume weighted=True if weights dictionary is specified.\n weighted = weighted or weights\n\n if weighted:\n weights: Dict[str, float] = weights or {}\n w: List[float] = np.array(\n [weights.get(d, self[d].size) for d in datasets], dtype=float\n )\n dataset = random_state.choice(datasets, p=w / w.sum())\n else:\n dataset = random_state.choice(datasets)\n return self[dataset].random_benchmark(random_state=random_state)\n\n @property\n def size(self) -> int:\n return len(self._visible_datasets)\n\n def __len__(self) -> int:\n \"\"\"The number of datasets in the collection.\"\"\"\n return self.size\n", "path": "compiler_gym/datasets/datasets.py"}]} | 3,594 | 1,000 |
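Editor's note on the record above: its golden diff extends `Datasets.random_benchmark()` with an optional size-weighted choice of dataset. The snippet below is a minimal standalone sketch of that weighting step only; the dataset URIs and sizes are invented placeholders, not CompilerGym data.

```python
# Illustrative sketch of the weighted dataset choice described in the diff above.
# The dataset names and sizes here are made up for demonstration.
import numpy as np

dataset_sizes = {
    "benchmark://small-v0": 10,     # few benchmarks -> low weight
    "benchmark://large-v0": 555,    # many benchmarks -> high weight
}

rng = np.random.default_rng(0)
names = list(dataset_sizes)
weights = np.array([dataset_sizes[n] for n in names], dtype=float)

# Equivalent to the weighted=True branch: p is the normalized size vector.
chosen = rng.choice(names, p=weights / weights.sum())
print(chosen)
```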
gh_patches_debug_16378 | rasdani/github-patches | git_diff | freedomofpress__securedrop-379 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Display number of docs and messages per source in source list
> At the moment each source in the list displays: source codename, last updated. It would be helpful to also see: total # of messages/docs.
Extracted from #322
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/db.py`
Content:
```
1 import os
2 import datetime
3
4 from sqlalchemy import create_engine, ForeignKey
5 from sqlalchemy.orm import scoped_session, sessionmaker, relationship, backref
6 from sqlalchemy.ext.declarative import declarative_base
7 from sqlalchemy import Column, Integer, String, Boolean, DateTime
8 from sqlalchemy.orm.exc import NoResultFound
9
10 import config
11 import crypto_util
12 import store
13
14 # http://flask.pocoo.org/docs/patterns/sqlalchemy/
15
16 if config.DATABASE_ENGINE == "sqlite":
17 engine = create_engine(
18 config.DATABASE_ENGINE + ":///" +
19 config.DATABASE_FILE
20 )
21 else:
22 engine = create_engine(
23 config.DATABASE_ENGINE + '://' +
24 config.DATABASE_USERNAME + ':' +
25 config.DATABASE_PASSWORD + '@' +
26 config.DATABASE_HOST + '/' +
27 config.DATABASE_NAME, echo=False
28 )
29
30 db_session = scoped_session(sessionmaker(autocommit=False,
31 autoflush=False,
32 bind=engine))
33 Base = declarative_base()
34 Base.query = db_session.query_property()
35
36
37 class Source(Base):
38 __tablename__ = 'sources'
39 id = Column(Integer, primary_key=True)
40 filesystem_id = Column(String(96), unique=True)
41 journalist_designation = Column(String(255), nullable=False)
42 flagged = Column(Boolean, default=False)
43 last_updated = Column(DateTime, default=datetime.datetime.now)
44
45 # sources are "pending" and don't get displayed to journalists until they submit something
46 pending = Column(Boolean, default=True)
47
48 # keep track of how many interactions have happened, for filenames
49 interaction_count = Column(Integer, default=0, nullable=False)
50
51 def __init__(self, filesystem_id=None, journalist_designation=None):
52 self.filesystem_id = filesystem_id
53 self.journalist_designation = journalist_designation
54
55 def __repr__(self):
56 return '<Source %r>' % (self.journalist_designation)
57
58 def journalist_filename(self):
59 valid_chars = 'abcdefghijklmnopqrstuvwxyz1234567890-_'
60 return ''.join([c for c in self.journalist_designation.lower().replace(' ', '_') if c in valid_chars])
61
62 class Submission(Base):
63 __tablename__ = 'submissions'
64 id = Column(Integer, primary_key=True)
65 source_id = Column(Integer, ForeignKey('sources.id'))
66 source = relationship("Source", backref=backref('submissions', order_by=id))
67 filename = Column(String(255), nullable=False)
68 size = Column(Integer, nullable=False)
69
70 def __init__(self, source, filename):
71 self.source_id = source.id
72 self.filename = filename
73 self.size = os.stat(store.path(source.filesystem_id, filename)).st_size
74
75 def __repr__(self):
76 return '<Submission %r>' % (self.filename)
77
78
79 # Declare (or import) models before init_db
80 def init_db():
81 Base.metadata.create_all(bind=engine)
82
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/db.py b/securedrop/db.py
--- a/securedrop/db.py
+++ b/securedrop/db.py
@@ -59,6 +59,19 @@
valid_chars = 'abcdefghijklmnopqrstuvwxyz1234567890-_'
return ''.join([c for c in self.journalist_designation.lower().replace(' ', '_') if c in valid_chars])
+ def documents_messages_count(self):
+ try:
+ return self.docs_msgs_count
+ except AttributeError:
+ self.docs_msgs_count = {'messages': 0, 'documents': 0}
+ for submission in self.submissions:
+ if submission.filename.endswith('msg.gpg'):
+ self.docs_msgs_count['messages'] += 1
+ elif submission.filename.endswith('doc.zip.gpg'):
+ self.docs_msgs_count['documents'] += 1
+ return self.docs_msgs_count
+
+
class Submission(Base):
__tablename__ = 'submissions'
id = Column(Integer, primary_key=True)
| {"golden_diff": "diff --git a/securedrop/db.py b/securedrop/db.py\n--- a/securedrop/db.py\n+++ b/securedrop/db.py\n@@ -59,6 +59,19 @@\n valid_chars = 'abcdefghijklmnopqrstuvwxyz1234567890-_'\n return ''.join([c for c in self.journalist_designation.lower().replace(' ', '_') if c in valid_chars])\n \n+ def documents_messages_count(self):\n+ try:\n+ return self.docs_msgs_count\n+ except AttributeError:\n+ self.docs_msgs_count = {'messages': 0, 'documents': 0}\n+ for submission in self.submissions:\n+ if submission.filename.endswith('msg.gpg'):\n+ self.docs_msgs_count['messages'] += 1\n+ elif submission.filename.endswith('doc.zip.gpg'):\n+ self.docs_msgs_count['documents'] += 1\n+ return self.docs_msgs_count\n+\n+\n class Submission(Base):\n __tablename__ = 'submissions'\n id = Column(Integer, primary_key=True)\n", "issue": "Display number of docs and messages per source in source list\n> At the moment each source in the list displays: source codename, last updated. It would be helpful to also see: total # of messages/docs.\n\nExtracted from #322\n\n", "before_files": [{"content": "import os\nimport datetime\n\nfrom sqlalchemy import create_engine, ForeignKey\nfrom sqlalchemy.orm import scoped_session, sessionmaker, relationship, backref\nfrom sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy import Column, Integer, String, Boolean, DateTime\nfrom sqlalchemy.orm.exc import NoResultFound\n\nimport config\nimport crypto_util\nimport store\n\n# http://flask.pocoo.org/docs/patterns/sqlalchemy/\n\nif config.DATABASE_ENGINE == \"sqlite\":\n engine = create_engine(\n config.DATABASE_ENGINE + \":///\" +\n config.DATABASE_FILE\n )\nelse:\n engine = create_engine(\n config.DATABASE_ENGINE + '://' +\n config.DATABASE_USERNAME + ':' +\n config.DATABASE_PASSWORD + '@' +\n config.DATABASE_HOST + '/' +\n config.DATABASE_NAME, echo=False\n )\n\ndb_session = scoped_session(sessionmaker(autocommit=False,\n autoflush=False,\n bind=engine))\nBase = declarative_base()\nBase.query = db_session.query_property()\n\n\nclass Source(Base):\n __tablename__ = 'sources'\n id = Column(Integer, primary_key=True)\n filesystem_id = Column(String(96), unique=True)\n journalist_designation = Column(String(255), nullable=False)\n flagged = Column(Boolean, default=False)\n last_updated = Column(DateTime, default=datetime.datetime.now)\n \n # sources are \"pending\" and don't get displayed to journalists until they submit something\n pending = Column(Boolean, default=True)\n\n # keep track of how many interactions have happened, for filenames\n interaction_count = Column(Integer, default=0, nullable=False)\n\n def __init__(self, filesystem_id=None, journalist_designation=None):\n self.filesystem_id = filesystem_id\n self.journalist_designation = journalist_designation\n\n def __repr__(self):\n return '<Source %r>' % (self.journalist_designation)\n\n def journalist_filename(self):\n valid_chars = 'abcdefghijklmnopqrstuvwxyz1234567890-_'\n return ''.join([c for c in self.journalist_designation.lower().replace(' ', '_') if c in valid_chars])\n\nclass Submission(Base):\n __tablename__ = 'submissions'\n id = Column(Integer, primary_key=True)\n source_id = Column(Integer, ForeignKey('sources.id'))\n source = relationship(\"Source\", backref=backref('submissions', order_by=id))\n filename = Column(String(255), nullable=False)\n size = Column(Integer, nullable=False)\n\n def __init__(self, source, filename):\n self.source_id = source.id\n self.filename = filename\n self.size = os.stat(store.path(source.filesystem_id, 
filename)).st_size\n\n def __repr__(self):\n return '<Submission %r>' % (self.filename)\n\n\n# Declare (or import) models before init_db\ndef init_db():\n Base.metadata.create_all(bind=engine)\n\n", "path": "securedrop/db.py"}], "after_files": [{"content": "import os\nimport datetime\n\nfrom sqlalchemy import create_engine, ForeignKey\nfrom sqlalchemy.orm import scoped_session, sessionmaker, relationship, backref\nfrom sqlalchemy.ext.declarative import declarative_base\nfrom sqlalchemy import Column, Integer, String, Boolean, DateTime\nfrom sqlalchemy.orm.exc import NoResultFound\n\nimport config\nimport crypto_util\nimport store\n\n# http://flask.pocoo.org/docs/patterns/sqlalchemy/\n\nif config.DATABASE_ENGINE == \"sqlite\":\n engine = create_engine(\n config.DATABASE_ENGINE + \":///\" +\n config.DATABASE_FILE\n )\nelse:\n engine = create_engine(\n config.DATABASE_ENGINE + '://' +\n config.DATABASE_USERNAME + ':' +\n config.DATABASE_PASSWORD + '@' +\n config.DATABASE_HOST + '/' +\n config.DATABASE_NAME, echo=False\n )\n\ndb_session = scoped_session(sessionmaker(autocommit=False,\n autoflush=False,\n bind=engine))\nBase = declarative_base()\nBase.query = db_session.query_property()\n\n\nclass Source(Base):\n __tablename__ = 'sources'\n id = Column(Integer, primary_key=True)\n filesystem_id = Column(String(96), unique=True)\n journalist_designation = Column(String(255), nullable=False)\n flagged = Column(Boolean, default=False)\n last_updated = Column(DateTime, default=datetime.datetime.now)\n \n # sources are \"pending\" and don't get displayed to journalists until they submit something\n pending = Column(Boolean, default=True)\n\n # keep track of how many interactions have happened, for filenames\n interaction_count = Column(Integer, default=0, nullable=False)\n\n def __init__(self, filesystem_id=None, journalist_designation=None):\n self.filesystem_id = filesystem_id\n self.journalist_designation = journalist_designation\n\n def __repr__(self):\n return '<Source %r>' % (self.journalist_designation)\n\n def journalist_filename(self):\n valid_chars = 'abcdefghijklmnopqrstuvwxyz1234567890-_'\n return ''.join([c for c in self.journalist_designation.lower().replace(' ', '_') if c in valid_chars])\n\n def documents_messages_count(self):\n try:\n return self.docs_msgs_count\n except AttributeError:\n self.docs_msgs_count = {'messages': 0, 'documents': 0}\n for submission in self.submissions:\n if submission.filename.endswith('msg.gpg'):\n self.docs_msgs_count['messages'] += 1\n elif submission.filename.endswith('doc.zip.gpg'):\n self.docs_msgs_count['documents'] += 1\n return self.docs_msgs_count\n\n\nclass Submission(Base):\n __tablename__ = 'submissions'\n id = Column(Integer, primary_key=True)\n source_id = Column(Integer, ForeignKey('sources.id'))\n source = relationship(\"Source\", backref=backref('submissions', order_by=id))\n filename = Column(String(255), nullable=False)\n size = Column(Integer, nullable=False)\n\n def __init__(self, source, filename):\n self.source_id = source.id\n self.filename = filename\n self.size = os.stat(store.path(source.filesystem_id, filename)).st_size\n\n def __repr__(self):\n return '<Submission %r>' % (self.filename)\n\n\n# Declare (or import) models before init_db\ndef init_db():\n Base.metadata.create_all(bind=engine)\n\n", "path": "securedrop/db.py"}]} | 1,097 | 227 |
gh_patches_debug_19689 | rasdani/github-patches | git_diff | biolab__orange3-4044 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PCA: Apply automatically does not work when changing the threshold by moving a bar in the scree diagram
**Describe the bug**
When Apply automatically is on in the PCA widget, moving the vertical threshold bar with a mouse in the scree plot should update the output data. It does not.
**To Reproduce**
1. File (say, Iris) -> PCA -> Data Table
2. Make sure Apply Automatically is clicked
3. Now move the bar
4. No change in the Data Table
**Orange version:**
3.23.0
**Expected behavior**
In the PCA widget, changing the number of components by moving the "threshold bar" in the scree diagram with a mouse should apply this setting and output a new data set when "Apply automatically" is set on.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Orange/widgets/unsupervised/owpca.py`
Content:
```
1 import numbers
2
3 import numpy
4 from AnyQt.QtWidgets import QFormLayout
5 from AnyQt.QtCore import Qt
6
7 from Orange.data import Table, Domain, StringVariable, ContinuousVariable
8 from Orange.data.sql.table import SqlTable, AUTO_DL_LIMIT
9 from Orange.preprocess import preprocess
10 from Orange.projection import PCA
11 from Orange.widgets import widget, gui, settings
12 from Orange.widgets.utils.slidergraph import SliderGraph
13 from Orange.widgets.utils.widgetpreview import WidgetPreview
14 from Orange.widgets.widget import Input, Output
15
16
17 # Maximum number of PCA components that we can set in the widget
18 MAX_COMPONENTS = 100
19
20
21 class OWPCA(widget.OWWidget):
22 name = "PCA"
23 description = "Principal component analysis with a scree-diagram."
24 icon = "icons/PCA.svg"
25 priority = 3050
26 keywords = ["principal component analysis", "linear transformation"]
27
28 class Inputs:
29 data = Input("Data", Table)
30
31 class Outputs:
32 transformed_data = Output("Transformed Data", Table, replaces=["Transformed data"])
33 components = Output("Components", Table)
34 pca = Output("PCA", PCA, dynamic=False)
35
36 settingsHandler = settings.DomainContextHandler()
37
38 ncomponents = settings.Setting(2)
39 variance_covered = settings.Setting(100)
40 auto_commit = settings.Setting(True)
41 normalize = settings.ContextSetting(True)
42 maxp = settings.Setting(20)
43 axis_labels = settings.Setting(10)
44
45 graph_name = "plot.plotItem"
46
47 class Warning(widget.OWWidget.Warning):
48 trivial_components = widget.Msg(
49 "All components of the PCA are trivial (explain 0 variance). "
50 "Input data is constant (or near constant).")
51
52 class Error(widget.OWWidget.Error):
53 no_features = widget.Msg("At least 1 feature is required")
54 no_instances = widget.Msg("At least 1 data instance is required")
55
56 def __init__(self):
57 super().__init__()
58 self.data = None
59
60 self._pca = None
61 self._transformed = None
62 self._variance_ratio = None
63 self._cumulative = None
64 self._init_projector()
65
66 # Components Selection
67 box = gui.vBox(self.controlArea, "Components Selection")
68 form = QFormLayout()
69 box.layout().addLayout(form)
70
71 self.components_spin = gui.spin(
72 box, self, "ncomponents", 1, MAX_COMPONENTS,
73 callback=self._update_selection_component_spin,
74 keyboardTracking=False
75 )
76 self.components_spin.setSpecialValueText("All")
77
78 self.variance_spin = gui.spin(
79 box, self, "variance_covered", 1, 100,
80 callback=self._update_selection_variance_spin,
81 keyboardTracking=False
82 )
83 self.variance_spin.setSuffix("%")
84
85 form.addRow("Components:", self.components_spin)
86 form.addRow("Variance covered:", self.variance_spin)
87
88 # Options
89 self.options_box = gui.vBox(self.controlArea, "Options")
90 self.normalize_box = gui.checkBox(
91 self.options_box, self, "normalize",
92 "Normalize data", callback=self._update_normalize
93 )
94
95 self.maxp_spin = gui.spin(
96 self.options_box, self, "maxp", 1, MAX_COMPONENTS,
97 label="Show only first", callback=self._setup_plot,
98 keyboardTracking=False
99 )
100
101 self.controlArea.layout().addStretch()
102
103 gui.auto_apply(self.controlArea, self, "auto_commit")
104
105 self.plot = SliderGraph(
106 "Principal Components", "Proportion of variance",
107 self._on_cut_changed)
108
109 self.mainArea.layout().addWidget(self.plot)
110 self._update_normalize()
111
112 @Inputs.data
113 def set_data(self, data):
114 self.closeContext()
115 self.clear_messages()
116 self.clear()
117 self.information()
118 self.data = None
119 if isinstance(data, SqlTable):
120 if data.approx_len() < AUTO_DL_LIMIT:
121 data = Table(data)
122 else:
123 self.information("Data has been sampled")
124 data_sample = data.sample_time(1, no_cache=True)
125 data_sample.download_data(2000, partial=True)
126 data = Table(data_sample)
127 if isinstance(data, Table):
128 if not data.domain.attributes:
129 self.Error.no_features()
130 self.clear_outputs()
131 return
132 if not data:
133 self.Error.no_instances()
134 self.clear_outputs()
135 return
136
137 self.openContext(data)
138 self._init_projector()
139
140 self.data = data
141 self.fit()
142
143 def fit(self):
144 self.clear()
145 self.Warning.trivial_components.clear()
146 if self.data is None:
147 return
148
149 data = self.data
150
151 if self.normalize:
152 self._pca_projector.preprocessors = \
153 self._pca_preprocessors + [preprocess.Normalize(center=False)]
154 else:
155 self._pca_projector.preprocessors = self._pca_preprocessors
156
157 if not isinstance(data, SqlTable):
158 pca = self._pca_projector(data)
159 variance_ratio = pca.explained_variance_ratio_
160 cumulative = numpy.cumsum(variance_ratio)
161
162 if numpy.isfinite(cumulative[-1]):
163 self.components_spin.setRange(0, len(cumulative))
164 self._pca = pca
165 self._variance_ratio = variance_ratio
166 self._cumulative = cumulative
167 self._setup_plot()
168 else:
169 self.Warning.trivial_components()
170
171 self.unconditional_commit()
172
173 def clear(self):
174 self._pca = None
175 self._transformed = None
176 self._variance_ratio = None
177 self._cumulative = None
178 self.plot.clear_plot()
179
180 def clear_outputs(self):
181 self.Outputs.transformed_data.send(None)
182 self.Outputs.components.send(None)
183 self.Outputs.pca.send(self._pca_projector)
184
185 def _setup_plot(self):
186 if self._pca is None:
187 self.plot.clear_plot()
188 return
189
190 explained_ratio = self._variance_ratio
191 explained = self._cumulative
192 cutpos = self._nselected_components()
193 p = min(len(self._variance_ratio), self.maxp)
194
195 self.plot.update(
196 numpy.arange(1, p+1), [explained_ratio[:p], explained[:p]],
197 [Qt.red, Qt.darkYellow], cutpoint_x=cutpos)
198
199 self._update_axis()
200
201 def _on_cut_changed(self, components):
202
203 if not (self.ncomponents == 0 and
204 components == len(self._variance_ratio)):
205 self.ncomponents = components
206
207 if self._pca is not None:
208 var = self._cumulative[components - 1]
209 if numpy.isfinite(var):
210 self.variance_covered = int(var * 100)
211
212 if components != self._nselected_components():
213 self._invalidate_selection()
214
215 def _update_selection_component_spin(self):
216 # cut changed by "ncomponents" spin.
217 if self._pca is None:
218 self._invalidate_selection()
219 return
220
221 if self.ncomponents == 0:
222 # Special "All" value
223 cut = len(self._variance_ratio)
224 else:
225 cut = self.ncomponents
226
227 var = self._cumulative[cut - 1]
228 if numpy.isfinite(var):
229 self.variance_covered = int(var * 100)
230
231 self.plot.set_cut_point(cut)
232 self._invalidate_selection()
233
234 def _update_selection_variance_spin(self):
235 # cut changed by "max variance" spin.
236 if self._pca is None:
237 return
238
239 cut = numpy.searchsorted(self._cumulative,
240 self.variance_covered / 100.0) + 1
241 cut = min(cut, len(self._cumulative))
242 self.ncomponents = cut
243 self.plot.set_cut_point(cut)
244 self._invalidate_selection()
245
246 def _update_normalize(self):
247 self.fit()
248 if self.data is None:
249 self._invalidate_selection()
250
251 def _init_projector(self):
252 self._pca_projector = PCA(n_components=MAX_COMPONENTS, random_state=0)
253 self._pca_projector.component = self.ncomponents
254 self._pca_preprocessors = PCA.preprocessors
255
256 def _nselected_components(self):
257 """Return the number of selected components."""
258 if self._pca is None:
259 return 0
260
261 if self.ncomponents == 0:
262 # Special "All" value
263 max_comp = len(self._variance_ratio)
264 else:
265 max_comp = self.ncomponents
266
267 var_max = self._cumulative[max_comp - 1]
268 if var_max != numpy.floor(self.variance_covered / 100.0):
269 cut = max_comp
270 assert numpy.isfinite(var_max)
271 self.variance_covered = int(var_max * 100)
272 else:
273 self.ncomponents = cut = numpy.searchsorted(
274 self._cumulative, self.variance_covered / 100.0) + 1
275 return cut
276
277 def _invalidate_selection(self):
278 self.commit()
279
280 def _update_axis(self):
281 p = min(len(self._variance_ratio), self.maxp)
282 axis = self.plot.getAxis("bottom")
283 d = max((p-1)//(self.axis_labels-1), 1)
284 axis.setTicks([[(i, str(i)) for i in range(1, p + 1, d)]])
285
286 def commit(self):
287 transformed = components = None
288 if self._pca is not None:
289 if self._transformed is None:
290 # Compute the full transform (MAX_COMPONENTS components) once.
291 self._transformed = self._pca(self.data)
292 transformed = self._transformed
293
294 domain = Domain(
295 transformed.domain.attributes[:self.ncomponents],
296 self.data.domain.class_vars,
297 self.data.domain.metas
298 )
299 transformed = transformed.from_table(domain, transformed)
300 # prevent caching new features by defining compute_value
301 dom = Domain(
302 [ContinuousVariable(a.name, compute_value=lambda _: None)
303 for a in self._pca.orig_domain.attributes],
304 metas=[StringVariable(name='component')])
305 metas = numpy.array([['PC{}'.format(i + 1)
306 for i in range(self.ncomponents)]],
307 dtype=object).T
308 components = Table(dom, self._pca.components_[:self.ncomponents],
309 metas=metas)
310 components.name = 'components'
311
312 self._pca_projector.component = self.ncomponents
313 self.Outputs.transformed_data.send(transformed)
314 self.Outputs.components.send(components)
315 self.Outputs.pca.send(self._pca_projector)
316
317 def send_report(self):
318 if self.data is None:
319 return
320 self.report_items((
321 ("Normalize data", str(self.normalize)),
322 ("Selected components", self.ncomponents),
323 ("Explained variance", "{:.3f} %".format(self.variance_covered))
324 ))
325 self.report_plot()
326
327 @classmethod
328 def migrate_settings(cls, settings, version):
329 if "variance_covered" in settings:
330 # Due to the error in gh-1896 the variance_covered was persisted
331 # as a NaN value, causing a TypeError in the widgets `__init__`.
332 vc = settings["variance_covered"]
333 if isinstance(vc, numbers.Real):
334 if numpy.isfinite(vc):
335 vc = int(vc)
336 else:
337 vc = 100
338 settings["variance_covered"] = vc
339 if settings.get("ncomponents", 0) > MAX_COMPONENTS:
340 settings["ncomponents"] = MAX_COMPONENTS
341
342 # Remove old `decomposition_idx` when SVD was still included
343 settings.pop("decomposition_idx", None)
344
345 # Remove RemotePCA settings
346 settings.pop("batch_size", None)
347 settings.pop("address", None)
348 settings.pop("auto_update", None)
349
350
351 if __name__ == "__main__": # pragma: no cover
352 WidgetPreview(OWPCA).run(Table("housing"))
353
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Orange/widgets/unsupervised/owpca.py b/Orange/widgets/unsupervised/owpca.py
--- a/Orange/widgets/unsupervised/owpca.py
+++ b/Orange/widgets/unsupervised/owpca.py
@@ -199,18 +199,19 @@
self._update_axis()
def _on_cut_changed(self, components):
+ if components == self.ncomponents \
+ or self.ncomponents == 0 \
+ or self._pca is not None \
+ and components == len(self._variance_ratio):
+ return
- if not (self.ncomponents == 0 and
- components == len(self._variance_ratio)):
- self.ncomponents = components
-
+ self.ncomponents = components
if self._pca is not None:
var = self._cumulative[components - 1]
if numpy.isfinite(var):
self.variance_covered = int(var * 100)
- if components != self._nselected_components():
- self._invalidate_selection()
+ self._invalidate_selection()
def _update_selection_component_spin(self):
# cut changed by "ncomponents" spin.
| {"golden_diff": "diff --git a/Orange/widgets/unsupervised/owpca.py b/Orange/widgets/unsupervised/owpca.py\n--- a/Orange/widgets/unsupervised/owpca.py\n+++ b/Orange/widgets/unsupervised/owpca.py\n@@ -199,18 +199,19 @@\n self._update_axis()\n \n def _on_cut_changed(self, components):\n+ if components == self.ncomponents \\\n+ or self.ncomponents == 0 \\\n+ or self._pca is not None \\\n+ and components == len(self._variance_ratio):\n+ return\n \n- if not (self.ncomponents == 0 and\n- components == len(self._variance_ratio)):\n- self.ncomponents = components\n-\n+ self.ncomponents = components\n if self._pca is not None:\n var = self._cumulative[components - 1]\n if numpy.isfinite(var):\n self.variance_covered = int(var * 100)\n \n- if components != self._nselected_components():\n- self._invalidate_selection()\n+ self._invalidate_selection()\n \n def _update_selection_component_spin(self):\n # cut changed by \"ncomponents\" spin.\n", "issue": "PCA: Apply automatically does not work when changing the threshold by moving a bar in the scree diagram\n**Describe the bug**\r\nWhen Apply automatically is on in the PCA widget, moving the vertical threshold bar with a mouse in the scree plot should update the output data. It does not.\r\n\r\n**To Reproduce**\r\nFile (say, Iris) -> PCA -> Data Table\r\nmake sure Apply Automatically is clicked\r\nnow move the bar\r\nno change in the Data Table\r\n\r\n**Orange version:**\r\n3.23.0\r\n\r\n**Expected behavior**\r\nIn the PCA widget, changing the number of components by moving the \"threshold bar\" in the scree diagram with a mouse should apply this setting and output a new data set when \"Apply automatically\" is set on.\r\n\n", "before_files": [{"content": "import numbers\n\nimport numpy\nfrom AnyQt.QtWidgets import QFormLayout\nfrom AnyQt.QtCore import Qt\n\nfrom Orange.data import Table, Domain, StringVariable, ContinuousVariable\nfrom Orange.data.sql.table import SqlTable, AUTO_DL_LIMIT\nfrom Orange.preprocess import preprocess\nfrom Orange.projection import PCA\nfrom Orange.widgets import widget, gui, settings\nfrom Orange.widgets.utils.slidergraph import SliderGraph\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\nfrom Orange.widgets.widget import Input, Output\n\n\n# Maximum number of PCA components that we can set in the widget\nMAX_COMPONENTS = 100\n\n\nclass OWPCA(widget.OWWidget):\n name = \"PCA\"\n description = \"Principal component analysis with a scree-diagram.\"\n icon = \"icons/PCA.svg\"\n priority = 3050\n keywords = [\"principal component analysis\", \"linear transformation\"]\n\n class Inputs:\n data = Input(\"Data\", Table)\n\n class Outputs:\n transformed_data = Output(\"Transformed Data\", Table, replaces=[\"Transformed data\"])\n components = Output(\"Components\", Table)\n pca = Output(\"PCA\", PCA, dynamic=False)\n\n settingsHandler = settings.DomainContextHandler()\n\n ncomponents = settings.Setting(2)\n variance_covered = settings.Setting(100)\n auto_commit = settings.Setting(True)\n normalize = settings.ContextSetting(True)\n maxp = settings.Setting(20)\n axis_labels = settings.Setting(10)\n\n graph_name = \"plot.plotItem\"\n\n class Warning(widget.OWWidget.Warning):\n trivial_components = widget.Msg(\n \"All components of the PCA are trivial (explain 0 variance). 
\"\n \"Input data is constant (or near constant).\")\n\n class Error(widget.OWWidget.Error):\n no_features = widget.Msg(\"At least 1 feature is required\")\n no_instances = widget.Msg(\"At least 1 data instance is required\")\n\n def __init__(self):\n super().__init__()\n self.data = None\n\n self._pca = None\n self._transformed = None\n self._variance_ratio = None\n self._cumulative = None\n self._init_projector()\n\n # Components Selection\n box = gui.vBox(self.controlArea, \"Components Selection\")\n form = QFormLayout()\n box.layout().addLayout(form)\n\n self.components_spin = gui.spin(\n box, self, \"ncomponents\", 1, MAX_COMPONENTS,\n callback=self._update_selection_component_spin,\n keyboardTracking=False\n )\n self.components_spin.setSpecialValueText(\"All\")\n\n self.variance_spin = gui.spin(\n box, self, \"variance_covered\", 1, 100,\n callback=self._update_selection_variance_spin,\n keyboardTracking=False\n )\n self.variance_spin.setSuffix(\"%\")\n\n form.addRow(\"Components:\", self.components_spin)\n form.addRow(\"Variance covered:\", self.variance_spin)\n\n # Options\n self.options_box = gui.vBox(self.controlArea, \"Options\")\n self.normalize_box = gui.checkBox(\n self.options_box, self, \"normalize\",\n \"Normalize data\", callback=self._update_normalize\n )\n\n self.maxp_spin = gui.spin(\n self.options_box, self, \"maxp\", 1, MAX_COMPONENTS,\n label=\"Show only first\", callback=self._setup_plot,\n keyboardTracking=False\n )\n\n self.controlArea.layout().addStretch()\n\n gui.auto_apply(self.controlArea, self, \"auto_commit\")\n\n self.plot = SliderGraph(\n \"Principal Components\", \"Proportion of variance\",\n self._on_cut_changed)\n\n self.mainArea.layout().addWidget(self.plot)\n self._update_normalize()\n\n @Inputs.data\n def set_data(self, data):\n self.closeContext()\n self.clear_messages()\n self.clear()\n self.information()\n self.data = None\n if isinstance(data, SqlTable):\n if data.approx_len() < AUTO_DL_LIMIT:\n data = Table(data)\n else:\n self.information(\"Data has been sampled\")\n data_sample = data.sample_time(1, no_cache=True)\n data_sample.download_data(2000, partial=True)\n data = Table(data_sample)\n if isinstance(data, Table):\n if not data.domain.attributes:\n self.Error.no_features()\n self.clear_outputs()\n return\n if not data:\n self.Error.no_instances()\n self.clear_outputs()\n return\n\n self.openContext(data)\n self._init_projector()\n\n self.data = data\n self.fit()\n\n def fit(self):\n self.clear()\n self.Warning.trivial_components.clear()\n if self.data is None:\n return\n\n data = self.data\n\n if self.normalize:\n self._pca_projector.preprocessors = \\\n self._pca_preprocessors + [preprocess.Normalize(center=False)]\n else:\n self._pca_projector.preprocessors = self._pca_preprocessors\n\n if not isinstance(data, SqlTable):\n pca = self._pca_projector(data)\n variance_ratio = pca.explained_variance_ratio_\n cumulative = numpy.cumsum(variance_ratio)\n\n if numpy.isfinite(cumulative[-1]):\n self.components_spin.setRange(0, len(cumulative))\n self._pca = pca\n self._variance_ratio = variance_ratio\n self._cumulative = cumulative\n self._setup_plot()\n else:\n self.Warning.trivial_components()\n\n self.unconditional_commit()\n\n def clear(self):\n self._pca = None\n self._transformed = None\n self._variance_ratio = None\n self._cumulative = None\n self.plot.clear_plot()\n\n def clear_outputs(self):\n self.Outputs.transformed_data.send(None)\n self.Outputs.components.send(None)\n self.Outputs.pca.send(self._pca_projector)\n\n def 
_setup_plot(self):\n if self._pca is None:\n self.plot.clear_plot()\n return\n\n explained_ratio = self._variance_ratio\n explained = self._cumulative\n cutpos = self._nselected_components()\n p = min(len(self._variance_ratio), self.maxp)\n\n self.plot.update(\n numpy.arange(1, p+1), [explained_ratio[:p], explained[:p]],\n [Qt.red, Qt.darkYellow], cutpoint_x=cutpos)\n\n self._update_axis()\n\n def _on_cut_changed(self, components):\n\n if not (self.ncomponents == 0 and\n components == len(self._variance_ratio)):\n self.ncomponents = components\n\n if self._pca is not None:\n var = self._cumulative[components - 1]\n if numpy.isfinite(var):\n self.variance_covered = int(var * 100)\n\n if components != self._nselected_components():\n self._invalidate_selection()\n\n def _update_selection_component_spin(self):\n # cut changed by \"ncomponents\" spin.\n if self._pca is None:\n self._invalidate_selection()\n return\n\n if self.ncomponents == 0:\n # Special \"All\" value\n cut = len(self._variance_ratio)\n else:\n cut = self.ncomponents\n\n var = self._cumulative[cut - 1]\n if numpy.isfinite(var):\n self.variance_covered = int(var * 100)\n\n self.plot.set_cut_point(cut)\n self._invalidate_selection()\n\n def _update_selection_variance_spin(self):\n # cut changed by \"max variance\" spin.\n if self._pca is None:\n return\n\n cut = numpy.searchsorted(self._cumulative,\n self.variance_covered / 100.0) + 1\n cut = min(cut, len(self._cumulative))\n self.ncomponents = cut\n self.plot.set_cut_point(cut)\n self._invalidate_selection()\n\n def _update_normalize(self):\n self.fit()\n if self.data is None:\n self._invalidate_selection()\n\n def _init_projector(self):\n self._pca_projector = PCA(n_components=MAX_COMPONENTS, random_state=0)\n self._pca_projector.component = self.ncomponents\n self._pca_preprocessors = PCA.preprocessors\n\n def _nselected_components(self):\n \"\"\"Return the number of selected components.\"\"\"\n if self._pca is None:\n return 0\n\n if self.ncomponents == 0:\n # Special \"All\" value\n max_comp = len(self._variance_ratio)\n else:\n max_comp = self.ncomponents\n\n var_max = self._cumulative[max_comp - 1]\n if var_max != numpy.floor(self.variance_covered / 100.0):\n cut = max_comp\n assert numpy.isfinite(var_max)\n self.variance_covered = int(var_max * 100)\n else:\n self.ncomponents = cut = numpy.searchsorted(\n self._cumulative, self.variance_covered / 100.0) + 1\n return cut\n\n def _invalidate_selection(self):\n self.commit()\n\n def _update_axis(self):\n p = min(len(self._variance_ratio), self.maxp)\n axis = self.plot.getAxis(\"bottom\")\n d = max((p-1)//(self.axis_labels-1), 1)\n axis.setTicks([[(i, str(i)) for i in range(1, p + 1, d)]])\n\n def commit(self):\n transformed = components = None\n if self._pca is not None:\n if self._transformed is None:\n # Compute the full transform (MAX_COMPONENTS components) once.\n self._transformed = self._pca(self.data)\n transformed = self._transformed\n\n domain = Domain(\n transformed.domain.attributes[:self.ncomponents],\n self.data.domain.class_vars,\n self.data.domain.metas\n )\n transformed = transformed.from_table(domain, transformed)\n # prevent caching new features by defining compute_value\n dom = Domain(\n [ContinuousVariable(a.name, compute_value=lambda _: None)\n for a in self._pca.orig_domain.attributes],\n metas=[StringVariable(name='component')])\n metas = numpy.array([['PC{}'.format(i + 1)\n for i in range(self.ncomponents)]],\n dtype=object).T\n components = Table(dom, self._pca.components_[:self.ncomponents],\n 
metas=metas)\n components.name = 'components'\n\n self._pca_projector.component = self.ncomponents\n self.Outputs.transformed_data.send(transformed)\n self.Outputs.components.send(components)\n self.Outputs.pca.send(self._pca_projector)\n\n def send_report(self):\n if self.data is None:\n return\n self.report_items((\n (\"Normalize data\", str(self.normalize)),\n (\"Selected components\", self.ncomponents),\n (\"Explained variance\", \"{:.3f} %\".format(self.variance_covered))\n ))\n self.report_plot()\n\n @classmethod\n def migrate_settings(cls, settings, version):\n if \"variance_covered\" in settings:\n # Due to the error in gh-1896 the variance_covered was persisted\n # as a NaN value, causing a TypeError in the widgets `__init__`.\n vc = settings[\"variance_covered\"]\n if isinstance(vc, numbers.Real):\n if numpy.isfinite(vc):\n vc = int(vc)\n else:\n vc = 100\n settings[\"variance_covered\"] = vc\n if settings.get(\"ncomponents\", 0) > MAX_COMPONENTS:\n settings[\"ncomponents\"] = MAX_COMPONENTS\n\n # Remove old `decomposition_idx` when SVD was still included\n settings.pop(\"decomposition_idx\", None)\n\n # Remove RemotePCA settings\n settings.pop(\"batch_size\", None)\n settings.pop(\"address\", None)\n settings.pop(\"auto_update\", None)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWPCA).run(Table(\"housing\"))\n", "path": "Orange/widgets/unsupervised/owpca.py"}], "after_files": [{"content": "import numbers\n\nimport numpy\nfrom AnyQt.QtWidgets import QFormLayout\nfrom AnyQt.QtCore import Qt\n\nfrom Orange.data import Table, Domain, StringVariable, ContinuousVariable\nfrom Orange.data.sql.table import SqlTable, AUTO_DL_LIMIT\nfrom Orange.preprocess import preprocess\nfrom Orange.projection import PCA\nfrom Orange.widgets import widget, gui, settings\nfrom Orange.widgets.utils.slidergraph import SliderGraph\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\nfrom Orange.widgets.widget import Input, Output\n\n\n# Maximum number of PCA components that we can set in the widget\nMAX_COMPONENTS = 100\n\n\nclass OWPCA(widget.OWWidget):\n name = \"PCA\"\n description = \"Principal component analysis with a scree-diagram.\"\n icon = \"icons/PCA.svg\"\n priority = 3050\n keywords = [\"principal component analysis\", \"linear transformation\"]\n\n class Inputs:\n data = Input(\"Data\", Table)\n\n class Outputs:\n transformed_data = Output(\"Transformed Data\", Table, replaces=[\"Transformed data\"])\n components = Output(\"Components\", Table)\n pca = Output(\"PCA\", PCA, dynamic=False)\n\n settingsHandler = settings.DomainContextHandler()\n\n ncomponents = settings.Setting(2)\n variance_covered = settings.Setting(100)\n auto_commit = settings.Setting(True)\n normalize = settings.ContextSetting(True)\n maxp = settings.Setting(20)\n axis_labels = settings.Setting(10)\n\n graph_name = \"plot.plotItem\"\n\n class Warning(widget.OWWidget.Warning):\n trivial_components = widget.Msg(\n \"All components of the PCA are trivial (explain 0 variance). 
\"\n \"Input data is constant (or near constant).\")\n\n class Error(widget.OWWidget.Error):\n no_features = widget.Msg(\"At least 1 feature is required\")\n no_instances = widget.Msg(\"At least 1 data instance is required\")\n\n def __init__(self):\n super().__init__()\n self.data = None\n\n self._pca = None\n self._transformed = None\n self._variance_ratio = None\n self._cumulative = None\n self._init_projector()\n\n # Components Selection\n box = gui.vBox(self.controlArea, \"Components Selection\")\n form = QFormLayout()\n box.layout().addLayout(form)\n\n self.components_spin = gui.spin(\n box, self, \"ncomponents\", 1, MAX_COMPONENTS,\n callback=self._update_selection_component_spin,\n keyboardTracking=False\n )\n self.components_spin.setSpecialValueText(\"All\")\n\n self.variance_spin = gui.spin(\n box, self, \"variance_covered\", 1, 100,\n callback=self._update_selection_variance_spin,\n keyboardTracking=False\n )\n self.variance_spin.setSuffix(\"%\")\n\n form.addRow(\"Components:\", self.components_spin)\n form.addRow(\"Variance covered:\", self.variance_spin)\n\n # Options\n self.options_box = gui.vBox(self.controlArea, \"Options\")\n self.normalize_box = gui.checkBox(\n self.options_box, self, \"normalize\",\n \"Normalize data\", callback=self._update_normalize\n )\n\n self.maxp_spin = gui.spin(\n self.options_box, self, \"maxp\", 1, MAX_COMPONENTS,\n label=\"Show only first\", callback=self._setup_plot,\n keyboardTracking=False\n )\n\n self.controlArea.layout().addStretch()\n\n gui.auto_apply(self.controlArea, self, \"auto_commit\")\n\n self.plot = SliderGraph(\n \"Principal Components\", \"Proportion of variance\",\n self._on_cut_changed)\n\n self.mainArea.layout().addWidget(self.plot)\n self._update_normalize()\n\n @Inputs.data\n def set_data(self, data):\n self.closeContext()\n self.clear_messages()\n self.clear()\n self.information()\n self.data = None\n if isinstance(data, SqlTable):\n if data.approx_len() < AUTO_DL_LIMIT:\n data = Table(data)\n else:\n self.information(\"Data has been sampled\")\n data_sample = data.sample_time(1, no_cache=True)\n data_sample.download_data(2000, partial=True)\n data = Table(data_sample)\n if isinstance(data, Table):\n if not data.domain.attributes:\n self.Error.no_features()\n self.clear_outputs()\n return\n if not data:\n self.Error.no_instances()\n self.clear_outputs()\n return\n\n self.openContext(data)\n self._init_projector()\n\n self.data = data\n self.fit()\n\n def fit(self):\n self.clear()\n self.Warning.trivial_components.clear()\n if self.data is None:\n return\n\n data = self.data\n\n if self.normalize:\n self._pca_projector.preprocessors = \\\n self._pca_preprocessors + [preprocess.Normalize(center=False)]\n else:\n self._pca_projector.preprocessors = self._pca_preprocessors\n\n if not isinstance(data, SqlTable):\n pca = self._pca_projector(data)\n variance_ratio = pca.explained_variance_ratio_\n cumulative = numpy.cumsum(variance_ratio)\n\n if numpy.isfinite(cumulative[-1]):\n self.components_spin.setRange(0, len(cumulative))\n self._pca = pca\n self._variance_ratio = variance_ratio\n self._cumulative = cumulative\n self._setup_plot()\n else:\n self.Warning.trivial_components()\n\n self.unconditional_commit()\n\n def clear(self):\n self._pca = None\n self._transformed = None\n self._variance_ratio = None\n self._cumulative = None\n self.plot.clear_plot()\n\n def clear_outputs(self):\n self.Outputs.transformed_data.send(None)\n self.Outputs.components.send(None)\n self.Outputs.pca.send(self._pca_projector)\n\n def 
_setup_plot(self):\n if self._pca is None:\n self.plot.clear_plot()\n return\n\n explained_ratio = self._variance_ratio\n explained = self._cumulative\n cutpos = self._nselected_components()\n p = min(len(self._variance_ratio), self.maxp)\n\n self.plot.update(\n numpy.arange(1, p+1), [explained_ratio[:p], explained[:p]],\n [Qt.red, Qt.darkYellow], cutpoint_x=cutpos)\n\n self._update_axis()\n\n def _on_cut_changed(self, components):\n if components == self.ncomponents \\\n or self.ncomponents == 0 \\\n or self._pca is not None \\\n and components == len(self._variance_ratio):\n return\n\n self.ncomponents = components\n if self._pca is not None:\n var = self._cumulative[components - 1]\n if numpy.isfinite(var):\n self.variance_covered = int(var * 100)\n\n self._invalidate_selection()\n\n def _update_selection_component_spin(self):\n # cut changed by \"ncomponents\" spin.\n if self._pca is None:\n self._invalidate_selection()\n return\n\n if self.ncomponents == 0:\n # Special \"All\" value\n cut = len(self._variance_ratio)\n else:\n cut = self.ncomponents\n\n var = self._cumulative[cut - 1]\n if numpy.isfinite(var):\n self.variance_covered = int(var * 100)\n\n self.plot.set_cut_point(cut)\n self._invalidate_selection()\n\n def _update_selection_variance_spin(self):\n # cut changed by \"max variance\" spin.\n if self._pca is None:\n return\n\n cut = numpy.searchsorted(self._cumulative,\n self.variance_covered / 100.0) + 1\n cut = min(cut, len(self._cumulative))\n self.ncomponents = cut\n self.plot.set_cut_point(cut)\n self._invalidate_selection()\n\n def _update_normalize(self):\n self.fit()\n if self.data is None:\n self._invalidate_selection()\n\n def _init_projector(self):\n self._pca_projector = PCA(n_components=MAX_COMPONENTS, random_state=0)\n self._pca_projector.component = self.ncomponents\n self._pca_preprocessors = PCA.preprocessors\n\n def _nselected_components(self):\n \"\"\"Return the number of selected components.\"\"\"\n if self._pca is None:\n return 0\n\n if self.ncomponents == 0:\n # Special \"All\" value\n max_comp = len(self._variance_ratio)\n else:\n max_comp = self.ncomponents\n\n var_max = self._cumulative[max_comp - 1]\n if var_max != numpy.floor(self.variance_covered / 100.0):\n cut = max_comp\n assert numpy.isfinite(var_max)\n self.variance_covered = int(var_max * 100)\n else:\n self.ncomponents = cut = numpy.searchsorted(\n self._cumulative, self.variance_covered / 100.0) + 1\n return cut\n\n def _invalidate_selection(self):\n self.commit()\n\n def _update_axis(self):\n p = min(len(self._variance_ratio), self.maxp)\n axis = self.plot.getAxis(\"bottom\")\n d = max((p-1)//(self.axis_labels-1), 1)\n axis.setTicks([[(i, str(i)) for i in range(1, p + 1, d)]])\n\n def commit(self):\n transformed = components = None\n if self._pca is not None:\n if self._transformed is None:\n # Compute the full transform (MAX_COMPONENTS components) once.\n self._transformed = self._pca(self.data)\n transformed = self._transformed\n\n domain = Domain(\n transformed.domain.attributes[:self.ncomponents],\n self.data.domain.class_vars,\n self.data.domain.metas\n )\n transformed = transformed.from_table(domain, transformed)\n # prevent caching new features by defining compute_value\n dom = Domain(\n [ContinuousVariable(a.name, compute_value=lambda _: None)\n for a in self._pca.orig_domain.attributes],\n metas=[StringVariable(name='component')])\n metas = numpy.array([['PC{}'.format(i + 1)\n for i in range(self.ncomponents)]],\n dtype=object).T\n components = Table(dom, 
self._pca.components_[:self.ncomponents],\n metas=metas)\n components.name = 'components'\n\n self._pca_projector.component = self.ncomponents\n self.Outputs.transformed_data.send(transformed)\n self.Outputs.components.send(components)\n self.Outputs.pca.send(self._pca_projector)\n\n def send_report(self):\n if self.data is None:\n return\n self.report_items((\n (\"Normalize data\", str(self.normalize)),\n (\"Selected components\", self.ncomponents),\n (\"Explained variance\", \"{:.3f} %\".format(self.variance_covered))\n ))\n self.report_plot()\n\n @classmethod\n def migrate_settings(cls, settings, version):\n if \"variance_covered\" in settings:\n # Due to the error in gh-1896 the variance_covered was persisted\n # as a NaN value, causing a TypeError in the widgets `__init__`.\n vc = settings[\"variance_covered\"]\n if isinstance(vc, numbers.Real):\n if numpy.isfinite(vc):\n vc = int(vc)\n else:\n vc = 100\n settings[\"variance_covered\"] = vc\n if settings.get(\"ncomponents\", 0) > MAX_COMPONENTS:\n settings[\"ncomponents\"] = MAX_COMPONENTS\n\n # Remove old `decomposition_idx` when SVD was still included\n settings.pop(\"decomposition_idx\", None)\n\n # Remove RemotePCA settings\n settings.pop(\"batch_size\", None)\n settings.pop(\"address\", None)\n settings.pop(\"auto_update\", None)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWPCA).run(Table(\"housing\"))\n", "path": "Orange/widgets/unsupervised/owpca.py"}]} | 4,025 | 272 |
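Editor's note on the record above: the fix rewrites the early-return guard in `_on_cut_changed` so that dragging the scree-plot cut line re-commits the output whenever the component count actually changes. The sketch below isolates that guard with a minimal stand-in widget; `FakeWidget` is an assumption for illustration and is not Orange code.

```python
# Minimal stand-in reproducing the corrected _on_cut_changed guard from the diff above.
import numpy


class FakeWidget:
    def __init__(self, ncomponents, variance_ratio):
        self.ncomponents = ncomponents
        self._pca = object()                      # pretend a PCA fit exists
        self._variance_ratio = numpy.asarray(variance_ratio)
        self._cumulative = numpy.cumsum(self._variance_ratio)
        self.variance_covered = 100
        self.committed = False

    def _invalidate_selection(self):
        self.committed = True                     # stands in for commit()

    def _on_cut_changed(self, components):
        if components == self.ncomponents \
                or self.ncomponents == 0 \
                or self._pca is not None \
                and components == len(self._variance_ratio):
            return
        self.ncomponents = components
        var = self._cumulative[components - 1]
        if numpy.isfinite(var):
            self.variance_covered = int(var * 100)
        self._invalidate_selection()


w = FakeWidget(ncomponents=1, variance_ratio=[0.7, 0.2, 0.1])
w._on_cut_changed(2)      # user drags the cut line from 1 to 2 components
print(w.committed)        # True -> the output is re-committed, which was the bug
```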
gh_patches_debug_34089 | rasdani/github-patches | git_diff | microsoft__nni-3416 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
wrong field name in ExperimentConfig, "accessor" should be "assessor"
**Environment**:
- NNI version: 2.0
- NNI mode (local|remote|pai): local
- Client OS: Ubuntu 20.10
- Server OS (for remote mode only):
- Python version: 3.8.5
- PyTorch/TensorFlow version: pytorch 1.9.0a+a62b0de
- Is conda/virtualenv/venv used?: conda 4.9.2
- Is running in Docker?: No
**Log message**:
- nnimanager.log: None
- dispatcher.log: None
- nnictl stdout and stderr:
```
WARNING: Validation with V1 schema failed. Trying to convert from V2 format...
ERROR: Conversion from v2 format failed: ValueError('ExperimentConfig: Unrecognized fields assessor')
ERROR: Config in v1 format validation failed. KeyError('trial')
```
**What issue did you meet, and what's expected?**:
In `ExperimentConfig`, the field `accessor` should be `assessor`.
**How to reproduce it?**:
A V2 config with 'assessor'
**Additional information**:
Fix by renaming all `accessor` in `nni.experiment.config.common.py` to `assessor`.
There are some `accessor` occurrences in the docs as well.
--- END ISSUE ---
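Editor's aside: the requested change is a one-field rename in a dataclass. A hypothetical sketch of the corrected field block is shown below; the class name is invented and the field types are simplified to `Any` (the real class, quoted further down, uses `_AlgorithmConfig`).

```python
# Hypothetical illustration of the corrected field block, not the actual NNI source.
from dataclasses import dataclass
from typing import Any, Optional


@dataclass(init=False)
class ExperimentConfigFieldsSketch:
    tuner: Optional[Any] = None
    assessor: Optional[Any] = None   # was misspelled `accessor`
    advisor: Optional[Any] = None
```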
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nni/experiment/config/common.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 # Licensed under the MIT license.
3
4 from dataclasses import dataclass
5 from pathlib import Path
6 from typing import Any, Dict, List, Optional, Union
7
8 from .base import ConfigBase, PathLike
9 from . import util
10
11 __all__ = [
12 'ExperimentConfig',
13 'AlgorithmConfig',
14 'CustomAlgorithmConfig',
15 'TrainingServiceConfig',
16 ]
17
18
19 @dataclass(init=False)
20 class _AlgorithmConfig(ConfigBase):
21 name: Optional[str] = None
22 class_name: Optional[str] = None
23 code_directory: Optional[PathLike] = None
24 class_args: Optional[Dict[str, Any]] = None
25
26 def validate(self):
27 super().validate()
28 _validate_algo(self)
29
30
31 @dataclass(init=False)
32 class AlgorithmConfig(_AlgorithmConfig):
33 name: str
34 class_args: Optional[Dict[str, Any]] = None
35
36
37 @dataclass(init=False)
38 class CustomAlgorithmConfig(_AlgorithmConfig):
39 class_name: str
40 class_directory: Optional[PathLike] = None
41 class_args: Optional[Dict[str, Any]] = None
42
43
44 class TrainingServiceConfig(ConfigBase):
45 platform: str
46
47
48 @dataclass(init=False)
49 class ExperimentConfig(ConfigBase):
50 experiment_name: Optional[str] = None
51 search_space_file: Optional[PathLike] = None
52 search_space: Any = None
53 trial_command: str
54 trial_code_directory: PathLike = '.'
55 trial_concurrency: int
56 trial_gpu_number: Optional[int] = None
57 max_experiment_duration: Optional[str] = None
58 max_trial_number: Optional[int] = None
59 nni_manager_ip: Optional[str] = None
60 use_annotation: bool = False
61 debug: bool = False
62 log_level: Optional[str] = None
63 experiment_working_directory: Optional[PathLike] = None
64 tuner_gpu_indices: Optional[Union[List[int], str]] = None
65 tuner: Optional[_AlgorithmConfig] = None
66 accessor: Optional[_AlgorithmConfig] = None
67 advisor: Optional[_AlgorithmConfig] = None
68 training_service: Union[TrainingServiceConfig, List[TrainingServiceConfig]]
69
70 def __init__(self, training_service_platform: Optional[Union[str, List[str]]] = None, **kwargs):
71 kwargs = util.case_insensitive(kwargs)
72 if training_service_platform is not None:
73 assert 'trainingservice' not in kwargs
74 kwargs['trainingservice'] = util.training_service_config_factory(platform = training_service_platform)
75 elif isinstance(kwargs.get('trainingservice'), (dict, list)):
76 # dict means a single training service
77 # list means hybrid training service
78 kwargs['trainingservice'] = util.training_service_config_factory(config = kwargs['trainingservice'])
79 else:
80 raise RuntimeError('Unsupported Training service configuration!')
81 super().__init__(**kwargs)
82
83 def validate(self, initialized_tuner: bool = False) -> None:
84 super().validate()
85 if initialized_tuner:
86 _validate_for_exp(self)
87 else:
88 _validate_for_nnictl(self)
89 if self.trial_gpu_number and hasattr(self.training_service, 'use_active_gpu'):
90 if self.training_service.use_active_gpu is None:
91 raise ValueError('Please set "use_active_gpu"')
92
93 ## End of public API ##
94
95 @property
96 def _canonical_rules(self):
97 return _canonical_rules
98
99 @property
100 def _validation_rules(self):
101 return _validation_rules
102
103
104 _canonical_rules = {
105 'search_space_file': util.canonical_path,
106 'trial_code_directory': util.canonical_path,
107 'max_experiment_duration': lambda value: f'{util.parse_time(value)}s' if value is not None else None,
108 'experiment_working_directory': util.canonical_path,
109 'tuner_gpu_indices': lambda value: [int(idx) for idx in value.split(',')] if isinstance(value, str) else value
110 }
111
112 _validation_rules = {
113 'search_space_file': lambda value: (Path(value).is_file(), f'"{value}" does not exist or is not regular file'),
114 'trial_code_directory': lambda value: (Path(value).is_dir(), f'"{value}" does not exist or is not directory'),
115 'trial_concurrency': lambda value: value > 0,
116 'trial_gpu_number': lambda value: value >= 0,
117 'max_experiment_duration': lambda value: util.parse_time(value) > 0,
118 'max_trial_number': lambda value: value > 0,
119 'log_level': lambda value: value in ["trace", "debug", "info", "warning", "error", "fatal"],
120 'tuner_gpu_indices': lambda value: all(i >= 0 for i in value) and len(value) == len(set(value)),
121 'training_service': lambda value: (type(value) is not TrainingServiceConfig, 'cannot be abstract base class')
122 }
123
124 def _validate_for_exp(config: ExperimentConfig) -> None:
125 # validate experiment for nni.Experiment, where tuner is already initialized outside
126 if config.use_annotation:
127 raise ValueError('ExperimentConfig: annotation is not supported in this mode')
128 if util.count(config.search_space, config.search_space_file) != 1:
129 raise ValueError('ExperimentConfig: search_space and search_space_file must be set one')
130 if util.count(config.tuner, config.accessor, config.advisor) != 0:
131 raise ValueError('ExperimentConfig: tuner, accessor, and advisor must not be set in for this mode')
132 if config.tuner_gpu_indices is not None:
133 raise ValueError('ExperimentConfig: tuner_gpu_indices is not supported in this mode')
134
135 def _validate_for_nnictl(config: ExperimentConfig) -> None:
136 # validate experiment for normal launching approach
137 if config.use_annotation:
138 if util.count(config.search_space, config.search_space_file) != 0:
139 raise ValueError('ExperimentConfig: search_space and search_space_file must not be set with annotationn')
140 else:
141 if util.count(config.search_space, config.search_space_file) != 1:
142 raise ValueError('ExperimentConfig: search_space and search_space_file must be set one')
143 if util.count(config.tuner, config.advisor) != 1:
144 raise ValueError('ExperimentConfig: tuner and advisor must be set one')
145
146 def _validate_algo(algo: AlgorithmConfig) -> None:
147 if algo.name is None:
148 if algo.class_name is None:
149 raise ValueError('Missing algorithm name')
150 if algo.code_directory is not None and not Path(algo.code_directory).is_dir():
151 raise ValueError(f'code_directory "{algo.code_directory}" does not exist or is not directory')
152 else:
153 if algo.class_name is not None or algo.code_directory is not None:
154 raise ValueError(f'When name is set for registered algorithm, class_name and code_directory cannot be used')
155 # TODO: verify algorithm installation and class args
156
```
Path: `nni/assessor.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 # Licensed under the MIT license.
3
4 """
5 Assessor analyzes trial's intermediate results (e.g., periodically evaluated accuracy on test dataset)
6 to tell whether this trial can be early stopped or not.
7
8 See :class:`Assessor`' specification and ``docs/en_US/assessors.rst`` for details.
9 """
10
11 from enum import Enum
12 import logging
13
14 from .recoverable import Recoverable
15
16 __all__ = ['AssessResult', 'Assessor']
17
18 _logger = logging.getLogger(__name__)
19
20
21 class AssessResult(Enum):
22 """
23 Enum class for :meth:`Assessor.assess_trial` return value.
24 """
25
26 Good = True
27 """The trial works well."""
28
29 Bad = False
30 """The trial works poorly and should be early stopped."""
31
32
33 class Assessor(Recoverable):
34 """
35 Assessor analyzes trial's intermediate results (e.g., periodically evaluated accuracy on test dataset)
36 to tell whether this trial can be early stopped or not.
37
38 This is the abstract base class for all assessors.
39 Early stopping algorithms should inherit this class and override :meth:`assess_trial` method,
40 which receives intermediate results from trials and give an assessing result.
41
42 If :meth:`assess_trial` returns :obj:`AssessResult.Bad` for a trial,
43 it hints NNI framework that the trial is likely to result in a poor final accuracy,
44 and therefore should be killed to save resource.
45
46 If an accessor want's to be notified when a trial ends, it can also override :meth:`trial_end`.
47
48 To write a new assessor, you can reference :class:`~nni.medianstop_assessor.MedianstopAssessor`'s code as an example.
49
50 See Also
51 --------
52 Builtin assessors:
53 :class:`~nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor`
54 :class:`~nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor`
55 """
56
57 def assess_trial(self, trial_job_id, trial_history):
58 """
59 Abstract method for determining whether a trial should be killed. Must override.
60
61 The NNI framework has little guarantee on ``trial_history``.
62 This method is not guaranteed to be invoked for each time ``trial_history`` get updated.
63 It is also possible that a trial's history keeps updating after receiving a bad result.
64 And if the trial failed and retried, ``trial_history`` may be inconsistent with its previous value.
65
66 The only guarantee is that ``trial_history`` is always growing.
67 It will not be empty and will always be longer than previous value.
68
69 This is an example of how :meth:`assess_trial` get invoked sequentially:
70
71 ::
72
73 trial_job_id | trial_history | return value
74 ------------ | --------------- | ------------
75 Trial_A | [1.0, 2.0] | Good
76 Trial_B | [1.5, 1.3] | Bad
77 Trial_B | [1.5, 1.3, 1.9] | Good
78 Trial_A | [0.9, 1.8, 2.3] | Good
79
80 Parameters
81 ----------
82 trial_job_id : str
83 Unique identifier of the trial.
84 trial_history : list
85 Intermediate results of this trial. The element type is decided by trial code.
86
87 Returns
88 -------
89 AssessResult
90 :obj:`AssessResult.Good` or :obj:`AssessResult.Bad`.
91 """
92 raise NotImplementedError('Assessor: assess_trial not implemented')
93
94 def trial_end(self, trial_job_id, success):
95 """
96 Abstract method invoked when a trial is completed or terminated. Do nothing by default.
97
98 Parameters
99 ----------
100 trial_job_id : str
101 Unique identifier of the trial.
102 success : bool
103 True if the trial successfully completed; False if failed or terminated.
104 """
105
106 def load_checkpoint(self):
107 """
108 Internal API under revising, not recommended for end users.
109 """
110 checkpoin_path = self.get_checkpoint_path()
111 _logger.info('Load checkpoint ignored by assessor, checkpoint path: %s', checkpoin_path)
112
113 def save_checkpoint(self):
114 """
115 Internal API under revising, not recommended for end users.
116 """
117 checkpoin_path = self.get_checkpoint_path()
118 _logger.info('Save checkpoint ignored by assessor, checkpoint path: %s', checkpoin_path)
119
120 def _on_exit(self):
121 pass
122
123 def _on_error(self):
124 pass
125
```
--- END FILES ---
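As an aside to the `Assessor` docstring quoted above, the following minimal subclass illustrates the `assess_trial`/`trial_end` contract; the stopping rule (reject a trial whose latest intermediate result trails the best latest result seen so far) is a toy heuristic invented for this sketch, not one of NNI's built-in assessors.
```python
# Toy illustration of the contract documented in nni/assessor.py; the heuristic
# itself is invented for this sketch and is not an NNI built-in.
from nni.assessor import AssessResult, Assessor

class TrailingResultAssessor(Assessor):
    def __init__(self):
        super().__init__()
        self._best_latest = None

    def assess_trial(self, trial_job_id, trial_history):
        latest = trial_history[-1]  # trial_history is documented as non-empty and growing
        if self._best_latest is None or latest >= self._best_latest:
            self._best_latest = latest
            return AssessResult.Good
        return AssessResult.Bad

    def trial_end(self, trial_job_id, success):
        pass  # optional hook; nothing to clean up in this toy example
```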
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nni/assessor.py b/nni/assessor.py
--- a/nni/assessor.py
+++ b/nni/assessor.py
@@ -43,7 +43,7 @@
it hints NNI framework that the trial is likely to result in a poor final accuracy,
and therefore should be killed to save resource.
- If an accessor want's to be notified when a trial ends, it can also override :meth:`trial_end`.
+ If an assessor want's to be notified when a trial ends, it can also override :meth:`trial_end`.
To write a new assessor, you can reference :class:`~nni.medianstop_assessor.MedianstopAssessor`'s code as an example.
diff --git a/nni/experiment/config/common.py b/nni/experiment/config/common.py
--- a/nni/experiment/config/common.py
+++ b/nni/experiment/config/common.py
@@ -63,7 +63,7 @@
experiment_working_directory: Optional[PathLike] = None
tuner_gpu_indices: Optional[Union[List[int], str]] = None
tuner: Optional[_AlgorithmConfig] = None
- accessor: Optional[_AlgorithmConfig] = None
+ assessor: Optional[_AlgorithmConfig] = None
advisor: Optional[_AlgorithmConfig] = None
training_service: Union[TrainingServiceConfig, List[TrainingServiceConfig]]
@@ -127,8 +127,8 @@
raise ValueError('ExperimentConfig: annotation is not supported in this mode')
if util.count(config.search_space, config.search_space_file) != 1:
raise ValueError('ExperimentConfig: search_space and search_space_file must be set one')
- if util.count(config.tuner, config.accessor, config.advisor) != 0:
- raise ValueError('ExperimentConfig: tuner, accessor, and advisor must not be set in for this mode')
+ if util.count(config.tuner, config.assessor, config.advisor) != 0:
+ raise ValueError('ExperimentConfig: tuner, assessor, and advisor must not be set in for this mode')
if config.tuner_gpu_indices is not None:
raise ValueError('ExperimentConfig: tuner_gpu_indices is not supported in this mode')
| {"golden_diff": "diff --git a/nni/assessor.py b/nni/assessor.py\n--- a/nni/assessor.py\n+++ b/nni/assessor.py\n@@ -43,7 +43,7 @@\n it hints NNI framework that the trial is likely to result in a poor final accuracy,\n and therefore should be killed to save resource.\n \n- If an accessor want's to be notified when a trial ends, it can also override :meth:`trial_end`.\n+ If an assessor want's to be notified when a trial ends, it can also override :meth:`trial_end`.\n \n To write a new assessor, you can reference :class:`~nni.medianstop_assessor.MedianstopAssessor`'s code as an example.\n \ndiff --git a/nni/experiment/config/common.py b/nni/experiment/config/common.py\n--- a/nni/experiment/config/common.py\n+++ b/nni/experiment/config/common.py\n@@ -63,7 +63,7 @@\n experiment_working_directory: Optional[PathLike] = None\n tuner_gpu_indices: Optional[Union[List[int], str]] = None\n tuner: Optional[_AlgorithmConfig] = None\n- accessor: Optional[_AlgorithmConfig] = None\n+ assessor: Optional[_AlgorithmConfig] = None\n advisor: Optional[_AlgorithmConfig] = None\n training_service: Union[TrainingServiceConfig, List[TrainingServiceConfig]]\n \n@@ -127,8 +127,8 @@\n raise ValueError('ExperimentConfig: annotation is not supported in this mode')\n if util.count(config.search_space, config.search_space_file) != 1:\n raise ValueError('ExperimentConfig: search_space and search_space_file must be set one')\n- if util.count(config.tuner, config.accessor, config.advisor) != 0:\n- raise ValueError('ExperimentConfig: tuner, accessor, and advisor must not be set in for this mode')\n+ if util.count(config.tuner, config.assessor, config.advisor) != 0:\n+ raise ValueError('ExperimentConfig: tuner, assessor, and advisor must not be set in for this mode')\n if config.tuner_gpu_indices is not None:\n raise ValueError('ExperimentConfig: tuner_gpu_indices is not supported in this mode')\n", "issue": "wrong field name in ExperimentConfig, \"accessor\" should be \"assessor\"\n**Environment**:\r\n- NNI version: 2.0\r\n- NNI mode (local|remote|pai): local\r\n- Client OS: Ubuntu 20.10\r\n- Server OS (for remote mode only):\r\n- Python version: 3.8.5\r\n- PyTorch/TensorFlow version: pytorch 1.9.0a+a62b0de\r\n- Is conda/virtualenv/venv used?: conda 4.9.2\r\n- Is running in Docker?: No\r\n\r\n**Log message**:\r\n - nnimanager.log: None\r\n - dispatcher.log: None\r\n - nnictl stdout and stderr:\r\n```\r\nWARNING: Validation with V1 schema failed. Trying to convert from V2 format...\r\nERROR: Conversion from v2 format failed: ValueError('ExperimentConfig: Unrecognized fields assessor')\r\nERROR: Config in v1 format validation failed. KeyError('trial')\r\n```\r\n\r\n**What issue meet, what's expected?**:\r\nIn `ExperimentConfig`, field `accessor` should be `assessor`.\r\n\r\n**How to reproduce it?**: \r\nA V2 config with 'assessor'\r\n\r\n**Additional information**:\r\nFix by renaming all `accessor` in `nni.experiment.config.common.py` to `assessor`.\r\nThere are some `accessor` in doc as well.\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Union\n\nfrom .base import ConfigBase, PathLike\nfrom . 
import util\n\n__all__ = [\n 'ExperimentConfig',\n 'AlgorithmConfig',\n 'CustomAlgorithmConfig',\n 'TrainingServiceConfig',\n]\n\n\n@dataclass(init=False)\nclass _AlgorithmConfig(ConfigBase):\n name: Optional[str] = None\n class_name: Optional[str] = None\n code_directory: Optional[PathLike] = None\n class_args: Optional[Dict[str, Any]] = None\n\n def validate(self):\n super().validate()\n _validate_algo(self)\n\n\n@dataclass(init=False)\nclass AlgorithmConfig(_AlgorithmConfig):\n name: str\n class_args: Optional[Dict[str, Any]] = None\n\n\n@dataclass(init=False)\nclass CustomAlgorithmConfig(_AlgorithmConfig):\n class_name: str\n class_directory: Optional[PathLike] = None\n class_args: Optional[Dict[str, Any]] = None\n\n\nclass TrainingServiceConfig(ConfigBase):\n platform: str\n\n\n@dataclass(init=False)\nclass ExperimentConfig(ConfigBase):\n experiment_name: Optional[str] = None\n search_space_file: Optional[PathLike] = None\n search_space: Any = None\n trial_command: str\n trial_code_directory: PathLike = '.'\n trial_concurrency: int\n trial_gpu_number: Optional[int] = None\n max_experiment_duration: Optional[str] = None\n max_trial_number: Optional[int] = None\n nni_manager_ip: Optional[str] = None\n use_annotation: bool = False\n debug: bool = False\n log_level: Optional[str] = None\n experiment_working_directory: Optional[PathLike] = None\n tuner_gpu_indices: Optional[Union[List[int], str]] = None\n tuner: Optional[_AlgorithmConfig] = None\n accessor: Optional[_AlgorithmConfig] = None\n advisor: Optional[_AlgorithmConfig] = None\n training_service: Union[TrainingServiceConfig, List[TrainingServiceConfig]]\n\n def __init__(self, training_service_platform: Optional[Union[str, List[str]]] = None, **kwargs):\n kwargs = util.case_insensitive(kwargs)\n if training_service_platform is not None:\n assert 'trainingservice' not in kwargs\n kwargs['trainingservice'] = util.training_service_config_factory(platform = training_service_platform)\n elif isinstance(kwargs.get('trainingservice'), (dict, list)):\n # dict means a single training service\n # list means hybrid training service\n kwargs['trainingservice'] = util.training_service_config_factory(config = kwargs['trainingservice'])\n else:\n raise RuntimeError('Unsupported Training service configuration!')\n super().__init__(**kwargs)\n\n def validate(self, initialized_tuner: bool = False) -> None:\n super().validate()\n if initialized_tuner:\n _validate_for_exp(self)\n else:\n _validate_for_nnictl(self)\n if self.trial_gpu_number and hasattr(self.training_service, 'use_active_gpu'):\n if self.training_service.use_active_gpu is None:\n raise ValueError('Please set \"use_active_gpu\"')\n\n## End of public API ##\n\n @property\n def _canonical_rules(self):\n return _canonical_rules\n\n @property\n def _validation_rules(self):\n return _validation_rules\n\n\n_canonical_rules = {\n 'search_space_file': util.canonical_path,\n 'trial_code_directory': util.canonical_path,\n 'max_experiment_duration': lambda value: f'{util.parse_time(value)}s' if value is not None else None,\n 'experiment_working_directory': util.canonical_path,\n 'tuner_gpu_indices': lambda value: [int(idx) for idx in value.split(',')] if isinstance(value, str) else value\n}\n\n_validation_rules = {\n 'search_space_file': lambda value: (Path(value).is_file(), f'\"{value}\" does not exist or is not regular file'),\n 'trial_code_directory': lambda value: (Path(value).is_dir(), f'\"{value}\" does not exist or is not directory'),\n 'trial_concurrency': lambda value: value > 0,\n 
'trial_gpu_number': lambda value: value >= 0,\n 'max_experiment_duration': lambda value: util.parse_time(value) > 0,\n 'max_trial_number': lambda value: value > 0,\n 'log_level': lambda value: value in [\"trace\", \"debug\", \"info\", \"warning\", \"error\", \"fatal\"],\n 'tuner_gpu_indices': lambda value: all(i >= 0 for i in value) and len(value) == len(set(value)),\n 'training_service': lambda value: (type(value) is not TrainingServiceConfig, 'cannot be abstract base class')\n}\n\ndef _validate_for_exp(config: ExperimentConfig) -> None:\n # validate experiment for nni.Experiment, where tuner is already initialized outside\n if config.use_annotation:\n raise ValueError('ExperimentConfig: annotation is not supported in this mode')\n if util.count(config.search_space, config.search_space_file) != 1:\n raise ValueError('ExperimentConfig: search_space and search_space_file must be set one')\n if util.count(config.tuner, config.accessor, config.advisor) != 0:\n raise ValueError('ExperimentConfig: tuner, accessor, and advisor must not be set in for this mode')\n if config.tuner_gpu_indices is not None:\n raise ValueError('ExperimentConfig: tuner_gpu_indices is not supported in this mode')\n\ndef _validate_for_nnictl(config: ExperimentConfig) -> None:\n # validate experiment for normal launching approach\n if config.use_annotation:\n if util.count(config.search_space, config.search_space_file) != 0:\n raise ValueError('ExperimentConfig: search_space and search_space_file must not be set with annotationn')\n else:\n if util.count(config.search_space, config.search_space_file) != 1:\n raise ValueError('ExperimentConfig: search_space and search_space_file must be set one')\n if util.count(config.tuner, config.advisor) != 1:\n raise ValueError('ExperimentConfig: tuner and advisor must be set one')\n\ndef _validate_algo(algo: AlgorithmConfig) -> None:\n if algo.name is None:\n if algo.class_name is None:\n raise ValueError('Missing algorithm name')\n if algo.code_directory is not None and not Path(algo.code_directory).is_dir():\n raise ValueError(f'code_directory \"{algo.code_directory}\" does not exist or is not directory')\n else:\n if algo.class_name is not None or algo.code_directory is not None:\n raise ValueError(f'When name is set for registered algorithm, class_name and code_directory cannot be used')\n # TODO: verify algorithm installation and class args\n", "path": "nni/experiment/config/common.py"}, {"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\n\"\"\"\nAssessor analyzes trial's intermediate results (e.g., periodically evaluated accuracy on test dataset)\nto tell whether this trial can be early stopped or not.\n\nSee :class:`Assessor`' specification and ``docs/en_US/assessors.rst`` for details.\n\"\"\"\n\nfrom enum import Enum\nimport logging\n\nfrom .recoverable import Recoverable\n\n__all__ = ['AssessResult', 'Assessor']\n\n_logger = logging.getLogger(__name__)\n\n\nclass AssessResult(Enum):\n \"\"\"\n Enum class for :meth:`Assessor.assess_trial` return value.\n \"\"\"\n\n Good = True\n \"\"\"The trial works well.\"\"\"\n\n Bad = False\n \"\"\"The trial works poorly and should be early stopped.\"\"\"\n\n\nclass Assessor(Recoverable):\n \"\"\"\n Assessor analyzes trial's intermediate results (e.g., periodically evaluated accuracy on test dataset)\n to tell whether this trial can be early stopped or not.\n\n This is the abstract base class for all assessors.\n Early stopping algorithms should inherit this class and override :meth:`assess_trial` 
method,\n which receives intermediate results from trials and give an assessing result.\n\n If :meth:`assess_trial` returns :obj:`AssessResult.Bad` for a trial,\n it hints NNI framework that the trial is likely to result in a poor final accuracy,\n and therefore should be killed to save resource.\n\n If an accessor want's to be notified when a trial ends, it can also override :meth:`trial_end`.\n\n To write a new assessor, you can reference :class:`~nni.medianstop_assessor.MedianstopAssessor`'s code as an example.\n\n See Also\n --------\n Builtin assessors:\n :class:`~nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor`\n :class:`~nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor`\n \"\"\"\n\n def assess_trial(self, trial_job_id, trial_history):\n \"\"\"\n Abstract method for determining whether a trial should be killed. Must override.\n\n The NNI framework has little guarantee on ``trial_history``.\n This method is not guaranteed to be invoked for each time ``trial_history`` get updated.\n It is also possible that a trial's history keeps updating after receiving a bad result.\n And if the trial failed and retried, ``trial_history`` may be inconsistent with its previous value.\n\n The only guarantee is that ``trial_history`` is always growing.\n It will not be empty and will always be longer than previous value.\n\n This is an example of how :meth:`assess_trial` get invoked sequentially:\n\n ::\n\n trial_job_id | trial_history | return value\n ------------ | --------------- | ------------\n Trial_A | [1.0, 2.0] | Good\n Trial_B | [1.5, 1.3] | Bad\n Trial_B | [1.5, 1.3, 1.9] | Good\n Trial_A | [0.9, 1.8, 2.3] | Good\n\n Parameters\n ----------\n trial_job_id : str\n Unique identifier of the trial.\n trial_history : list\n Intermediate results of this trial. The element type is decided by trial code.\n\n Returns\n -------\n AssessResult\n :obj:`AssessResult.Good` or :obj:`AssessResult.Bad`.\n \"\"\"\n raise NotImplementedError('Assessor: assess_trial not implemented')\n\n def trial_end(self, trial_job_id, success):\n \"\"\"\n Abstract method invoked when a trial is completed or terminated. Do nothing by default.\n\n Parameters\n ----------\n trial_job_id : str\n Unique identifier of the trial.\n success : bool\n True if the trial successfully completed; False if failed or terminated.\n \"\"\"\n\n def load_checkpoint(self):\n \"\"\"\n Internal API under revising, not recommended for end users.\n \"\"\"\n checkpoin_path = self.get_checkpoint_path()\n _logger.info('Load checkpoint ignored by assessor, checkpoint path: %s', checkpoin_path)\n\n def save_checkpoint(self):\n \"\"\"\n Internal API under revising, not recommended for end users.\n \"\"\"\n checkpoin_path = self.get_checkpoint_path()\n _logger.info('Save checkpoint ignored by assessor, checkpoint path: %s', checkpoin_path)\n\n def _on_exit(self):\n pass\n\n def _on_error(self):\n pass\n", "path": "nni/assessor.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom typing import Any, Dict, List, Optional, Union\n\nfrom .base import ConfigBase, PathLike\nfrom . 
import util\n\n__all__ = [\n 'ExperimentConfig',\n 'AlgorithmConfig',\n 'CustomAlgorithmConfig',\n 'TrainingServiceConfig',\n]\n\n\n@dataclass(init=False)\nclass _AlgorithmConfig(ConfigBase):\n name: Optional[str] = None\n class_name: Optional[str] = None\n code_directory: Optional[PathLike] = None\n class_args: Optional[Dict[str, Any]] = None\n\n def validate(self):\n super().validate()\n _validate_algo(self)\n\n\n@dataclass(init=False)\nclass AlgorithmConfig(_AlgorithmConfig):\n name: str\n class_args: Optional[Dict[str, Any]] = None\n\n\n@dataclass(init=False)\nclass CustomAlgorithmConfig(_AlgorithmConfig):\n class_name: str\n class_directory: Optional[PathLike] = None\n class_args: Optional[Dict[str, Any]] = None\n\n\nclass TrainingServiceConfig(ConfigBase):\n platform: str\n\n\n@dataclass(init=False)\nclass ExperimentConfig(ConfigBase):\n experiment_name: Optional[str] = None\n search_space_file: Optional[PathLike] = None\n search_space: Any = None\n trial_command: str\n trial_code_directory: PathLike = '.'\n trial_concurrency: int\n trial_gpu_number: Optional[int] = None\n max_experiment_duration: Optional[str] = None\n max_trial_number: Optional[int] = None\n nni_manager_ip: Optional[str] = None\n use_annotation: bool = False\n debug: bool = False\n log_level: Optional[str] = None\n experiment_working_directory: Optional[PathLike] = None\n tuner_gpu_indices: Optional[Union[List[int], str]] = None\n tuner: Optional[_AlgorithmConfig] = None\n assessor: Optional[_AlgorithmConfig] = None\n advisor: Optional[_AlgorithmConfig] = None\n training_service: Union[TrainingServiceConfig, List[TrainingServiceConfig]]\n\n def __init__(self, training_service_platform: Optional[Union[str, List[str]]] = None, **kwargs):\n kwargs = util.case_insensitive(kwargs)\n if training_service_platform is not None:\n assert 'trainingservice' not in kwargs\n kwargs['trainingservice'] = util.training_service_config_factory(platform = training_service_platform)\n elif isinstance(kwargs.get('trainingservice'), (dict, list)):\n # dict means a single training service\n # list means hybrid training service\n kwargs['trainingservice'] = util.training_service_config_factory(config = kwargs['trainingservice'])\n else:\n raise RuntimeError('Unsupported Training service configuration!')\n super().__init__(**kwargs)\n\n def validate(self, initialized_tuner: bool = False) -> None:\n super().validate()\n if initialized_tuner:\n _validate_for_exp(self)\n else:\n _validate_for_nnictl(self)\n if self.trial_gpu_number and hasattr(self.training_service, 'use_active_gpu'):\n if self.training_service.use_active_gpu is None:\n raise ValueError('Please set \"use_active_gpu\"')\n\n## End of public API ##\n\n @property\n def _canonical_rules(self):\n return _canonical_rules\n\n @property\n def _validation_rules(self):\n return _validation_rules\n\n\n_canonical_rules = {\n 'search_space_file': util.canonical_path,\n 'trial_code_directory': util.canonical_path,\n 'max_experiment_duration': lambda value: f'{util.parse_time(value)}s' if value is not None else None,\n 'experiment_working_directory': util.canonical_path,\n 'tuner_gpu_indices': lambda value: [int(idx) for idx in value.split(',')] if isinstance(value, str) else value\n}\n\n_validation_rules = {\n 'search_space_file': lambda value: (Path(value).is_file(), f'\"{value}\" does not exist or is not regular file'),\n 'trial_code_directory': lambda value: (Path(value).is_dir(), f'\"{value}\" does not exist or is not directory'),\n 'trial_concurrency': lambda value: value > 0,\n 
'trial_gpu_number': lambda value: value >= 0,\n 'max_experiment_duration': lambda value: util.parse_time(value) > 0,\n 'max_trial_number': lambda value: value > 0,\n 'log_level': lambda value: value in [\"trace\", \"debug\", \"info\", \"warning\", \"error\", \"fatal\"],\n 'tuner_gpu_indices': lambda value: all(i >= 0 for i in value) and len(value) == len(set(value)),\n 'training_service': lambda value: (type(value) is not TrainingServiceConfig, 'cannot be abstract base class')\n}\n\ndef _validate_for_exp(config: ExperimentConfig) -> None:\n # validate experiment for nni.Experiment, where tuner is already initialized outside\n if config.use_annotation:\n raise ValueError('ExperimentConfig: annotation is not supported in this mode')\n if util.count(config.search_space, config.search_space_file) != 1:\n raise ValueError('ExperimentConfig: search_space and search_space_file must be set one')\n if util.count(config.tuner, config.assessor, config.advisor) != 0:\n raise ValueError('ExperimentConfig: tuner, assessor, and advisor must not be set in for this mode')\n if config.tuner_gpu_indices is not None:\n raise ValueError('ExperimentConfig: tuner_gpu_indices is not supported in this mode')\n\ndef _validate_for_nnictl(config: ExperimentConfig) -> None:\n # validate experiment for normal launching approach\n if config.use_annotation:\n if util.count(config.search_space, config.search_space_file) != 0:\n raise ValueError('ExperimentConfig: search_space and search_space_file must not be set with annotationn')\n else:\n if util.count(config.search_space, config.search_space_file) != 1:\n raise ValueError('ExperimentConfig: search_space and search_space_file must be set one')\n if util.count(config.tuner, config.advisor) != 1:\n raise ValueError('ExperimentConfig: tuner and advisor must be set one')\n\ndef _validate_algo(algo: AlgorithmConfig) -> None:\n if algo.name is None:\n if algo.class_name is None:\n raise ValueError('Missing algorithm name')\n if algo.code_directory is not None and not Path(algo.code_directory).is_dir():\n raise ValueError(f'code_directory \"{algo.code_directory}\" does not exist or is not directory')\n else:\n if algo.class_name is not None or algo.code_directory is not None:\n raise ValueError(f'When name is set for registered algorithm, class_name and code_directory cannot be used')\n # TODO: verify algorithm installation and class args\n", "path": "nni/experiment/config/common.py"}, {"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\n\"\"\"\nAssessor analyzes trial's intermediate results (e.g., periodically evaluated accuracy on test dataset)\nto tell whether this trial can be early stopped or not.\n\nSee :class:`Assessor`' specification and ``docs/en_US/assessors.rst`` for details.\n\"\"\"\n\nfrom enum import Enum\nimport logging\n\nfrom .recoverable import Recoverable\n\n__all__ = ['AssessResult', 'Assessor']\n\n_logger = logging.getLogger(__name__)\n\n\nclass AssessResult(Enum):\n \"\"\"\n Enum class for :meth:`Assessor.assess_trial` return value.\n \"\"\"\n\n Good = True\n \"\"\"The trial works well.\"\"\"\n\n Bad = False\n \"\"\"The trial works poorly and should be early stopped.\"\"\"\n\n\nclass Assessor(Recoverable):\n \"\"\"\n Assessor analyzes trial's intermediate results (e.g., periodically evaluated accuracy on test dataset)\n to tell whether this trial can be early stopped or not.\n\n This is the abstract base class for all assessors.\n Early stopping algorithms should inherit this class and override :meth:`assess_trial` 
method,\n which receives intermediate results from trials and give an assessing result.\n\n If :meth:`assess_trial` returns :obj:`AssessResult.Bad` for a trial,\n it hints NNI framework that the trial is likely to result in a poor final accuracy,\n and therefore should be killed to save resource.\n\n If an assessor want's to be notified when a trial ends, it can also override :meth:`trial_end`.\n\n To write a new assessor, you can reference :class:`~nni.medianstop_assessor.MedianstopAssessor`'s code as an example.\n\n See Also\n --------\n Builtin assessors:\n :class:`~nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor`\n :class:`~nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor`\n \"\"\"\n\n def assess_trial(self, trial_job_id, trial_history):\n \"\"\"\n Abstract method for determining whether a trial should be killed. Must override.\n\n The NNI framework has little guarantee on ``trial_history``.\n This method is not guaranteed to be invoked for each time ``trial_history`` get updated.\n It is also possible that a trial's history keeps updating after receiving a bad result.\n And if the trial failed and retried, ``trial_history`` may be inconsistent with its previous value.\n\n The only guarantee is that ``trial_history`` is always growing.\n It will not be empty and will always be longer than previous value.\n\n This is an example of how :meth:`assess_trial` get invoked sequentially:\n\n ::\n\n trial_job_id | trial_history | return value\n ------------ | --------------- | ------------\n Trial_A | [1.0, 2.0] | Good\n Trial_B | [1.5, 1.3] | Bad\n Trial_B | [1.5, 1.3, 1.9] | Good\n Trial_A | [0.9, 1.8, 2.3] | Good\n\n Parameters\n ----------\n trial_job_id : str\n Unique identifier of the trial.\n trial_history : list\n Intermediate results of this trial. The element type is decided by trial code.\n\n Returns\n -------\n AssessResult\n :obj:`AssessResult.Good` or :obj:`AssessResult.Bad`.\n \"\"\"\n raise NotImplementedError('Assessor: assess_trial not implemented')\n\n def trial_end(self, trial_job_id, success):\n \"\"\"\n Abstract method invoked when a trial is completed or terminated. Do nothing by default.\n\n Parameters\n ----------\n trial_job_id : str\n Unique identifier of the trial.\n success : bool\n True if the trial successfully completed; False if failed or terminated.\n \"\"\"\n\n def load_checkpoint(self):\n \"\"\"\n Internal API under revising, not recommended for end users.\n \"\"\"\n checkpoin_path = self.get_checkpoint_path()\n _logger.info('Load checkpoint ignored by assessor, checkpoint path: %s', checkpoin_path)\n\n def save_checkpoint(self):\n \"\"\"\n Internal API under revising, not recommended for end users.\n \"\"\"\n checkpoin_path = self.get_checkpoint_path()\n _logger.info('Save checkpoint ignored by assessor, checkpoint path: %s', checkpoin_path)\n\n def _on_exit(self):\n pass\n\n def _on_error(self):\n pass\n", "path": "nni/assessor.py"}]} | 3,674 | 499 |
gh_patches_debug_29713 | rasdani/github-patches | git_diff | biopython__biopython-3653 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
editable/develop install warning: You may be importing Biopython from inside the source tree.
### Setup
I am reporting a problem with Biopython version, Python version, and operating
system as follows:
```pycon
$ python
Python 3.7.4 (default, Aug 13 2019, 15:17:50)
[Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys; print(sys.version)
3.7.4 (default, Aug 13 2019, 15:17:50)
[Clang 4.0.1 (tags/RELEASE_401/final)]
>>> import platform; print(platform.python_implementation()); print(platform.platform())
CPython
Darwin-18.7.0-x86_64-i386-64bit
>>> import Bio; print(Bio.__version__)
/Users/xxx/repositories/biopython/Bio/__init__.py:128: BiopythonWarning: You may be importing Biopython from inside the source tree. This is bad practice and might lead to downstream issues. In particular, you might encounter ImportErrors due to missing compiled C extensions. We recommend that you try running your code from outside the source tree. If you are outside the source tree then you have a setup.py file in an unexpected directory: /Users/xxx/repositories/biopython.
format(_parent_dir), BiopythonWarning)
1.75.dev0
```
(*Please copy and run the above in your Python, and copy-and-paste the output*)
### Expected behaviour
No warning ``BiopythonWarning: You may be importing Biopython from inside the source tree. ...``
### Actual behaviour
Noisy warning as above.
### Steps to reproduce
Using pip to install in editable (develop) mode:
```
$ pip install -h
...
-e, --editable <path/url> Install a project in editable mode (i.e. setuptools "develop mode") from a local
project path or a VCS url.
...
```
```
$ git clone [email protected]:biopython/biopython.git
$ cd biopython
$ pip install -e .
```
This is an unfortunate side effect of the changes in #2007, intended to help with confusing messages when C code was not compiled.
--- END ISSUE ---
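One possible remedy, sketched under the assumption that an editable (`pip install -e .`) install leaves a `biopython.egg-link` marker in a site-packages directory (setuptools' develop-mode convention), is to check for that marker before emitting the warning; the helper name below is invented for this sketch.
```python
# Sketch: skip the source-tree warning when Biopython looks like an editable
# (develop-mode) install. The biopython.egg-link file name is an assumption
# based on setuptools' develop-mode convention.
import os
import site

def _editable_install_detected():
    candidates = list(site.getsitepackages()) + [site.getusersitepackages()]
    return any(
        os.path.isfile(os.path.join(path, "biopython.egg-link")) for path in candidates
    )

if __name__ == "__main__":
    print("editable install detected:", _editable_install_detected())
```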
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Bio/__init__.py`
Content:
```
1 # Copyright 1999-2003 by Jeffrey Chang. All rights reserved.
2 #
3 # This file is part of the Biopython distribution and governed by your
4 # choice of the "Biopython License Agreement" or the "BSD 3-Clause License".
5 # Please see the LICENSE file that should have been included as part of this
6 # package.
7 """Collection of modules for dealing with biological data in Python.
8
9 The Biopython Project is an international association of developers
10 of freely available Python tools for computational molecular biology.
11
12 https://biopython.org
13 """
14
15 import os
16 import warnings
17
18 __version__ = "1.80.dev0"
19
20
21 class MissingExternalDependencyError(Exception):
22 """Missing an external dependency.
23
24 Used for things like missing command line tools. Important for our unit
25 tests to allow skipping tests with missing external dependencies.
26 """
27
28
29 class MissingPythonDependencyError(MissingExternalDependencyError, ImportError):
30 """Missing an external python dependency (subclass of ImportError).
31
32 Used for missing Python modules (rather than just a typical ImportError).
33 Important for our unit tests to allow skipping tests with missing external
34 python dependencies, while also allowing the exception to be caught as an
35 ImportError.
36 """
37
38
39 class StreamModeError(ValueError):
40 """Incorrect stream mode (text vs binary).
41
42 This error should be raised when a stream (file or file-like object)
43 argument is in text mode while the receiving function expects binary mode,
44 or vice versa.
45 """
46
47
48 class BiopythonWarning(Warning):
49 """Biopython warning.
50
51 Biopython should use this warning (or subclasses of it), making it easy to
52 silence all our warning messages should you wish to:
53
54 >>> import warnings
55 >>> from Bio import BiopythonWarning
56 >>> warnings.simplefilter('ignore', BiopythonWarning)
57
58 Consult the warnings module documentation for more details.
59 """
60
61
62 class BiopythonParserWarning(BiopythonWarning):
63 """Biopython parser warning.
64
65 Some in-valid data files cannot be parsed and will trigger an exception.
66 Where a reasonable interpretation is possible, Biopython will issue this
67 warning to indicate a potential problem. To silence these warnings, use:
68
69 >>> import warnings
70 >>> from Bio import BiopythonParserWarning
71 >>> warnings.simplefilter('ignore', BiopythonParserWarning)
72
73 Consult the warnings module documentation for more details.
74 """
75
76
77 class BiopythonDeprecationWarning(BiopythonWarning):
78 """Biopython deprecation warning.
79
80 Biopython uses this warning instead of the built in DeprecationWarning
81 since those are ignored by default since Python 2.7.
82
83 To silence all our deprecation warning messages, use:
84
85 >>> import warnings
86 >>> from Bio import BiopythonDeprecationWarning
87 >>> warnings.simplefilter('ignore', BiopythonDeprecationWarning)
88
89 Code marked as deprecated is likely to be removed in a future version
90 of Biopython. To avoid removal of this code, please contact the Biopython
91 developers via the mailing list or GitHub.
92 """
93
94
95 class BiopythonExperimentalWarning(BiopythonWarning):
96 """Biopython experimental code warning.
97
98 Biopython uses this warning for experimental code ('alpha' or 'beta'
99 level code) which is released as part of the standard releases to mark
100 sub-modules or functions for early adopters to test & give feedback.
101
102 Code issuing this warning is likely to change (or even be removed) in
103 a subsequent release of Biopython. Such code should NOT be used for
104 production/stable code. It should only be used if:
105
106 - You are running the latest release of Biopython, or ideally the
107 latest code from our repository.
108 - You are subscribed to the biopython-dev mailing list to provide
109 feedback on this code, and to be alerted of changes to it.
110
111 If all goes well, experimental code would be promoted to stable in
112 a subsequent release, and this warning removed from it.
113 """
114
115
116 _parent_dir = os.path.dirname(os.path.dirname(__file__))
117 if os.path.exists(os.path.join(_parent_dir, "setup.py")):
118 warnings.warn(
119 "You may be importing Biopython from inside the source tree."
120 " This is bad practice and might lead to downstream issues."
121 " In particular, you might encounter ImportErrors due to"
122 " missing compiled C extensions. We recommend that you"
123 " try running your code from outside the source tree."
124 " If you are outside the source tree then you have a"
125 " setup.py file in an unexpected directory: " + _parent_dir,
126 BiopythonWarning,
127 )
128 # See #PR 2007 and issue #1991 for discussion on this warning:
129 # https://github.com/biopython/biopython/pull/2007
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Bio/__init__.py b/Bio/__init__.py
--- a/Bio/__init__.py
+++ b/Bio/__init__.py
@@ -115,15 +115,33 @@
_parent_dir = os.path.dirname(os.path.dirname(__file__))
if os.path.exists(os.path.join(_parent_dir, "setup.py")):
- warnings.warn(
- "You may be importing Biopython from inside the source tree."
- " This is bad practice and might lead to downstream issues."
- " In particular, you might encounter ImportErrors due to"
- " missing compiled C extensions. We recommend that you"
- " try running your code from outside the source tree."
- " If you are outside the source tree then you have a"
- " setup.py file in an unexpected directory: " + _parent_dir,
- BiopythonWarning,
- )
-# See #PR 2007 and issue #1991 for discussion on this warning:
-# https://github.com/biopython/biopython/pull/2007
+ # Looks like we are running from our source directory,
+ # a bad idea except if installed in development mode.
+ #
+ # See https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html
+ # Do we have .../site-packages/biopython.egg-link present?
+ #
+ # Note "pip install -e ." currently calls setuptools internally
+ import site
+
+ _dev_mode = False
+ for _p in site.getsitepackages():
+ if os.path.isfile(os.path.join(_p, "biopython.egg-link")):
+ _dev_mode = True
+ break
+ # Also check the user specific site packages
+ if not _dev_mode and os.path.isfile(
+ os.path.join(site.getusersitepackages(), "biopython.egg-link")
+ ):
+ _dev_mode = True
+ if not _dev_mode:
+ warnings.warn(
+ "You may be importing Biopython from inside the source tree."
+ " This is bad practice and might lead to downstream issues."
+ " In particular, you might encounter ImportErrors due to"
+ " missing compiled C extensions. We recommend that you"
+ " try running your code from outside the source tree."
+ " If you are outside the source tree then you have a"
+ " setup.py file in an unexpected directory: " + _parent_dir,
+ BiopythonWarning,
+ )
| {"golden_diff": "diff --git a/Bio/__init__.py b/Bio/__init__.py\n--- a/Bio/__init__.py\n+++ b/Bio/__init__.py\n@@ -115,15 +115,33 @@\n \n _parent_dir = os.path.dirname(os.path.dirname(__file__))\n if os.path.exists(os.path.join(_parent_dir, \"setup.py\")):\n- warnings.warn(\n- \"You may be importing Biopython from inside the source tree.\"\n- \" This is bad practice and might lead to downstream issues.\"\n- \" In particular, you might encounter ImportErrors due to\"\n- \" missing compiled C extensions. We recommend that you\"\n- \" try running your code from outside the source tree.\"\n- \" If you are outside the source tree then you have a\"\n- \" setup.py file in an unexpected directory: \" + _parent_dir,\n- BiopythonWarning,\n- )\n-# See #PR 2007 and issue #1991 for discussion on this warning:\n-# https://github.com/biopython/biopython/pull/2007\n+ # Looks like we are running from our source directory,\n+ # a bad idea except if installed in development mode.\n+ #\n+ # See https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html\n+ # Do we have .../site-packages/biopython.egg-link present?\n+ #\n+ # Note \"pip install -e .\" currently calls setuptools internally\n+ import site\n+\n+ _dev_mode = False\n+ for _p in site.getsitepackages():\n+ if os.path.isfile(os.path.join(_p, \"biopython.egg-link\")):\n+ _dev_mode = True\n+ break\n+ # Also check the user specific site packages\n+ if not _dev_mode and os.path.isfile(\n+ os.path.join(site.getusersitepackages(), \"biopython.egg-link\")\n+ ):\n+ _dev_mode = True\n+ if not _dev_mode:\n+ warnings.warn(\n+ \"You may be importing Biopython from inside the source tree.\"\n+ \" This is bad practice and might lead to downstream issues.\"\n+ \" In particular, you might encounter ImportErrors due to\"\n+ \" missing compiled C extensions. We recommend that you\"\n+ \" try running your code from outside the source tree.\"\n+ \" If you are outside the source tree then you have a\"\n+ \" setup.py file in an unexpected directory: \" + _parent_dir,\n+ BiopythonWarning,\n+ )\n", "issue": "editable/develop install warning: You may be importing Biopython from inside the source tree. \n### Setup\r\n\r\nI am reporting a problem with Biopython version, Python version, and operating\r\nsystem as follows:\r\n\r\n```pycon\r\n$ python\r\nPython 3.7.4 (default, Aug 13 2019, 15:17:50) \r\n[Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import sys; print(sys.version)\r\n3.7.4 (default, Aug 13 2019, 15:17:50) \r\n[Clang 4.0.1 (tags/RELEASE_401/final)]\r\n>>> import platform; print(platform.python_implementation()); print(platform.platform())\r\nCPython\r\nDarwin-18.7.0-x86_64-i386-64bit\r\n>>> import Bio; print(Bio.__version__)\r\n/Users/xxx/repositories/biopython/Bio/__init__.py:128: BiopythonWarning: You may be importing Biopython from inside the source tree. This is bad practice and might lead to downstream issues. In particular, you might encounter ImportErrors due to missing compiled C extensions. We recommend that you try running your code from outside the source tree. 
If you are outside the source tree then you have a setup.py file in an unexpected directory: /Users/xxx/repositories/biopython.\r\n format(_parent_dir), BiopythonWarning)\r\n1.75.dev0\r\n```\r\n\r\n(*Please copy and run the above in your Python, and copy-and-paste the output*)\r\n\r\n### Expected behaviour\r\n\r\nNo warning ``BiopythonWarning: You may be importing Biopython from inside the source tree. ...``\r\n\r\n### Actual behaviour\r\n\r\nNoisy warning as above.\r\n\r\n### Steps to reproduce\r\n\r\nUsing pip to install in editable (develop) mode:\r\n\r\n```\r\n$ pip install -h\r\n...\r\n -e, --editable <path/url> Install a project in editable mode (i.e. setuptools \"develop mode\") from a local\r\n project path or a VCS url.\r\n...\r\n```\r\n\r\n```\r\n$ git clone [email protected]:biopython/biopython.git\r\n$ cd biopython\r\n$ pip install -e .\r\n```\r\n\r\nThis is an unfortunate side effect of the changes in #2007, intended to help with confusing messages when C code was not compiled.\n", "before_files": [{"content": "# Copyright 1999-2003 by Jeffrey Chang. All rights reserved.\n#\n# This file is part of the Biopython distribution and governed by your\n# choice of the \"Biopython License Agreement\" or the \"BSD 3-Clause License\".\n# Please see the LICENSE file that should have been included as part of this\n# package.\n\"\"\"Collection of modules for dealing with biological data in Python.\n\nThe Biopython Project is an international association of developers\nof freely available Python tools for computational molecular biology.\n\nhttps://biopython.org\n\"\"\"\n\nimport os\nimport warnings\n\n__version__ = \"1.80.dev0\"\n\n\nclass MissingExternalDependencyError(Exception):\n \"\"\"Missing an external dependency.\n\n Used for things like missing command line tools. Important for our unit\n tests to allow skipping tests with missing external dependencies.\n \"\"\"\n\n\nclass MissingPythonDependencyError(MissingExternalDependencyError, ImportError):\n \"\"\"Missing an external python dependency (subclass of ImportError).\n\n Used for missing Python modules (rather than just a typical ImportError).\n Important for our unit tests to allow skipping tests with missing external\n python dependencies, while also allowing the exception to be caught as an\n ImportError.\n \"\"\"\n\n\nclass StreamModeError(ValueError):\n \"\"\"Incorrect stream mode (text vs binary).\n\n This error should be raised when a stream (file or file-like object)\n argument is in text mode while the receiving function expects binary mode,\n or vice versa.\n \"\"\"\n\n\nclass BiopythonWarning(Warning):\n \"\"\"Biopython warning.\n\n Biopython should use this warning (or subclasses of it), making it easy to\n silence all our warning messages should you wish to:\n\n >>> import warnings\n >>> from Bio import BiopythonWarning\n >>> warnings.simplefilter('ignore', BiopythonWarning)\n\n Consult the warnings module documentation for more details.\n \"\"\"\n\n\nclass BiopythonParserWarning(BiopythonWarning):\n \"\"\"Biopython parser warning.\n\n Some in-valid data files cannot be parsed and will trigger an exception.\n Where a reasonable interpretation is possible, Biopython will issue this\n warning to indicate a potential problem. 
To silence these warnings, use:\n\n >>> import warnings\n >>> from Bio import BiopythonParserWarning\n >>> warnings.simplefilter('ignore', BiopythonParserWarning)\n\n Consult the warnings module documentation for more details.\n \"\"\"\n\n\nclass BiopythonDeprecationWarning(BiopythonWarning):\n \"\"\"Biopython deprecation warning.\n\n Biopython uses this warning instead of the built in DeprecationWarning\n since those are ignored by default since Python 2.7.\n\n To silence all our deprecation warning messages, use:\n\n >>> import warnings\n >>> from Bio import BiopythonDeprecationWarning\n >>> warnings.simplefilter('ignore', BiopythonDeprecationWarning)\n\n Code marked as deprecated is likely to be removed in a future version\n of Biopython. To avoid removal of this code, please contact the Biopython\n developers via the mailing list or GitHub.\n \"\"\"\n\n\nclass BiopythonExperimentalWarning(BiopythonWarning):\n \"\"\"Biopython experimental code warning.\n\n Biopython uses this warning for experimental code ('alpha' or 'beta'\n level code) which is released as part of the standard releases to mark\n sub-modules or functions for early adopters to test & give feedback.\n\n Code issuing this warning is likely to change (or even be removed) in\n a subsequent release of Biopython. Such code should NOT be used for\n production/stable code. It should only be used if:\n\n - You are running the latest release of Biopython, or ideally the\n latest code from our repository.\n - You are subscribed to the biopython-dev mailing list to provide\n feedback on this code, and to be alerted of changes to it.\n\n If all goes well, experimental code would be promoted to stable in\n a subsequent release, and this warning removed from it.\n \"\"\"\n\n\n_parent_dir = os.path.dirname(os.path.dirname(__file__))\nif os.path.exists(os.path.join(_parent_dir, \"setup.py\")):\n warnings.warn(\n \"You may be importing Biopython from inside the source tree.\"\n \" This is bad practice and might lead to downstream issues.\"\n \" In particular, you might encounter ImportErrors due to\"\n \" missing compiled C extensions. We recommend that you\"\n \" try running your code from outside the source tree.\"\n \" If you are outside the source tree then you have a\"\n \" setup.py file in an unexpected directory: \" + _parent_dir,\n BiopythonWarning,\n )\n# See #PR 2007 and issue #1991 for discussion on this warning:\n# https://github.com/biopython/biopython/pull/2007\n", "path": "Bio/__init__.py"}], "after_files": [{"content": "# Copyright 1999-2003 by Jeffrey Chang. All rights reserved.\n#\n# This file is part of the Biopython distribution and governed by your\n# choice of the \"Biopython License Agreement\" or the \"BSD 3-Clause License\".\n# Please see the LICENSE file that should have been included as part of this\n# package.\n\"\"\"Collection of modules for dealing with biological data in Python.\n\nThe Biopython Project is an international association of developers\nof freely available Python tools for computational molecular biology.\n\nhttps://biopython.org\n\"\"\"\n\nimport os\nimport warnings\n\n__version__ = \"1.80.dev0\"\n\n\nclass MissingExternalDependencyError(Exception):\n \"\"\"Missing an external dependency.\n\n Used for things like missing command line tools. 
Important for our unit\n tests to allow skipping tests with missing external dependencies.\n \"\"\"\n\n\nclass MissingPythonDependencyError(MissingExternalDependencyError, ImportError):\n \"\"\"Missing an external python dependency (subclass of ImportError).\n\n Used for missing Python modules (rather than just a typical ImportError).\n Important for our unit tests to allow skipping tests with missing external\n python dependencies, while also allowing the exception to be caught as an\n ImportError.\n \"\"\"\n\n\nclass StreamModeError(ValueError):\n \"\"\"Incorrect stream mode (text vs binary).\n\n This error should be raised when a stream (file or file-like object)\n argument is in text mode while the receiving function expects binary mode,\n or vice versa.\n \"\"\"\n\n\nclass BiopythonWarning(Warning):\n \"\"\"Biopython warning.\n\n Biopython should use this warning (or subclasses of it), making it easy to\n silence all our warning messages should you wish to:\n\n >>> import warnings\n >>> from Bio import BiopythonWarning\n >>> warnings.simplefilter('ignore', BiopythonWarning)\n\n Consult the warnings module documentation for more details.\n \"\"\"\n\n\nclass BiopythonParserWarning(BiopythonWarning):\n \"\"\"Biopython parser warning.\n\n Some in-valid data files cannot be parsed and will trigger an exception.\n Where a reasonable interpretation is possible, Biopython will issue this\n warning to indicate a potential problem. To silence these warnings, use:\n\n >>> import warnings\n >>> from Bio import BiopythonParserWarning\n >>> warnings.simplefilter('ignore', BiopythonParserWarning)\n\n Consult the warnings module documentation for more details.\n \"\"\"\n\n\nclass BiopythonDeprecationWarning(BiopythonWarning):\n \"\"\"Biopython deprecation warning.\n\n Biopython uses this warning instead of the built in DeprecationWarning\n since those are ignored by default since Python 2.7.\n\n To silence all our deprecation warning messages, use:\n\n >>> import warnings\n >>> from Bio import BiopythonDeprecationWarning\n >>> warnings.simplefilter('ignore', BiopythonDeprecationWarning)\n\n Code marked as deprecated is likely to be removed in a future version\n of Biopython. To avoid removal of this code, please contact the Biopython\n developers via the mailing list or GitHub.\n \"\"\"\n\n\nclass BiopythonExperimentalWarning(BiopythonWarning):\n \"\"\"Biopython experimental code warning.\n\n Biopython uses this warning for experimental code ('alpha' or 'beta'\n level code) which is released as part of the standard releases to mark\n sub-modules or functions for early adopters to test & give feedback.\n\n Code issuing this warning is likely to change (or even be removed) in\n a subsequent release of Biopython. Such code should NOT be used for\n production/stable code. 
It should only be used if:\n\n - You are running the latest release of Biopython, or ideally the\n latest code from our repository.\n - You are subscribed to the biopython-dev mailing list to provide\n feedback on this code, and to be alerted of changes to it.\n\n If all goes well, experimental code would be promoted to stable in\n a subsequent release, and this warning removed from it.\n \"\"\"\n\n\n_parent_dir = os.path.dirname(os.path.dirname(__file__))\nif os.path.exists(os.path.join(_parent_dir, \"setup.py\")):\n # Looks like we are running from our source directory,\n # a bad idea except if installed in development mode.\n #\n # See https://setuptools.readthedocs.io/en/latest/userguide/development_mode.html\n # Do we have .../site-packages/biopython.egg-link present?\n #\n # Note \"pip install -e .\" currently calls setuptools internally\n import site\n\n _dev_mode = False\n for _p in site.getsitepackages():\n if os.path.isfile(os.path.join(_p, \"biopython.egg-link\")):\n _dev_mode = True\n break\n # Also check the user specific site packages\n if not _dev_mode and os.path.isfile(\n os.path.join(site.getusersitepackages(), \"biopython.egg-link\")\n ):\n _dev_mode = True\n if not _dev_mode:\n warnings.warn(\n \"You may be importing Biopython from inside the source tree.\"\n \" This is bad practice and might lead to downstream issues.\"\n \" In particular, you might encounter ImportErrors due to\"\n \" missing compiled C extensions. We recommend that you\"\n \" try running your code from outside the source tree.\"\n \" If you are outside the source tree then you have a\"\n \" setup.py file in an unexpected directory: \" + _parent_dir,\n BiopythonWarning,\n )\n", "path": "Bio/__init__.py"}]} | 2,154 | 567 |
gh_patches_debug_7086 | rasdani/github-patches | git_diff | PaddlePaddle__models-2456 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
There is a small error in face_detection reader
https://github.com/PaddlePaddle/models/blob/55138a40a219975bcbbeb6c1edef5284de721f72/PaddleCV/face_detection/reader.py#L312
the code should be
```
img = img.convert('RGB')
```
--- END ISSUE ---
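As an aside, the one-line change the issue asks for amounts to the following self-contained sketch; the `load_rgb` helper name is hypothetical and only Pillow is assumed to be installed:

```python
from PIL import Image

def load_rgb(image_path):
    """Open an image and promote grayscale ('L' mode) inputs to RGB."""
    img = Image.open(image_path)
    if img.mode == 'L':
        # Call convert() on `img` itself, not on an unrelated variable
        # from another function's scope.
        img = img.convert('RGB')
    return img
```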
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PaddleCV/face_detection/reader.py`
Content:
```
1 # Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16 from __future__ import division
17 from __future__ import print_function
18
19 from PIL import Image
20 from PIL import ImageDraw
21 import numpy as np
22 import xml.etree.ElementTree
23 import os
24 import time
25 import copy
26 import random
27 import cv2
28 import six
29 import math
30 from itertools import islice
31 import paddle
32 import image_util
33
34
35 class Settings(object):
36 def __init__(self,
37 dataset=None,
38 data_dir=None,
39 label_file=None,
40 resize_h=None,
41 resize_w=None,
42 mean_value=[104., 117., 123.],
43 apply_distort=True,
44 apply_expand=True,
45 ap_version='11point',
46 toy=0):
47 self.dataset = dataset
48 self.ap_version = ap_version
49 self.toy = toy
50 self.data_dir = data_dir
51 self.apply_distort = apply_distort
52 self.apply_expand = apply_expand
53 self.resize_height = resize_h
54 self.resize_width = resize_w
55 self.img_mean = np.array(mean_value)[:, np.newaxis, np.newaxis].astype(
56 'float32')
57 self.expand_prob = 0.5
58 self.expand_max_ratio = 4
59 self.hue_prob = 0.5
60 self.hue_delta = 18
61 self.contrast_prob = 0.5
62 self.contrast_delta = 0.5
63 self.saturation_prob = 0.5
64 self.saturation_delta = 0.5
65 self.brightness_prob = 0.5
66 # _brightness_delta is the normalized value by 256
67 self.brightness_delta = 0.125
68 self.scale = 0.007843 # 1 / 127.5
69 self.data_anchor_sampling_prob = 0.5
70 self.min_face_size = 8.0
71
72
73 def to_chw_bgr(image):
74 """
75 Transpose image from HWC to CHW and from RBG to BGR.
76 Args:
77 image (np.array): an image with HWC and RBG layout.
78 """
79 # HWC to CHW
80 if len(image.shape) == 3:
81 image = np.swapaxes(image, 1, 2)
82 image = np.swapaxes(image, 1, 0)
83 # RBG to BGR
84 image = image[[2, 1, 0], :, :]
85 return image
86
87
88 def preprocess(img, bbox_labels, mode, settings, image_path):
89 img_width, img_height = img.size
90 sampled_labels = bbox_labels
91 if mode == 'train':
92 if settings.apply_distort:
93 img = image_util.distort_image(img, settings)
94 if settings.apply_expand:
95 img, bbox_labels, img_width, img_height = image_util.expand_image(
96 img, bbox_labels, img_width, img_height, settings)
97
98 # sampling
99 batch_sampler = []
100
101 prob = np.random.uniform(0., 1.)
102 if prob > settings.data_anchor_sampling_prob:
103 scale_array = np.array([16, 32, 64, 128, 256, 512])
104 batch_sampler.append(
105 image_util.sampler(1, 10, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.2,
106 0.0, True))
107 sampled_bbox = image_util.generate_batch_random_samples(
108 batch_sampler, bbox_labels, img_width, img_height, scale_array,
109 settings.resize_width, settings.resize_height)
110 img = np.array(img)
111 if len(sampled_bbox) > 0:
112 idx = int(np.random.uniform(0, len(sampled_bbox)))
113 img, sampled_labels = image_util.crop_image_sampling(
114 img, bbox_labels, sampled_bbox[idx], img_width, img_height,
115 settings.resize_width, settings.resize_height,
116 settings.min_face_size)
117
118 img = img.astype('uint8')
119 img = Image.fromarray(img)
120
121 else:
122 # hard-code here
123 batch_sampler.append(
124 image_util.sampler(1, 50, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,
125 0.0, True))
126 batch_sampler.append(
127 image_util.sampler(1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,
128 0.0, True))
129 batch_sampler.append(
130 image_util.sampler(1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,
131 0.0, True))
132 batch_sampler.append(
133 image_util.sampler(1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,
134 0.0, True))
135 batch_sampler.append(
136 image_util.sampler(1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,
137 0.0, True))
138 sampled_bbox = image_util.generate_batch_samples(
139 batch_sampler, bbox_labels, img_width, img_height)
140
141 img = np.array(img)
142 if len(sampled_bbox) > 0:
143 idx = int(np.random.uniform(0, len(sampled_bbox)))
144 img, sampled_labels = image_util.crop_image(
145 img, bbox_labels, sampled_bbox[idx], img_width, img_height,
146 settings.resize_width, settings.resize_height,
147 settings.min_face_size)
148
149 img = Image.fromarray(img)
150 interp_mode = [
151 Image.BILINEAR, Image.HAMMING, Image.NEAREST, Image.BICUBIC,
152 Image.LANCZOS
153 ]
154 interp_indx = np.random.randint(0, 5)
155
156 img = img.resize(
157 (settings.resize_width, settings.resize_height),
158 resample=interp_mode[interp_indx])
159 img = np.array(img)
160
161 if mode == 'train':
162 mirror = int(np.random.uniform(0, 2))
163 if mirror == 1:
164 img = img[:, ::-1, :]
165 for i in six.moves.xrange(len(sampled_labels)):
166 tmp = sampled_labels[i][1]
167 sampled_labels[i][1] = 1 - sampled_labels[i][3]
168 sampled_labels[i][3] = 1 - tmp
169
170 img = to_chw_bgr(img)
171 img = img.astype('float32')
172 img -= settings.img_mean
173 img = img * settings.scale
174 return img, sampled_labels
175
176
177 def load_file_list(input_txt):
178 with open(input_txt, 'r') as f_dir:
179 lines_input_txt = f_dir.readlines()
180
181 file_dict = {}
182 num_class = 0
183 for i in range(len(lines_input_txt)):
184 line_txt = lines_input_txt[i].strip('\n\t\r')
185 if '--' in line_txt:
186 if i != 0:
187 num_class += 1
188 file_dict[num_class] = []
189 file_dict[num_class].append(line_txt)
190 if '--' not in line_txt:
191 if len(line_txt) > 6:
192 split_str = line_txt.split(' ')
193 x1_min = float(split_str[0])
194 y1_min = float(split_str[1])
195 x2_max = float(split_str[2])
196 y2_max = float(split_str[3])
197 line_txt = str(x1_min) + ' ' + str(y1_min) + ' ' + str(
198 x2_max) + ' ' + str(y2_max)
199 file_dict[num_class].append(line_txt)
200 else:
201 file_dict[num_class].append(line_txt)
202
203 return list(file_dict.values())
204
205
206 def expand_bboxes(bboxes,
207 expand_left=2.,
208 expand_up=2.,
209 expand_right=2.,
210 expand_down=2.):
211 """
212 Expand bboxes, expand 2 times by defalut.
213 """
214 expand_boxes = []
215 for bbox in bboxes:
216 xmin = bbox[0]
217 ymin = bbox[1]
218 xmax = bbox[2]
219 ymax = bbox[3]
220 w = xmax - xmin
221 h = ymax - ymin
222 ex_xmin = max(xmin - w / expand_left, 0.)
223 ex_ymin = max(ymin - h / expand_up, 0.)
224 ex_xmax = min(xmax + w / expand_right, 1.)
225 ex_ymax = min(ymax + h / expand_down, 1.)
226 expand_boxes.append([ex_xmin, ex_ymin, ex_xmax, ex_ymax])
227 return expand_boxes
228
229
230 def train_generator(settings, file_list, batch_size, shuffle=True):
231 def reader():
232 if shuffle:
233 np.random.shuffle(file_list)
234 batch_out = []
235 for item in file_list:
236 image_name = item[0]
237 image_path = os.path.join(settings.data_dir, image_name)
238 im = Image.open(image_path)
239 if im.mode == 'L':
240 im = im.convert('RGB')
241 im_width, im_height = im.size
242
243 # layout: label | xmin | ymin | xmax | ymax
244 bbox_labels = []
245 for index_box in range(len(item)):
246 if index_box >= 2:
247 bbox_sample = []
248 temp_info_box = item[index_box].split(' ')
249 xmin = float(temp_info_box[0])
250 ymin = float(temp_info_box[1])
251 w = float(temp_info_box[2])
252 h = float(temp_info_box[3])
253
254 # Filter out wrong labels
255 if w < 0 or h < 0:
256 continue
257 xmax = xmin + w
258 ymax = ymin + h
259
260 bbox_sample.append(1)
261 bbox_sample.append(float(xmin) / im_width)
262 bbox_sample.append(float(ymin) / im_height)
263 bbox_sample.append(float(xmax) / im_width)
264 bbox_sample.append(float(ymax) / im_height)
265 bbox_labels.append(bbox_sample)
266 im, sample_labels = preprocess(im, bbox_labels, "train", settings,
267 image_path)
268 sample_labels = np.array(sample_labels)
269 if len(sample_labels) == 0: continue
270
271 im = im.astype('float32')
272 face_box = sample_labels[:, 1:5]
273 head_box = expand_bboxes(face_box)
274 label = [1] * len(face_box)
275 batch_out.append((im, face_box, head_box, label))
276 if len(batch_out) == batch_size:
277 yield batch_out
278 batch_out = []
279
280 return reader
281
282
283 def train(settings, file_list, batch_size, shuffle=True, num_workers=8):
284 file_lists = load_file_list(file_list)
285 n = int(math.ceil(len(file_lists) // num_workers))
286 split_lists = [file_lists[i:i + n] for i in range(0, len(file_lists), n)]
287 readers = []
288 for iterm in split_lists:
289 readers.append(train_generator(settings, iterm, batch_size, shuffle))
290 return paddle.reader.multiprocess_reader(readers, False)
291
292
293 def test(settings, file_list):
294 file_lists = load_file_list(file_list)
295
296 def reader():
297 for image in file_lists:
298 image_name = image[0]
299 image_path = os.path.join(settings.data_dir, image_name)
300 im = Image.open(image_path)
301 if im.mode == 'L':
302 im = im.convert('RGB')
303 yield im, image_path
304
305 return reader
306
307
308 def infer(settings, image_path):
309 def batch_reader():
310 img = Image.open(image_path)
311 if img.mode == 'L':
312 img = im.convert('RGB')
313 im_width, im_height = img.size
314 if settings.resize_width and settings.resize_height:
315 img = img.resize((settings.resize_width, settings.resize_height),
316 Image.ANTIALIAS)
317 img = np.array(img)
318 img = to_chw_bgr(img)
319 img = img.astype('float32')
320 img -= settings.img_mean
321 img = img * settings.scale
322 return np.array([img])
323
324 return batch_reader
325
```
--- END FILES ---
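As a side note on the layout conversion used in the reader above, a quick sanity check of `to_chw_bgr` could look like the sketch below; it assumes the function has been imported from the reader module listed above and only exercises behaviour visible in its source:

```python
import numpy as np
# from <local reader module> import to_chw_bgr  # import path depends on the local layout

img_hwc_rgb = np.zeros((4, 5, 3), dtype=np.uint8)
img_hwc_rgb[..., 0] = 255  # saturate the R channel in HWC/RGB layout

chw_bgr = to_chw_bgr(img_hwc_rgb)

assert chw_bgr.shape == (3, 4, 5)   # channels are moved to the front
assert chw_bgr[0].max() == 0        # first output channel is B, which is empty here
assert chw_bgr[2].max() == 255      # the red values end up in the last (R) slot of BGR
```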
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PaddleCV/face_detection/reader.py b/PaddleCV/face_detection/reader.py
--- a/PaddleCV/face_detection/reader.py
+++ b/PaddleCV/face_detection/reader.py
@@ -309,7 +309,7 @@
def batch_reader():
img = Image.open(image_path)
if img.mode == 'L':
- img = im.convert('RGB')
+ img = img.convert('RGB')
im_width, im_height = img.size
if settings.resize_width and settings.resize_height:
img = img.resize((settings.resize_width, settings.resize_height),
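Beyond the one-character fix above, a hypothetical regression check could exercise the grayscale path directly. It assumes `infer` is imported from the reader module, and `settings` stands in for a configured `Settings` instance that is not defined here:

```python
from PIL import Image

def test_infer_accepts_grayscale(tmp_path, settings):
    path = tmp_path / "gray.png"
    Image.new("L", (32, 32)).save(path)     # a grayscale ('L' mode) input image
    batch = infer(settings, str(path))()    # previously raised NameError on 'im'
    assert batch.shape[0] == 1              # one preprocessed image is returned
```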
| {"golden_diff": "diff --git a/PaddleCV/face_detection/reader.py b/PaddleCV/face_detection/reader.py\n--- a/PaddleCV/face_detection/reader.py\n+++ b/PaddleCV/face_detection/reader.py\n@@ -309,7 +309,7 @@\n def batch_reader():\n img = Image.open(image_path)\n if img.mode == 'L':\n- img = im.convert('RGB')\n+ img = img.convert('RGB')\n im_width, im_height = img.size\n if settings.resize_width and settings.resize_height:\n img = img.resize((settings.resize_width, settings.resize_height),\n", "issue": "There is a small error in face_detection reader\nhttps://github.com/PaddlePaddle/models/blob/55138a40a219975bcbbeb6c1edef5284de721f72/PaddleCV/face_detection/reader.py#L312\r\n\r\nthe code should be \r\n```\r\n img = img.convert('RGB')\r\n```\r\n\n", "before_files": [{"content": "# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom PIL import Image\nfrom PIL import ImageDraw\nimport numpy as np\nimport xml.etree.ElementTree\nimport os\nimport time\nimport copy\nimport random\nimport cv2\nimport six\nimport math\nfrom itertools import islice\nimport paddle\nimport image_util\n\n\nclass Settings(object):\n def __init__(self,\n dataset=None,\n data_dir=None,\n label_file=None,\n resize_h=None,\n resize_w=None,\n mean_value=[104., 117., 123.],\n apply_distort=True,\n apply_expand=True,\n ap_version='11point',\n toy=0):\n self.dataset = dataset\n self.ap_version = ap_version\n self.toy = toy\n self.data_dir = data_dir\n self.apply_distort = apply_distort\n self.apply_expand = apply_expand\n self.resize_height = resize_h\n self.resize_width = resize_w\n self.img_mean = np.array(mean_value)[:, np.newaxis, np.newaxis].astype(\n 'float32')\n self.expand_prob = 0.5\n self.expand_max_ratio = 4\n self.hue_prob = 0.5\n self.hue_delta = 18\n self.contrast_prob = 0.5\n self.contrast_delta = 0.5\n self.saturation_prob = 0.5\n self.saturation_delta = 0.5\n self.brightness_prob = 0.5\n # _brightness_delta is the normalized value by 256\n self.brightness_delta = 0.125\n self.scale = 0.007843 # 1 / 127.5\n self.data_anchor_sampling_prob = 0.5\n self.min_face_size = 8.0\n\n\ndef to_chw_bgr(image):\n \"\"\"\n Transpose image from HWC to CHW and from RBG to BGR.\n Args:\n image (np.array): an image with HWC and RBG layout.\n \"\"\"\n # HWC to CHW\n if len(image.shape) == 3:\n image = np.swapaxes(image, 1, 2)\n image = np.swapaxes(image, 1, 0)\n # RBG to BGR\n image = image[[2, 1, 0], :, :]\n return image\n\n\ndef preprocess(img, bbox_labels, mode, settings, image_path):\n img_width, img_height = img.size\n sampled_labels = bbox_labels\n if mode == 'train':\n if settings.apply_distort:\n img = image_util.distort_image(img, settings)\n if settings.apply_expand:\n img, bbox_labels, img_width, img_height = image_util.expand_image(\n img, bbox_labels, img_width, img_height, settings)\n\n # sampling\n batch_sampler = []\n\n prob = 
np.random.uniform(0., 1.)\n if prob > settings.data_anchor_sampling_prob:\n scale_array = np.array([16, 32, 64, 128, 256, 512])\n batch_sampler.append(\n image_util.sampler(1, 10, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.2,\n 0.0, True))\n sampled_bbox = image_util.generate_batch_random_samples(\n batch_sampler, bbox_labels, img_width, img_height, scale_array,\n settings.resize_width, settings.resize_height)\n img = np.array(img)\n if len(sampled_bbox) > 0:\n idx = int(np.random.uniform(0, len(sampled_bbox)))\n img, sampled_labels = image_util.crop_image_sampling(\n img, bbox_labels, sampled_bbox[idx], img_width, img_height,\n settings.resize_width, settings.resize_height,\n settings.min_face_size)\n\n img = img.astype('uint8')\n img = Image.fromarray(img)\n\n else:\n # hard-code here\n batch_sampler.append(\n image_util.sampler(1, 50, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,\n 0.0, True))\n batch_sampler.append(\n image_util.sampler(1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,\n 0.0, True))\n batch_sampler.append(\n image_util.sampler(1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,\n 0.0, True))\n batch_sampler.append(\n image_util.sampler(1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,\n 0.0, True))\n batch_sampler.append(\n image_util.sampler(1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,\n 0.0, True))\n sampled_bbox = image_util.generate_batch_samples(\n batch_sampler, bbox_labels, img_width, img_height)\n\n img = np.array(img)\n if len(sampled_bbox) > 0:\n idx = int(np.random.uniform(0, len(sampled_bbox)))\n img, sampled_labels = image_util.crop_image(\n img, bbox_labels, sampled_bbox[idx], img_width, img_height,\n settings.resize_width, settings.resize_height,\n settings.min_face_size)\n\n img = Image.fromarray(img)\n interp_mode = [\n Image.BILINEAR, Image.HAMMING, Image.NEAREST, Image.BICUBIC,\n Image.LANCZOS\n ]\n interp_indx = np.random.randint(0, 5)\n\n img = img.resize(\n (settings.resize_width, settings.resize_height),\n resample=interp_mode[interp_indx])\n img = np.array(img)\n\n if mode == 'train':\n mirror = int(np.random.uniform(0, 2))\n if mirror == 1:\n img = img[:, ::-1, :]\n for i in six.moves.xrange(len(sampled_labels)):\n tmp = sampled_labels[i][1]\n sampled_labels[i][1] = 1 - sampled_labels[i][3]\n sampled_labels[i][3] = 1 - tmp\n\n img = to_chw_bgr(img)\n img = img.astype('float32')\n img -= settings.img_mean\n img = img * settings.scale\n return img, sampled_labels\n\n\ndef load_file_list(input_txt):\n with open(input_txt, 'r') as f_dir:\n lines_input_txt = f_dir.readlines()\n\n file_dict = {}\n num_class = 0\n for i in range(len(lines_input_txt)):\n line_txt = lines_input_txt[i].strip('\\n\\t\\r')\n if '--' in line_txt:\n if i != 0:\n num_class += 1\n file_dict[num_class] = []\n file_dict[num_class].append(line_txt)\n if '--' not in line_txt:\n if len(line_txt) > 6:\n split_str = line_txt.split(' ')\n x1_min = float(split_str[0])\n y1_min = float(split_str[1])\n x2_max = float(split_str[2])\n y2_max = float(split_str[3])\n line_txt = str(x1_min) + ' ' + str(y1_min) + ' ' + str(\n x2_max) + ' ' + str(y2_max)\n file_dict[num_class].append(line_txt)\n else:\n file_dict[num_class].append(line_txt)\n\n return list(file_dict.values())\n\n\ndef expand_bboxes(bboxes,\n expand_left=2.,\n expand_up=2.,\n expand_right=2.,\n expand_down=2.):\n \"\"\"\n Expand bboxes, expand 2 times by defalut.\n \"\"\"\n expand_boxes = []\n for bbox in bboxes:\n xmin = bbox[0]\n ymin = bbox[1]\n xmax = bbox[2]\n ymax = bbox[3]\n w = xmax - xmin\n h = ymax - ymin\n ex_xmin = max(xmin - w / expand_left, 0.)\n ex_ymin = max(ymin 
- h / expand_up, 0.)\n ex_xmax = min(xmax + w / expand_right, 1.)\n ex_ymax = min(ymax + h / expand_down, 1.)\n expand_boxes.append([ex_xmin, ex_ymin, ex_xmax, ex_ymax])\n return expand_boxes\n\n\ndef train_generator(settings, file_list, batch_size, shuffle=True):\n def reader():\n if shuffle:\n np.random.shuffle(file_list)\n batch_out = []\n for item in file_list:\n image_name = item[0]\n image_path = os.path.join(settings.data_dir, image_name)\n im = Image.open(image_path)\n if im.mode == 'L':\n im = im.convert('RGB')\n im_width, im_height = im.size\n\n # layout: label | xmin | ymin | xmax | ymax\n bbox_labels = []\n for index_box in range(len(item)):\n if index_box >= 2:\n bbox_sample = []\n temp_info_box = item[index_box].split(' ')\n xmin = float(temp_info_box[0])\n ymin = float(temp_info_box[1])\n w = float(temp_info_box[2])\n h = float(temp_info_box[3])\n\n # Filter out wrong labels\n if w < 0 or h < 0:\n continue\n xmax = xmin + w\n ymax = ymin + h\n\n bbox_sample.append(1)\n bbox_sample.append(float(xmin) / im_width)\n bbox_sample.append(float(ymin) / im_height)\n bbox_sample.append(float(xmax) / im_width)\n bbox_sample.append(float(ymax) / im_height)\n bbox_labels.append(bbox_sample)\n im, sample_labels = preprocess(im, bbox_labels, \"train\", settings,\n image_path)\n sample_labels = np.array(sample_labels)\n if len(sample_labels) == 0: continue\n\n im = im.astype('float32')\n face_box = sample_labels[:, 1:5]\n head_box = expand_bboxes(face_box)\n label = [1] * len(face_box)\n batch_out.append((im, face_box, head_box, label))\n if len(batch_out) == batch_size:\n yield batch_out\n batch_out = []\n\n return reader\n\n\ndef train(settings, file_list, batch_size, shuffle=True, num_workers=8):\n file_lists = load_file_list(file_list)\n n = int(math.ceil(len(file_lists) // num_workers))\n split_lists = [file_lists[i:i + n] for i in range(0, len(file_lists), n)]\n readers = []\n for iterm in split_lists:\n readers.append(train_generator(settings, iterm, batch_size, shuffle))\n return paddle.reader.multiprocess_reader(readers, False)\n\n\ndef test(settings, file_list):\n file_lists = load_file_list(file_list)\n\n def reader():\n for image in file_lists:\n image_name = image[0]\n image_path = os.path.join(settings.data_dir, image_name)\n im = Image.open(image_path)\n if im.mode == 'L':\n im = im.convert('RGB')\n yield im, image_path\n\n return reader\n\n\ndef infer(settings, image_path):\n def batch_reader():\n img = Image.open(image_path)\n if img.mode == 'L':\n img = im.convert('RGB')\n im_width, im_height = img.size\n if settings.resize_width and settings.resize_height:\n img = img.resize((settings.resize_width, settings.resize_height),\n Image.ANTIALIAS)\n img = np.array(img)\n img = to_chw_bgr(img)\n img = img.astype('float32')\n img -= settings.img_mean\n img = img * settings.scale\n return np.array([img])\n\n return batch_reader\n", "path": "PaddleCV/face_detection/reader.py"}], "after_files": [{"content": "# Copyright (c) 2016 PaddlePaddle Authors. 
All Rights Reserved\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom PIL import Image\nfrom PIL import ImageDraw\nimport numpy as np\nimport xml.etree.ElementTree\nimport os\nimport time\nimport copy\nimport random\nimport cv2\nimport six\nimport math\nfrom itertools import islice\nimport paddle\nimport image_util\n\n\nclass Settings(object):\n def __init__(self,\n dataset=None,\n data_dir=None,\n label_file=None,\n resize_h=None,\n resize_w=None,\n mean_value=[104., 117., 123.],\n apply_distort=True,\n apply_expand=True,\n ap_version='11point',\n toy=0):\n self.dataset = dataset\n self.ap_version = ap_version\n self.toy = toy\n self.data_dir = data_dir\n self.apply_distort = apply_distort\n self.apply_expand = apply_expand\n self.resize_height = resize_h\n self.resize_width = resize_w\n self.img_mean = np.array(mean_value)[:, np.newaxis, np.newaxis].astype(\n 'float32')\n self.expand_prob = 0.5\n self.expand_max_ratio = 4\n self.hue_prob = 0.5\n self.hue_delta = 18\n self.contrast_prob = 0.5\n self.contrast_delta = 0.5\n self.saturation_prob = 0.5\n self.saturation_delta = 0.5\n self.brightness_prob = 0.5\n # _brightness_delta is the normalized value by 256\n self.brightness_delta = 0.125\n self.scale = 0.007843 # 1 / 127.5\n self.data_anchor_sampling_prob = 0.5\n self.min_face_size = 8.0\n\n\ndef to_chw_bgr(image):\n \"\"\"\n Transpose image from HWC to CHW and from RBG to BGR.\n Args:\n image (np.array): an image with HWC and RBG layout.\n \"\"\"\n # HWC to CHW\n if len(image.shape) == 3:\n image = np.swapaxes(image, 1, 2)\n image = np.swapaxes(image, 1, 0)\n # RBG to BGR\n image = image[[2, 1, 0], :, :]\n return image\n\n\ndef preprocess(img, bbox_labels, mode, settings, image_path):\n img_width, img_height = img.size\n sampled_labels = bbox_labels\n if mode == 'train':\n if settings.apply_distort:\n img = image_util.distort_image(img, settings)\n if settings.apply_expand:\n img, bbox_labels, img_width, img_height = image_util.expand_image(\n img, bbox_labels, img_width, img_height, settings)\n\n # sampling\n batch_sampler = []\n\n prob = np.random.uniform(0., 1.)\n if prob > settings.data_anchor_sampling_prob:\n scale_array = np.array([16, 32, 64, 128, 256, 512])\n batch_sampler.append(\n image_util.sampler(1, 10, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.2,\n 0.0, True))\n sampled_bbox = image_util.generate_batch_random_samples(\n batch_sampler, bbox_labels, img_width, img_height, scale_array,\n settings.resize_width, settings.resize_height)\n img = np.array(img)\n if len(sampled_bbox) > 0:\n idx = int(np.random.uniform(0, len(sampled_bbox)))\n img, sampled_labels = image_util.crop_image_sampling(\n img, bbox_labels, sampled_bbox[idx], img_width, img_height,\n settings.resize_width, settings.resize_height,\n settings.min_face_size)\n\n img = img.astype('uint8')\n img = Image.fromarray(img)\n\n else:\n # hard-code here\n batch_sampler.append(\n image_util.sampler(1, 50, 1.0, 
1.0, 1.0, 1.0, 0.0, 0.0, 1.0,\n 0.0, True))\n batch_sampler.append(\n image_util.sampler(1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,\n 0.0, True))\n batch_sampler.append(\n image_util.sampler(1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,\n 0.0, True))\n batch_sampler.append(\n image_util.sampler(1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,\n 0.0, True))\n batch_sampler.append(\n image_util.sampler(1, 50, 0.3, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0,\n 0.0, True))\n sampled_bbox = image_util.generate_batch_samples(\n batch_sampler, bbox_labels, img_width, img_height)\n\n img = np.array(img)\n if len(sampled_bbox) > 0:\n idx = int(np.random.uniform(0, len(sampled_bbox)))\n img, sampled_labels = image_util.crop_image(\n img, bbox_labels, sampled_bbox[idx], img_width, img_height,\n settings.resize_width, settings.resize_height,\n settings.min_face_size)\n\n img = Image.fromarray(img)\n interp_mode = [\n Image.BILINEAR, Image.HAMMING, Image.NEAREST, Image.BICUBIC,\n Image.LANCZOS\n ]\n interp_indx = np.random.randint(0, 5)\n\n img = img.resize(\n (settings.resize_width, settings.resize_height),\n resample=interp_mode[interp_indx])\n img = np.array(img)\n\n if mode == 'train':\n mirror = int(np.random.uniform(0, 2))\n if mirror == 1:\n img = img[:, ::-1, :]\n for i in six.moves.xrange(len(sampled_labels)):\n tmp = sampled_labels[i][1]\n sampled_labels[i][1] = 1 - sampled_labels[i][3]\n sampled_labels[i][3] = 1 - tmp\n\n img = to_chw_bgr(img)\n img = img.astype('float32')\n img -= settings.img_mean\n img = img * settings.scale\n return img, sampled_labels\n\n\ndef load_file_list(input_txt):\n with open(input_txt, 'r') as f_dir:\n lines_input_txt = f_dir.readlines()\n\n file_dict = {}\n num_class = 0\n for i in range(len(lines_input_txt)):\n line_txt = lines_input_txt[i].strip('\\n\\t\\r')\n if '--' in line_txt:\n if i != 0:\n num_class += 1\n file_dict[num_class] = []\n file_dict[num_class].append(line_txt)\n if '--' not in line_txt:\n if len(line_txt) > 6:\n split_str = line_txt.split(' ')\n x1_min = float(split_str[0])\n y1_min = float(split_str[1])\n x2_max = float(split_str[2])\n y2_max = float(split_str[3])\n line_txt = str(x1_min) + ' ' + str(y1_min) + ' ' + str(\n x2_max) + ' ' + str(y2_max)\n file_dict[num_class].append(line_txt)\n else:\n file_dict[num_class].append(line_txt)\n\n return list(file_dict.values())\n\n\ndef expand_bboxes(bboxes,\n expand_left=2.,\n expand_up=2.,\n expand_right=2.,\n expand_down=2.):\n \"\"\"\n Expand bboxes, expand 2 times by defalut.\n \"\"\"\n expand_boxes = []\n for bbox in bboxes:\n xmin = bbox[0]\n ymin = bbox[1]\n xmax = bbox[2]\n ymax = bbox[3]\n w = xmax - xmin\n h = ymax - ymin\n ex_xmin = max(xmin - w / expand_left, 0.)\n ex_ymin = max(ymin - h / expand_up, 0.)\n ex_xmax = min(xmax + w / expand_right, 1.)\n ex_ymax = min(ymax + h / expand_down, 1.)\n expand_boxes.append([ex_xmin, ex_ymin, ex_xmax, ex_ymax])\n return expand_boxes\n\n\ndef train_generator(settings, file_list, batch_size, shuffle=True):\n def reader():\n if shuffle:\n np.random.shuffle(file_list)\n batch_out = []\n for item in file_list:\n image_name = item[0]\n image_path = os.path.join(settings.data_dir, image_name)\n im = Image.open(image_path)\n if im.mode == 'L':\n im = im.convert('RGB')\n im_width, im_height = im.size\n\n # layout: label | xmin | ymin | xmax | ymax\n bbox_labels = []\n for index_box in range(len(item)):\n if index_box >= 2:\n bbox_sample = []\n temp_info_box = item[index_box].split(' ')\n xmin = float(temp_info_box[0])\n ymin = float(temp_info_box[1])\n w = 
float(temp_info_box[2])\n h = float(temp_info_box[3])\n\n # Filter out wrong labels\n if w < 0 or h < 0:\n continue\n xmax = xmin + w\n ymax = ymin + h\n\n bbox_sample.append(1)\n bbox_sample.append(float(xmin) / im_width)\n bbox_sample.append(float(ymin) / im_height)\n bbox_sample.append(float(xmax) / im_width)\n bbox_sample.append(float(ymax) / im_height)\n bbox_labels.append(bbox_sample)\n im, sample_labels = preprocess(im, bbox_labels, \"train\", settings,\n image_path)\n sample_labels = np.array(sample_labels)\n if len(sample_labels) == 0: continue\n\n im = im.astype('float32')\n face_box = sample_labels[:, 1:5]\n head_box = expand_bboxes(face_box)\n label = [1] * len(face_box)\n batch_out.append((im, face_box, head_box, label))\n if len(batch_out) == batch_size:\n yield batch_out\n batch_out = []\n\n return reader\n\n\ndef train(settings, file_list, batch_size, shuffle=True, num_workers=8):\n file_lists = load_file_list(file_list)\n n = int(math.ceil(len(file_lists) // num_workers))\n split_lists = [file_lists[i:i + n] for i in range(0, len(file_lists), n)]\n readers = []\n for iterm in split_lists:\n readers.append(train_generator(settings, iterm, batch_size, shuffle))\n return paddle.reader.multiprocess_reader(readers, False)\n\n\ndef test(settings, file_list):\n file_lists = load_file_list(file_list)\n\n def reader():\n for image in file_lists:\n image_name = image[0]\n image_path = os.path.join(settings.data_dir, image_name)\n im = Image.open(image_path)\n if im.mode == 'L':\n im = im.convert('RGB')\n yield im, image_path\n\n return reader\n\n\ndef infer(settings, image_path):\n def batch_reader():\n img = Image.open(image_path)\n if img.mode == 'L':\n img = img.convert('RGB')\n im_width, im_height = img.size\n if settings.resize_width and settings.resize_height:\n img = img.resize((settings.resize_width, settings.resize_height),\n Image.ANTIALIAS)\n img = np.array(img)\n img = to_chw_bgr(img)\n img = img.astype('float32')\n img -= settings.img_mean\n img = img * settings.scale\n return np.array([img])\n\n return batch_reader\n", "path": "PaddleCV/face_detection/reader.py"}]} | 4,091 | 135 |
gh_patches_debug_38619 | rasdani/github-patches | git_diff | sktime__sktime-1600 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Refactor issue #1043
Fixes #1043
Removed methods load_UCR_UEA_dataset & _load_dataset from datasets/base.py and moved them to utils/data_io.py
--- END ISSUE ---
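One common way to relocate functions like this without breaking downstream imports is to leave a thin re-export behind in the old module. The snippet below is only an illustrative sketch of that pattern; the issue does not say whether the PR actually keeps such a shim:

```python
# Hypothetical backward-compatible shim left in sktime/datasets/base.py after the move.
# The real change may simply delete these names instead of re-exporting them.
from sktime.utils.data_io import _load_dataset, load_UCR_UEA_dataset  # noqa: F401
```

Whether a shim like this is worth keeping depends on how widely the old import path is used outside the package.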
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sktime/transformations/panel/signature_based/_signature_method.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from sklearn.pipeline import Pipeline
3 from sktime.transformations.base import _PanelToTabularTransformer
4 from sktime.transformations.panel.signature_based._compute import (
5 _WindowSignatureTransform,
6 )
7 from sktime.transformations.panel.signature_based._augmentations import (
8 _make_augmentation_pipeline,
9 )
10 from sktime.transformations.panel.signature_based._checks import (
11 _handle_sktime_signatures,
12 )
13
14
15 class SignatureTransformer(_PanelToTabularTransformer):
16 """Transformation class from the signature method.
17
18 Follows the methodology laid out in the paper:
19 "A Generalised Signature Method for Multivariate Time Series"
20
21 Parameters
22 ----------
23 augmentation_list: tuple of strings, contains the augmentations to be
24 applied before application of the signature transform.
25 window_name: str, The name of the window transform to apply.
26 window_depth: int, The depth of the dyadic window. (Active only if
27 `window_name == 'dyadic'`).
28 window_length: int, The length of the sliding/expanding window. (Active
29 only if `window_name in ['sliding, 'expanding']`.
30 window_step: int, The step of the sliding/expanding window. (Active
31 only if `window_name in ['sliding, 'expanding']`.
32 rescaling: str or None, The method of signature rescaling.
33 sig_tfm: str, String to specify the type of signature transform. One of:
34 ['signature', 'logsignature']).
35 depth: int, Signature truncation depth.
36
37 Attributes
38 ----------
39 signature_method: sklearn.Pipeline, A sklearn pipeline object that contains
40 all the steps to extract the signature features.
41 """
42
43 def __init__(
44 self,
45 augmentation_list=("basepoint", "addtime"),
46 window_name="dyadic",
47 window_depth=3,
48 window_length=None,
49 window_step=None,
50 rescaling=None,
51 sig_tfm="signature",
52 depth=4,
53 ):
54 super(SignatureTransformer, self).__init__()
55 self.augmentation_list = augmentation_list
56 self.window_name = window_name
57 self.window_depth = window_depth
58 self.window_length = window_length
59 self.window_step = window_step
60 self.rescaling = rescaling
61 self.sig_tfm = sig_tfm
62 self.depth = depth
63
64 self.setup_feature_pipeline()
65
66 def _assertions(self):
67 """Some assertions to run on initialisation."""
68 assert not all(
69 [self.sig_tfm == "logsignature", self.rescaling == "post"]
70 ), "Cannot have post rescaling with the logsignature."
71
72 def setup_feature_pipeline(self):
73 """Sets up the signature method as an sklearn pipeline."""
74 augmentation_step = _make_augmentation_pipeline(self.augmentation_list)
75 transform_step = _WindowSignatureTransform(
76 window_name=self.window_name,
77 window_depth=self.window_depth,
78 window_length=self.window_length,
79 window_step=self.window_step,
80 sig_tfm=self.sig_tfm,
81 sig_depth=self.depth,
82 rescaling=self.rescaling,
83 )
84
85 # The so-called 'signature method' as defined in the reference paper
86 self.signature_method = Pipeline(
87 [
88 ("augmentations", augmentation_step),
89 ("window_and_transform", transform_step),
90 ]
91 )
92
93 @_handle_sktime_signatures(check_fitted=False)
94 def fit(self, data, labels=None):
95 self.signature_method.fit(data, labels)
96 self._is_fitted = True
97 return self
98
99 @_handle_sktime_signatures(check_fitted=True)
100 def transform(self, data, labels=None):
101 return self.signature_method.transform(data)
102
```
--- END FILES ---
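For context, a minimal usage sketch of the transformer defined above might look as follows. The import path is inferred from the module layout, the optional signature backend is assumed to be installed, and a plain 3D numpy panel is assumed to be an accepted input format:

```python
import numpy as np

from sktime.transformations.panel.signature_based import SignatureTransformer

# Hypothetical panel: 20 instances, 2 channels, 50 time points.
X = np.random.default_rng(0).normal(size=(20, 2, 50))
y = np.random.default_rng(1).integers(0, 2, size=20)

sig = SignatureTransformer(depth=3)     # truncation depth 3, dyadic windows by default
features = sig.fit(X, y).transform(X)   # tabular output, one row of features per instance
print(features.shape)
```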
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sktime/transformations/panel/signature_based/_signature_method.py b/sktime/transformations/panel/signature_based/_signature_method.py
--- a/sktime/transformations/panel/signature_based/_signature_method.py
+++ b/sktime/transformations/panel/signature_based/_signature_method.py
@@ -1,15 +1,16 @@
# -*- coding: utf-8 -*-
from sklearn.pipeline import Pipeline
+
from sktime.transformations.base import _PanelToTabularTransformer
-from sktime.transformations.panel.signature_based._compute import (
- _WindowSignatureTransform,
-)
from sktime.transformations.panel.signature_based._augmentations import (
_make_augmentation_pipeline,
)
from sktime.transformations.panel.signature_based._checks import (
_handle_sktime_signatures,
)
+from sktime.transformations.panel.signature_based._compute import (
+ _WindowSignatureTransform,
+)
class SignatureTransformer(_PanelToTabularTransformer):
@@ -63,14 +64,8 @@
self.setup_feature_pipeline()
- def _assertions(self):
- """Some assertions to run on initialisation."""
- assert not all(
- [self.sig_tfm == "logsignature", self.rescaling == "post"]
- ), "Cannot have post rescaling with the logsignature."
-
def setup_feature_pipeline(self):
- """Sets up the signature method as an sklearn pipeline."""
+ """Set up the signature method as an sklearn pipeline."""
augmentation_step = _make_augmentation_pipeline(self.augmentation_list)
transform_step = _WindowSignatureTransform(
window_name=self.window_name,
@@ -92,10 +87,38 @@
@_handle_sktime_signatures(check_fitted=False)
def fit(self, data, labels=None):
+ """Fit to data, then transform it.
+
+ Parameters
+ ----------
+ data: pd.Dataframe or np.ndarray (3d array)
+ Data to transform.
+ labels: np.ndarray (1d array) or pd.series or list
+ Labels for the data.
+
+ Returns
+ -------
+ pd.Dataframe or np.ndarray or pd.series
+ Transformed data.
+ """
self.signature_method.fit(data, labels)
self._is_fitted = True
return self
@_handle_sktime_signatures(check_fitted=True)
def transform(self, data, labels=None):
+ """Transform the class from the signature method.
+
+ Parameters
+ ----------
+ data: pd.Dataframe or np.ndarray (3d array)
+ Data to transform.
+ labels: np.ndarray (1d array) or pd.series or list
+ Labels for the data.
+
+ Returns
+ -------
+ pd.Dataframe or np.ndarray or pd.series
+ Transformed data.
+ """
return self.signature_method.transform(data)
| {"golden_diff": "diff --git a/sktime/transformations/panel/signature_based/_signature_method.py b/sktime/transformations/panel/signature_based/_signature_method.py\n--- a/sktime/transformations/panel/signature_based/_signature_method.py\n+++ b/sktime/transformations/panel/signature_based/_signature_method.py\n@@ -1,15 +1,16 @@\n # -*- coding: utf-8 -*-\n from sklearn.pipeline import Pipeline\n+\n from sktime.transformations.base import _PanelToTabularTransformer\n-from sktime.transformations.panel.signature_based._compute import (\n- _WindowSignatureTransform,\n-)\n from sktime.transformations.panel.signature_based._augmentations import (\n _make_augmentation_pipeline,\n )\n from sktime.transformations.panel.signature_based._checks import (\n _handle_sktime_signatures,\n )\n+from sktime.transformations.panel.signature_based._compute import (\n+ _WindowSignatureTransform,\n+)\n \n \n class SignatureTransformer(_PanelToTabularTransformer):\n@@ -63,14 +64,8 @@\n \n self.setup_feature_pipeline()\n \n- def _assertions(self):\n- \"\"\"Some assertions to run on initialisation.\"\"\"\n- assert not all(\n- [self.sig_tfm == \"logsignature\", self.rescaling == \"post\"]\n- ), \"Cannot have post rescaling with the logsignature.\"\n-\n def setup_feature_pipeline(self):\n- \"\"\"Sets up the signature method as an sklearn pipeline.\"\"\"\n+ \"\"\"Set up the signature method as an sklearn pipeline.\"\"\"\n augmentation_step = _make_augmentation_pipeline(self.augmentation_list)\n transform_step = _WindowSignatureTransform(\n window_name=self.window_name,\n@@ -92,10 +87,38 @@\n \n @_handle_sktime_signatures(check_fitted=False)\n def fit(self, data, labels=None):\n+ \"\"\"Fit to data, then transform it.\n+\n+ Parameters\n+ ----------\n+ data: pd.Dataframe or np.ndarray (3d array)\n+ Data to transform.\n+ labels: np.ndarray (1d array) or pd.series or list\n+ Labels for the data.\n+\n+ Returns\n+ -------\n+ pd.Dataframe or np.ndarray or pd.series\n+ Transformed data.\n+ \"\"\"\n self.signature_method.fit(data, labels)\n self._is_fitted = True\n return self\n \n @_handle_sktime_signatures(check_fitted=True)\n def transform(self, data, labels=None):\n+ \"\"\"Transform the class from the signature method.\n+\n+ Parameters\n+ ----------\n+ data: pd.Dataframe or np.ndarray (3d array)\n+ Data to transform.\n+ labels: np.ndarray (1d array) or pd.series or list\n+ Labels for the data.\n+\n+ Returns\n+ -------\n+ pd.Dataframe or np.ndarray or pd.series\n+ Transformed data.\n+ \"\"\"\n return self.signature_method.transform(data)\n", "issue": "Refactor issue #1043\nFixes #1043 \r\n\r\nRemoved methods load_UCR_UEA_dataset & _load_dataset from datasets/base.py and moved them to utils/data_io.py\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom sklearn.pipeline import Pipeline\nfrom sktime.transformations.base import _PanelToTabularTransformer\nfrom sktime.transformations.panel.signature_based._compute import (\n _WindowSignatureTransform,\n)\nfrom sktime.transformations.panel.signature_based._augmentations import (\n _make_augmentation_pipeline,\n)\nfrom sktime.transformations.panel.signature_based._checks import (\n _handle_sktime_signatures,\n)\n\n\nclass SignatureTransformer(_PanelToTabularTransformer):\n \"\"\"Transformation class from the signature method.\n\n Follows the methodology laid out in the paper:\n \"A Generalised Signature Method for Multivariate Time Series\"\n\n Parameters\n ----------\n augmentation_list: tuple of strings, contains the augmentations to be\n applied before application of 
the signature transform.\n window_name: str, The name of the window transform to apply.\n window_depth: int, The depth of the dyadic window. (Active only if\n `window_name == 'dyadic'`).\n window_length: int, The length of the sliding/expanding window. (Active\n only if `window_name in ['sliding, 'expanding']`.\n window_step: int, The step of the sliding/expanding window. (Active\n only if `window_name in ['sliding, 'expanding']`.\n rescaling: str or None, The method of signature rescaling.\n sig_tfm: str, String to specify the type of signature transform. One of:\n ['signature', 'logsignature']).\n depth: int, Signature truncation depth.\n\n Attributes\n ----------\n signature_method: sklearn.Pipeline, A sklearn pipeline object that contains\n all the steps to extract the signature features.\n \"\"\"\n\n def __init__(\n self,\n augmentation_list=(\"basepoint\", \"addtime\"),\n window_name=\"dyadic\",\n window_depth=3,\n window_length=None,\n window_step=None,\n rescaling=None,\n sig_tfm=\"signature\",\n depth=4,\n ):\n super(SignatureTransformer, self).__init__()\n self.augmentation_list = augmentation_list\n self.window_name = window_name\n self.window_depth = window_depth\n self.window_length = window_length\n self.window_step = window_step\n self.rescaling = rescaling\n self.sig_tfm = sig_tfm\n self.depth = depth\n\n self.setup_feature_pipeline()\n\n def _assertions(self):\n \"\"\"Some assertions to run on initialisation.\"\"\"\n assert not all(\n [self.sig_tfm == \"logsignature\", self.rescaling == \"post\"]\n ), \"Cannot have post rescaling with the logsignature.\"\n\n def setup_feature_pipeline(self):\n \"\"\"Sets up the signature method as an sklearn pipeline.\"\"\"\n augmentation_step = _make_augmentation_pipeline(self.augmentation_list)\n transform_step = _WindowSignatureTransform(\n window_name=self.window_name,\n window_depth=self.window_depth,\n window_length=self.window_length,\n window_step=self.window_step,\n sig_tfm=self.sig_tfm,\n sig_depth=self.depth,\n rescaling=self.rescaling,\n )\n\n # The so-called 'signature method' as defined in the reference paper\n self.signature_method = Pipeline(\n [\n (\"augmentations\", augmentation_step),\n (\"window_and_transform\", transform_step),\n ]\n )\n\n @_handle_sktime_signatures(check_fitted=False)\n def fit(self, data, labels=None):\n self.signature_method.fit(data, labels)\n self._is_fitted = True\n return self\n\n @_handle_sktime_signatures(check_fitted=True)\n def transform(self, data, labels=None):\n return self.signature_method.transform(data)\n", "path": "sktime/transformations/panel/signature_based/_signature_method.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom sklearn.pipeline import Pipeline\n\nfrom sktime.transformations.base import _PanelToTabularTransformer\nfrom sktime.transformations.panel.signature_based._augmentations import (\n _make_augmentation_pipeline,\n)\nfrom sktime.transformations.panel.signature_based._checks import (\n _handle_sktime_signatures,\n)\nfrom sktime.transformations.panel.signature_based._compute import (\n _WindowSignatureTransform,\n)\n\n\nclass SignatureTransformer(_PanelToTabularTransformer):\n \"\"\"Transformation class from the signature method.\n\n Follows the methodology laid out in the paper:\n \"A Generalised Signature Method for Multivariate Time Series\"\n\n Parameters\n ----------\n augmentation_list: tuple of strings, contains the augmentations to be\n applied before application of the signature transform.\n window_name: str, The name of the window transform to 
apply.\n window_depth: int, The depth of the dyadic window. (Active only if\n `window_name == 'dyadic'`).\n window_length: int, The length of the sliding/expanding window. (Active\n only if `window_name in ['sliding, 'expanding']`.\n window_step: int, The step of the sliding/expanding window. (Active\n only if `window_name in ['sliding, 'expanding']`.\n rescaling: str or None, The method of signature rescaling.\n sig_tfm: str, String to specify the type of signature transform. One of:\n ['signature', 'logsignature']).\n depth: int, Signature truncation depth.\n\n Attributes\n ----------\n signature_method: sklearn.Pipeline, A sklearn pipeline object that contains\n all the steps to extract the signature features.\n \"\"\"\n\n def __init__(\n self,\n augmentation_list=(\"basepoint\", \"addtime\"),\n window_name=\"dyadic\",\n window_depth=3,\n window_length=None,\n window_step=None,\n rescaling=None,\n sig_tfm=\"signature\",\n depth=4,\n ):\n super(SignatureTransformer, self).__init__()\n self.augmentation_list = augmentation_list\n self.window_name = window_name\n self.window_depth = window_depth\n self.window_length = window_length\n self.window_step = window_step\n self.rescaling = rescaling\n self.sig_tfm = sig_tfm\n self.depth = depth\n\n self.setup_feature_pipeline()\n\n def setup_feature_pipeline(self):\n \"\"\"Set up the signature method as an sklearn pipeline.\"\"\"\n augmentation_step = _make_augmentation_pipeline(self.augmentation_list)\n transform_step = _WindowSignatureTransform(\n window_name=self.window_name,\n window_depth=self.window_depth,\n window_length=self.window_length,\n window_step=self.window_step,\n sig_tfm=self.sig_tfm,\n sig_depth=self.depth,\n rescaling=self.rescaling,\n )\n\n # The so-called 'signature method' as defined in the reference paper\n self.signature_method = Pipeline(\n [\n (\"augmentations\", augmentation_step),\n (\"window_and_transform\", transform_step),\n ]\n )\n\n @_handle_sktime_signatures(check_fitted=False)\n def fit(self, data, labels=None):\n \"\"\"Fit to data, then transform it.\n\n Parameters\n ----------\n data: pd.Dataframe or np.ndarray (3d array)\n Data to transform.\n labels: np.ndarray (1d array) or pd.series or list\n Labels for the data.\n\n Returns\n -------\n pd.Dataframe or np.ndarray or pd.series\n Transformed data.\n \"\"\"\n self.signature_method.fit(data, labels)\n self._is_fitted = True\n return self\n\n @_handle_sktime_signatures(check_fitted=True)\n def transform(self, data, labels=None):\n \"\"\"Transform the class from the signature method.\n\n Parameters\n ----------\n data: pd.Dataframe or np.ndarray (3d array)\n Data to transform.\n labels: np.ndarray (1d array) or pd.series or list\n Labels for the data.\n\n Returns\n -------\n pd.Dataframe or np.ndarray or pd.series\n Transformed data.\n \"\"\"\n return self.signature_method.transform(data)\n", "path": "sktime/transformations/panel/signature_based/_signature_method.py"}]} | 1,294 | 629 |
gh_patches_debug_11144 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-1119 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Configuration object stores ints as floats
The global configuration object will store `"2"` as `2.0` instead of `2`. Fix that.
--- END ISSUE ---
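To make the intended behaviour concrete, here is a standalone sketch of the parsing order the fix should produce (booleans first, then int, then float, falling back to the raw string). The `_parse_value` helper is hypothetical and is not part of the actual API:

```python
def _parse_value(value_str: str):
    """Parse an environment-variable string into bool, int, float or str."""
    if value_str == "True":
        return True
    if value_str == "False":
        return False
    try:
        return int(value_str)
    except ValueError:
        try:
            return float(value_str)
        except ValueError:
            return value_str

assert isinstance(_parse_value("2"), int)      # "2" must stay an integer
assert isinstance(_parse_value("2.5"), float)  # "2.5" is parsed as a float
assert _parse_value("True") is True
assert _parse_value("hello") == "hello"
```

The key point is that the float attempt must live inside the `except ValueError` branch of the int attempt, so a successful int parse is never overwritten.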
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-api/src/opentelemetry/configuration/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Simple configuration manager
17
18 This is a configuration manager for OpenTelemetry. It reads configuration
19 values from environment variables prefixed with ``OTEL_`` (for environment
20 variables that apply to any OpenTelemetry implementation) or with
21 ``OTEL_PYTHON_`` (for environment variables that are specific to the Python
22 implementation of OpenTelemetry) whose characters are only alphanumeric
23 characters and unserscores, except for the first character after ``OTEL_`` or
24 ``OTEL_PYTHON_`` which must not be a number.
25
26 For example, these environment variables will be read:
27
28 1. ``OTEL_SOMETHING``
29 2. ``OTEL_SOMETHING_ELSE_``
30 3. ``OTEL_SOMETHING_ELSE_AND__ELSE``
31 4. ``OTEL_SOMETHING_ELSE_AND_else``
32 5. ``OTEL_SOMETHING_ELSE_AND_else2``
33
34 These won't:
35
36 1. ``OPENTELEMETRY_PYTH_SOMETHING``
37 2. ``OTEL_2_SOMETHING_AND__ELSE``
38 3. ``OTEL_SOMETHING_%_ELSE``
39
40 The values stored in the environment variables can be found in an instance of
41 ``opentelemetry.configuration.Configuration``. This class can be instantiated
42 freely because instantiating it returns always the same object.
43
44 For example, if the environment variable
45 ``OTEL_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then
46 ``Configuration().meter_provider == "my_meter_provider"`` would be ``True``.
47
48 Non defined attributes will always return ``None``. This is intended to make it
49 easier to use the ``Configuration`` object in actual code, because it won't be
50 necessary to check for the attribute to be defined first.
51
52 Environment variables used by OpenTelemetry
53 -------------------------------------------
54
55 1. OTEL_PYTHON_METER_PROVIDER
56 2. OTEL_PYTHON_TRACER_PROVIDER
57
58 The value of these environment variables should be the name of the entry point
59 that points to the class that implements either provider. This OpenTelemetry
60 API package provides one entry point for each, which can be found in the
61 setup.py file::
62
63 entry_points={
64 ...
65 "opentelemetry_meter_provider": [
66 "default_meter_provider = "
67 "opentelemetry.metrics:DefaultMeterProvider"
68 ],
69 "opentelemetry_tracer_provider": [
70 "default_tracer_provider = "
71 "opentelemetry.trace:DefaultTracerProvider"
72 ],
73 }
74
75 To use the meter provider above, then the
76 ``OTEL_PYTHON_METER_PROVIDER`` should be set to
77 ``"default_meter_provider"`` (this is not actually necessary since the
78 OpenTelemetry API provided providers are the default ones used if no
79 configuration is found in the environment variables).
80
81 Configuration values that are exactly ``"True"`` or ``"False"`` will be
82 converted to its boolean values of ``True`` and ``False`` respectively.
83
84 Configuration values that can be casted to integers or floats will be casted.
85
86 This object can be used by any OpenTelemetry component, native or external.
87 For that reason, the ``Configuration`` object is designed to be immutable.
88 If a component would change the value of one of the ``Configuration`` object
89 attributes then another component that relied on that value may break, leading
90 to bugs that are very hard to debug. To avoid this situation, the preferred
91 approach for components that need a different value than the one provided by
92 the ``Configuration`` object is to implement a mechanism that allows the user
93 to override this value instead of changing it.
94 """
95
96 from os import environ
97 from re import fullmatch
98 from typing import ClassVar, Dict, Optional, TypeVar, Union
99
100 ConfigValue = Union[str, bool, int, float]
101 _T = TypeVar("_T", ConfigValue, Optional[ConfigValue])
102
103
104 class Configuration:
105 _instance = None # type: ClassVar[Optional[Configuration]]
106 _config_map = {} # type: ClassVar[Dict[str, ConfigValue]]
107
108 def __new__(cls) -> "Configuration":
109 if cls._instance is not None:
110 instance = cls._instance
111 else:
112
113 instance = super().__new__(cls)
114 for key, value_str in environ.items():
115
116 match = fullmatch(r"OTEL_(PYTHON_)?([A-Za-z_][\w_]*)", key)
117
118 if match is not None:
119
120 key = match.group(2)
121 value = value_str # type: ConfigValue
122
123 if value_str == "True":
124 value = True
125 elif value_str == "False":
126 value = False
127 else:
128 try:
129 value = int(value_str)
130 except ValueError:
131 pass
132 try:
133 value = float(value_str)
134 except ValueError:
135 pass
136
137 instance._config_map[key] = value
138
139 cls._instance = instance
140
141 return instance
142
143 def __getattr__(self, name: str) -> Optional[ConfigValue]:
144 return self._config_map.get(name)
145
146 def __setattr__(self, name: str, value: ConfigValue) -> None:
147 if name not in self._config_map.keys():
148 self._config_map[name] = value
149 else:
150 raise AttributeError(name)
151
152 def get(self, name: str, default: _T) -> _T:
153 """Use this typed method for dynamic access instead of `getattr`
154
155 :rtype: str or bool or int or float or None
156 """
157 return self._config_map.get(name, default)
158
159 @classmethod
160 def _reset(cls) -> None:
161 """
162 This method "resets" the global configuration attributes
163
164 It is not intended to be used by production code but by testing code
165 only.
166 """
167
168 if cls._instance:
169 cls._instance._config_map.clear() # pylint: disable=protected-access
170 cls._instance = None
171
```
--- END FILES ---
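As a brief aside, the two properties the docstring above emphasises, singleton construction and silent `None` for undefined attributes, can be illustrated with this sketch; it only assumes the package is importable under the path shown:

```python
from opentelemetry.configuration import Configuration

# Environment variables are read once, when the first instance is created.
config_a = Configuration()
config_b = Configuration()

assert config_a is config_b           # instantiating again returns the same object
assert config_a.not_defined is None   # unknown attributes read as None instead of raising
```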
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-api/src/opentelemetry/configuration/__init__.py b/opentelemetry-api/src/opentelemetry/configuration/__init__.py
--- a/opentelemetry-api/src/opentelemetry/configuration/__init__.py
+++ b/opentelemetry-api/src/opentelemetry/configuration/__init__.py
@@ -128,11 +128,10 @@
try:
value = int(value_str)
except ValueError:
- pass
- try:
- value = float(value_str)
- except ValueError:
- pass
+ try:
+ value = float(value_str)
+ except ValueError:
+ pass
instance._config_map[key] = value
| {"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/configuration/__init__.py b/opentelemetry-api/src/opentelemetry/configuration/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/configuration/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/configuration/__init__.py\n@@ -128,11 +128,10 @@\n try:\n value = int(value_str)\n except ValueError:\n- pass\n- try:\n- value = float(value_str)\n- except ValueError:\n- pass\n+ try:\n+ value = float(value_str)\n+ except ValueError:\n+ pass\n \n instance._config_map[key] = value\n", "issue": "Configuration object stores ints as floats\nThe global configuration object will store `\"2\"` as `2.0` instead of `2`. Fix that.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nSimple configuration manager\n\nThis is a configuration manager for OpenTelemetry. It reads configuration\nvalues from environment variables prefixed with ``OTEL_`` (for environment\nvariables that apply to any OpenTelemetry implementation) or with\n``OTEL_PYTHON_`` (for environment variables that are specific to the Python\nimplementation of OpenTelemetry) whose characters are only alphanumeric\ncharacters and unserscores, except for the first character after ``OTEL_`` or\n``OTEL_PYTHON_`` which must not be a number.\n\nFor example, these environment variables will be read:\n\n1. ``OTEL_SOMETHING``\n2. ``OTEL_SOMETHING_ELSE_``\n3. ``OTEL_SOMETHING_ELSE_AND__ELSE``\n4. ``OTEL_SOMETHING_ELSE_AND_else``\n5. ``OTEL_SOMETHING_ELSE_AND_else2``\n\nThese won't:\n\n1. ``OPENTELEMETRY_PYTH_SOMETHING``\n2. ``OTEL_2_SOMETHING_AND__ELSE``\n3. ``OTEL_SOMETHING_%_ELSE``\n\nThe values stored in the environment variables can be found in an instance of\n``opentelemetry.configuration.Configuration``. This class can be instantiated\nfreely because instantiating it returns always the same object.\n\nFor example, if the environment variable\n``OTEL_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then\n``Configuration().meter_provider == \"my_meter_provider\"`` would be ``True``.\n\nNon defined attributes will always return ``None``. This is intended to make it\neasier to use the ``Configuration`` object in actual code, because it won't be\nnecessary to check for the attribute to be defined first.\n\nEnvironment variables used by OpenTelemetry\n-------------------------------------------\n\n1. OTEL_PYTHON_METER_PROVIDER\n2. OTEL_PYTHON_TRACER_PROVIDER\n\nThe value of these environment variables should be the name of the entry point\nthat points to the class that implements either provider. 
This OpenTelemetry\nAPI package provides one entry point for each, which can be found in the\nsetup.py file::\n\n entry_points={\n ...\n \"opentelemetry_meter_provider\": [\n \"default_meter_provider = \"\n \"opentelemetry.metrics:DefaultMeterProvider\"\n ],\n \"opentelemetry_tracer_provider\": [\n \"default_tracer_provider = \"\n \"opentelemetry.trace:DefaultTracerProvider\"\n ],\n }\n\nTo use the meter provider above, then the\n``OTEL_PYTHON_METER_PROVIDER`` should be set to\n``\"default_meter_provider\"`` (this is not actually necessary since the\nOpenTelemetry API provided providers are the default ones used if no\nconfiguration is found in the environment variables).\n\nConfiguration values that are exactly ``\"True\"`` or ``\"False\"`` will be\nconverted to its boolean values of ``True`` and ``False`` respectively.\n\nConfiguration values that can be casted to integers or floats will be casted.\n\nThis object can be used by any OpenTelemetry component, native or external.\nFor that reason, the ``Configuration`` object is designed to be immutable.\nIf a component would change the value of one of the ``Configuration`` object\nattributes then another component that relied on that value may break, leading\nto bugs that are very hard to debug. To avoid this situation, the preferred\napproach for components that need a different value than the one provided by\nthe ``Configuration`` object is to implement a mechanism that allows the user\nto override this value instead of changing it.\n\"\"\"\n\nfrom os import environ\nfrom re import fullmatch\nfrom typing import ClassVar, Dict, Optional, TypeVar, Union\n\nConfigValue = Union[str, bool, int, float]\n_T = TypeVar(\"_T\", ConfigValue, Optional[ConfigValue])\n\n\nclass Configuration:\n _instance = None # type: ClassVar[Optional[Configuration]]\n _config_map = {} # type: ClassVar[Dict[str, ConfigValue]]\n\n def __new__(cls) -> \"Configuration\":\n if cls._instance is not None:\n instance = cls._instance\n else:\n\n instance = super().__new__(cls)\n for key, value_str in environ.items():\n\n match = fullmatch(r\"OTEL_(PYTHON_)?([A-Za-z_][\\w_]*)\", key)\n\n if match is not None:\n\n key = match.group(2)\n value = value_str # type: ConfigValue\n\n if value_str == \"True\":\n value = True\n elif value_str == \"False\":\n value = False\n else:\n try:\n value = int(value_str)\n except ValueError:\n pass\n try:\n value = float(value_str)\n except ValueError:\n pass\n\n instance._config_map[key] = value\n\n cls._instance = instance\n\n return instance\n\n def __getattr__(self, name: str) -> Optional[ConfigValue]:\n return self._config_map.get(name)\n\n def __setattr__(self, name: str, value: ConfigValue) -> None:\n if name not in self._config_map.keys():\n self._config_map[name] = value\n else:\n raise AttributeError(name)\n\n def get(self, name: str, default: _T) -> _T:\n \"\"\"Use this typed method for dynamic access instead of `getattr`\n\n :rtype: str or bool or int or float or None\n \"\"\"\n return self._config_map.get(name, default)\n\n @classmethod\n def _reset(cls) -> None:\n \"\"\"\n This method \"resets\" the global configuration attributes\n\n It is not intended to be used by production code but by testing code\n only.\n \"\"\"\n\n if cls._instance:\n cls._instance._config_map.clear() # pylint: disable=protected-access\n cls._instance = None\n", "path": "opentelemetry-api/src/opentelemetry/configuration/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 
(the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nSimple configuration manager\n\nThis is a configuration manager for OpenTelemetry. It reads configuration\nvalues from environment variables prefixed with ``OTEL_`` (for environment\nvariables that apply to any OpenTelemetry implementation) or with\n``OTEL_PYTHON_`` (for environment variables that are specific to the Python\nimplementation of OpenTelemetry) whose characters are only alphanumeric\ncharacters and unserscores, except for the first character after ``OTEL_`` or\n``OTEL_PYTHON_`` which must not be a number.\n\nFor example, these environment variables will be read:\n\n1. ``OTEL_SOMETHING``\n2. ``OTEL_SOMETHING_ELSE_``\n3. ``OTEL_SOMETHING_ELSE_AND__ELSE``\n4. ``OTEL_SOMETHING_ELSE_AND_else``\n5. ``OTEL_SOMETHING_ELSE_AND_else2``\n\nThese won't:\n\n1. ``OPENTELEMETRY_PYTH_SOMETHING``\n2. ``OTEL_2_SOMETHING_AND__ELSE``\n3. ``OTEL_SOMETHING_%_ELSE``\n\nThe values stored in the environment variables can be found in an instance of\n``opentelemetry.configuration.Configuration``. This class can be instantiated\nfreely because instantiating it returns always the same object.\n\nFor example, if the environment variable\n``OTEL_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then\n``Configuration().meter_provider == \"my_meter_provider\"`` would be ``True``.\n\nNon defined attributes will always return ``None``. This is intended to make it\neasier to use the ``Configuration`` object in actual code, because it won't be\nnecessary to check for the attribute to be defined first.\n\nEnvironment variables used by OpenTelemetry\n-------------------------------------------\n\n1. OTEL_PYTHON_METER_PROVIDER\n2. OTEL_PYTHON_TRACER_PROVIDER\n\nThe value of these environment variables should be the name of the entry point\nthat points to the class that implements either provider. 
This OpenTelemetry\nAPI package provides one entry point for each, which can be found in the\nsetup.py file::\n\n entry_points={\n ...\n \"opentelemetry_meter_provider\": [\n \"default_meter_provider = \"\n \"opentelemetry.metrics:DefaultMeterProvider\"\n ],\n \"opentelemetry_tracer_provider\": [\n \"default_tracer_provider = \"\n \"opentelemetry.trace:DefaultTracerProvider\"\n ],\n }\n\nTo use the meter provider above, then the\n``OTEL_PYTHON_METER_PROVIDER`` should be set to\n``\"default_meter_provider\"`` (this is not actually necessary since the\nOpenTelemetry API provided providers are the default ones used if no\nconfiguration is found in the environment variables).\n\nConfiguration values that are exactly ``\"True\"`` or ``\"False\"`` will be\nconverted to its boolean values of ``True`` and ``False`` respectively.\n\nConfiguration values that can be casted to integers or floats will be casted.\n\nThis object can be used by any OpenTelemetry component, native or external.\nFor that reason, the ``Configuration`` object is designed to be immutable.\nIf a component would change the value of one of the ``Configuration`` object\nattributes then another component that relied on that value may break, leading\nto bugs that are very hard to debug. To avoid this situation, the preferred\napproach for components that need a different value than the one provided by\nthe ``Configuration`` object is to implement a mechanism that allows the user\nto override this value instead of changing it.\n\"\"\"\n\nfrom os import environ\nfrom re import fullmatch\nfrom typing import ClassVar, Dict, Optional, TypeVar, Union\n\nConfigValue = Union[str, bool, int, float]\n_T = TypeVar(\"_T\", ConfigValue, Optional[ConfigValue])\n\n\nclass Configuration:\n _instance = None # type: ClassVar[Optional[Configuration]]\n _config_map = {} # type: ClassVar[Dict[str, ConfigValue]]\n\n def __new__(cls) -> \"Configuration\":\n if cls._instance is not None:\n instance = cls._instance\n else:\n\n instance = super().__new__(cls)\n for key, value_str in environ.items():\n\n match = fullmatch(r\"OTEL_(PYTHON_)?([A-Za-z_][\\w_]*)\", key)\n\n if match is not None:\n\n key = match.group(2)\n value = value_str # type: ConfigValue\n\n if value_str == \"True\":\n value = True\n elif value_str == \"False\":\n value = False\n else:\n try:\n value = int(value_str)\n except ValueError:\n try:\n value = float(value_str)\n except ValueError:\n pass\n\n instance._config_map[key] = value\n\n cls._instance = instance\n\n return instance\n\n def __getattr__(self, name: str) -> Optional[ConfigValue]:\n return self._config_map.get(name)\n\n def __setattr__(self, name: str, value: ConfigValue) -> None:\n if name not in self._config_map.keys():\n self._config_map[name] = value\n else:\n raise AttributeError(name)\n\n def get(self, name: str, default: _T) -> _T:\n \"\"\"Use this typed method for dynamic access instead of `getattr`\n\n :rtype: str or bool or int or float or None\n \"\"\"\n return self._config_map.get(name, default)\n\n @classmethod\n def _reset(cls) -> None:\n \"\"\"\n This method \"resets\" the global configuration attributes\n\n It is not intended to be used by production code but by testing code\n only.\n \"\"\"\n\n if cls._instance:\n cls._instance._config_map.clear() # pylint: disable=protected-access\n cls._instance = None\n", "path": "opentelemetry-api/src/opentelemetry/configuration/__init__.py"}]} | 2,110 | 150 |
gh_patches_debug_56181 | rasdani/github-patches | git_diff | TOMToolkit__tom_base-196 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing dataclasses
Following the tom_base install instructions, I pip installed the requirements.txt and then tried
> ./manage.py migrate
which ended with the following error:
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "/Users/rstreet/software/tom_base/tom_alerts/urls.py", line 3, in <module>
from tom_alerts.views import BrokerQueryCreateView, BrokerQueryListView, BrokerQueryUpdateView, RunQueryView
File "/Users/rstreet/software/tom_base/tom_alerts/views.py", line 3, in <module>
from tom_alerts.alerts import get_service_class, get_service_classes
File "/Users/rstreet/software/tom_base/tom_alerts/alerts.py", line 5, in <module>
from dataclasses import dataclass
ModuleNotFoundError: No module named 'dataclasses'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2 from os import path
3
4 here = path.abspath(path.dirname(__file__))
5 with open(path.join(here, 'README.md'), encoding='utf-8') as f:
6 long_description = f.read()
7
8 setup(
9 name='tomtoolkit',
10 version='1.1.0',
11 description='The TOM Toolkit and base modules',
12 long_description=long_description,
13 long_description_content_type='text/markdown',
14 url='https://tomtoolkit.github.io',
15 author='TOM Toolkit Project',
16 author_email='[email protected]',
17 classifiers=[
18 'Development Status :: 3 - Alpha',
19 'Intended Audience :: Science/Research',
20 'License :: OSI Approved :: BSD License',
21 'Operating System :: OS Independent',
22 'Programming Language :: Python :: 3',
23 'Programming Language :: Python :: 3.7',
24 'Topic :: Scientific/Engineering :: Astronomy',
25 'Topic :: Scientific/Engineering :: Physics'
26 ],
27 keywords=['tomtoolkit', 'astronomy', 'astrophysics', 'cosmology', 'science', 'fits', 'observatory'],
28 packages=find_packages(),
29 install_requires=[
30 'django',
31 'django-bootstrap4',
32 'django-extensions',
33 'django-filter',
34 'django-contrib-comments',
35 'django-gravatar2',
36 'django-crispy-forms',
37 'django-guardian',
38 'numpy',
39 'python-dateutil',
40 'requests',
41 'astroquery',
42 'astropy',
43 'astroplan',
44 'plotly',
45 'matplotlib',
46 'pillow',
47 'fits2image',
48 'specutils',
49 ],
50 extras_require={
51 'test': ['factory_boy']
52 },
53 include_package_data=True,
54 )
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -46,6 +46,7 @@
'pillow',
'fits2image',
'specutils',
+ "dataclasses; python_version < '3.7'",
],
extras_require={
'test': ['factory_boy']
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -46,6 +46,7 @@\n 'pillow',\n 'fits2image',\n 'specutils',\n+ \"dataclasses; python_version < '3.7'\",\n ],\n extras_require={\n 'test': ['factory_boy']\n", "issue": "Missing dataclasses\nFollowing the tom_base install instructions, I pip installed the requirements.txt and then tried \r\n> ./manage.py migrate\r\n\r\nwhich ended with the following error:\r\n File \"<frozen importlib._bootstrap_external>\", line 678, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 205, in _call_with_frames_removed\r\n File \"/Users/rstreet/software/tom_base/tom_alerts/urls.py\", line 3, in <module>\r\n from tom_alerts.views import BrokerQueryCreateView, BrokerQueryListView, BrokerQueryUpdateView, RunQueryView\r\n File \"/Users/rstreet/software/tom_base/tom_alerts/views.py\", line 3, in <module>\r\n from tom_alerts.alerts import get_service_class, get_service_classes\r\n File \"/Users/rstreet/software/tom_base/tom_alerts/alerts.py\", line 5, in <module>\r\n from dataclasses import dataclass\r\nModuleNotFoundError: No module named 'dataclasses'\r\n\n", "before_files": [{"content": "from setuptools import setup, find_packages\nfrom os import path\n\nhere = path.abspath(path.dirname(__file__))\nwith open(path.join(here, 'README.md'), encoding='utf-8') as f:\n long_description = f.read()\n\nsetup(\n name='tomtoolkit',\n version='1.1.0',\n description='The TOM Toolkit and base modules',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://tomtoolkit.github.io',\n author='TOM Toolkit Project',\n author_email='[email protected]',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Astronomy',\n 'Topic :: Scientific/Engineering :: Physics'\n ],\n keywords=['tomtoolkit', 'astronomy', 'astrophysics', 'cosmology', 'science', 'fits', 'observatory'],\n packages=find_packages(),\n install_requires=[\n 'django',\n 'django-bootstrap4',\n 'django-extensions',\n 'django-filter',\n 'django-contrib-comments',\n 'django-gravatar2',\n 'django-crispy-forms',\n 'django-guardian',\n 'numpy',\n 'python-dateutil',\n 'requests',\n 'astroquery',\n 'astropy',\n 'astroplan',\n 'plotly',\n 'matplotlib',\n 'pillow',\n 'fits2image',\n 'specutils',\n ],\n extras_require={\n 'test': ['factory_boy']\n },\n include_package_data=True,\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\nfrom os import path\n\nhere = path.abspath(path.dirname(__file__))\nwith open(path.join(here, 'README.md'), encoding='utf-8') as f:\n long_description = f.read()\n\nsetup(\n name='tomtoolkit',\n version='1.1.0',\n description='The TOM Toolkit and base modules',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://tomtoolkit.github.io',\n author='TOM Toolkit Project',\n author_email='[email protected]',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Astronomy',\n 'Topic :: Scientific/Engineering :: Physics'\n ],\n 
keywords=['tomtoolkit', 'astronomy', 'astrophysics', 'cosmology', 'science', 'fits', 'observatory'],\n packages=find_packages(),\n install_requires=[\n 'django',\n 'django-bootstrap4',\n 'django-extensions',\n 'django-filter',\n 'django-contrib-comments',\n 'django-gravatar2',\n 'django-crispy-forms',\n 'django-guardian',\n 'numpy',\n 'python-dateutil',\n 'requests',\n 'astroquery',\n 'astropy',\n 'astroplan',\n 'plotly',\n 'matplotlib',\n 'pillow',\n 'fits2image',\n 'specutils',\n \"dataclasses; python_version < '3.7'\",\n ],\n extras_require={\n 'test': ['factory_boy']\n },\n include_package_data=True,\n)\n", "path": "setup.py"}]} | 951 | 77 |
gh_patches_debug_29137 | rasdani/github-patches | git_diff | spack__spack-4584 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
building flex with +lex variant fails
On an older system (suse 13 with python 2.7.6), the symlink code in the package fails entirely.
@mjwoods
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/flex/package.py`
Content:
```
1 ##############################################################################
2 # Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/llnl/spack
10 # Please also see the LICENSE file for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25 from spack import *
26 import os
27
28
29 class Flex(AutotoolsPackage):
30 """Flex is a tool for generating scanners."""
31
32 homepage = "https://github.com/westes/flex"
33 url = "https://github.com/westes/flex/releases/download/v2.6.1/flex-2.6.1.tar.gz"
34
35 version('2.6.3', 'a5f65570cd9107ec8a8ec88f17b31bb1')
36 # Problematic version:
37 # See issue #2554; https://github.com/westes/flex/issues/113
38 # version('2.6.2', 'cc6d76c333db7653d5caf423a3335239')
39 version('2.6.1', '05bcd8fb629e0ae130311e8a6106fa82')
40 version('2.6.0', '760be2ee9433e822b6eb65318311c19d')
41 version('2.5.39', '5865e76ac69c05699f476515592750d7')
42
43 variant('lex', default=True,
44 description="Provide symlinks for lex and libl")
45
46 depends_on('bison', type='build')
47 depends_on('[email protected]:', type='build')
48 depends_on('help2man', type='build')
49
50 # Older tarballs don't come with a configure script
51 depends_on('m4', type='build')
52 depends_on('autoconf', type='build', when='@:2.6.0')
53 depends_on('automake', type='build', when='@:2.6.0')
54 depends_on('libtool', type='build', when='@:2.6.0')
55
56 def url_for_version(self, version):
57 url = "https://github.com/westes/flex"
58 if version >= Version('2.6.1'):
59 url += "/releases/download/v{0}/flex-{0}.tar.gz".format(version)
60 elif version == Version('2.6.0'):
61 url += "/archive/v{0}.tar.gz".format(version)
62 elif version >= Version('2.5.37'):
63 url += "/archive/flex-{0}.tar.gz".format(version)
64 else:
65 url += "/archive/flex-{0}.tar.gz".format(version.dashed)
66
67 return url
68
69 @run_after('install')
70 def symlink_lex(self):
71 if self.spec.satisfies('+lex'):
72 dso = dso_suffix
73 for dir, flex, lex in \
74 ((self.prefix.bin, 'flex', 'lex'),
75 (self.prefix.lib, 'libfl.a', 'libl.a'),
76 (self.prefix.lib, 'libfl.' + dso, 'libl.' + dso)):
77 with working_dir(dir):
78 if (os.path.isfile(flex) and not
79 os.path.lexists(lex)):
80 symlink(flex, lex)
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/var/spack/repos/builtin/packages/flex/package.py b/var/spack/repos/builtin/packages/flex/package.py
--- a/var/spack/repos/builtin/packages/flex/package.py
+++ b/var/spack/repos/builtin/packages/flex/package.py
@@ -32,6 +32,7 @@
homepage = "https://github.com/westes/flex"
url = "https://github.com/westes/flex/releases/download/v2.6.1/flex-2.6.1.tar.gz"
+ version('2.6.4', '2882e3179748cc9f9c23ec593d6adc8d')
version('2.6.3', 'a5f65570cd9107ec8a8ec88f17b31bb1')
# Problematic version:
# See issue #2554; https://github.com/westes/flex/issues/113
@@ -68,13 +69,17 @@
@run_after('install')
def symlink_lex(self):
+ """Install symlinks for lex compatibility."""
if self.spec.satisfies('+lex'):
dso = dso_suffix
for dir, flex, lex in \
- ((self.prefix.bin, 'flex', 'lex'),
- (self.prefix.lib, 'libfl.a', 'libl.a'),
- (self.prefix.lib, 'libfl.' + dso, 'libl.' + dso)):
- with working_dir(dir):
- if (os.path.isfile(flex) and not
- os.path.lexists(lex)):
- symlink(flex, lex)
+ ((self.prefix.bin, 'flex', 'lex'),
+ (self.prefix.lib, 'libfl.a', 'libl.a'),
+ (self.prefix.lib, 'libfl.' + dso, 'libl.' + dso),
+ (self.prefix.lib64, 'libfl.a', 'libl.a'),
+ (self.prefix.lib64, 'libfl.' + dso, 'libl.' + dso)):
+
+ if os.path.isdir(dir):
+ with working_dir(dir):
+ if (os.path.isfile(flex) and not os.path.lexists(lex)):
+ symlink(flex, lex)
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/flex/package.py b/var/spack/repos/builtin/packages/flex/package.py\n--- a/var/spack/repos/builtin/packages/flex/package.py\n+++ b/var/spack/repos/builtin/packages/flex/package.py\n@@ -32,6 +32,7 @@\n homepage = \"https://github.com/westes/flex\"\n url = \"https://github.com/westes/flex/releases/download/v2.6.1/flex-2.6.1.tar.gz\"\n \n+ version('2.6.4', '2882e3179748cc9f9c23ec593d6adc8d')\n version('2.6.3', 'a5f65570cd9107ec8a8ec88f17b31bb1')\n # Problematic version:\n # See issue #2554; https://github.com/westes/flex/issues/113\n@@ -68,13 +69,17 @@\n \n @run_after('install')\n def symlink_lex(self):\n+ \"\"\"Install symlinks for lex compatibility.\"\"\"\n if self.spec.satisfies('+lex'):\n dso = dso_suffix\n for dir, flex, lex in \\\n- ((self.prefix.bin, 'flex', 'lex'),\n- (self.prefix.lib, 'libfl.a', 'libl.a'),\n- (self.prefix.lib, 'libfl.' + dso, 'libl.' + dso)):\n- with working_dir(dir):\n- if (os.path.isfile(flex) and not\n- os.path.lexists(lex)):\n- symlink(flex, lex)\n+ ((self.prefix.bin, 'flex', 'lex'),\n+ (self.prefix.lib, 'libfl.a', 'libl.a'),\n+ (self.prefix.lib, 'libfl.' + dso, 'libl.' + dso),\n+ (self.prefix.lib64, 'libfl.a', 'libl.a'),\n+ (self.prefix.lib64, 'libfl.' + dso, 'libl.' + dso)):\n+\n+ if os.path.isdir(dir):\n+ with working_dir(dir):\n+ if (os.path.isfile(flex) and not os.path.lexists(lex)):\n+ symlink(flex, lex)\n", "issue": "building flex with +lex variant fails\nUsing an older system (suse 13 with python 2.7.6) and the symlink code in the package fails entirely.\r\n@mjwoods \r\n\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the LICENSE file for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\nimport os\n\n\nclass Flex(AutotoolsPackage):\n \"\"\"Flex is a tool for generating scanners.\"\"\"\n\n homepage = \"https://github.com/westes/flex\"\n url = \"https://github.com/westes/flex/releases/download/v2.6.1/flex-2.6.1.tar.gz\"\n\n version('2.6.3', 'a5f65570cd9107ec8a8ec88f17b31bb1')\n # Problematic version:\n # See issue #2554; https://github.com/westes/flex/issues/113\n # version('2.6.2', 'cc6d76c333db7653d5caf423a3335239')\n version('2.6.1', '05bcd8fb629e0ae130311e8a6106fa82')\n version('2.6.0', '760be2ee9433e822b6eb65318311c19d')\n version('2.5.39', '5865e76ac69c05699f476515592750d7')\n\n variant('lex', default=True,\n description=\"Provide symlinks for lex and libl\")\n\n depends_on('bison', type='build')\n depends_on('[email protected]:', type='build')\n depends_on('help2man', type='build')\n\n # Older tarballs don't come with a configure script\n depends_on('m4', type='build')\n depends_on('autoconf', type='build', when='@:2.6.0')\n depends_on('automake', type='build', when='@:2.6.0')\n depends_on('libtool', type='build', when='@:2.6.0')\n\n def url_for_version(self, version):\n url = \"https://github.com/westes/flex\"\n if version >= Version('2.6.1'):\n url += \"/releases/download/v{0}/flex-{0}.tar.gz\".format(version)\n elif version == Version('2.6.0'):\n url += \"/archive/v{0}.tar.gz\".format(version)\n elif version >= Version('2.5.37'):\n url += \"/archive/flex-{0}.tar.gz\".format(version)\n else:\n url += \"/archive/flex-{0}.tar.gz\".format(version.dashed)\n\n return url\n\n @run_after('install')\n def symlink_lex(self):\n if self.spec.satisfies('+lex'):\n dso = dso_suffix\n for dir, flex, lex in \\\n ((self.prefix.bin, 'flex', 'lex'),\n (self.prefix.lib, 'libfl.a', 'libl.a'),\n (self.prefix.lib, 'libfl.' + dso, 'libl.' + dso)):\n with working_dir(dir):\n if (os.path.isfile(flex) and not\n os.path.lexists(lex)):\n symlink(flex, lex)\n", "path": "var/spack/repos/builtin/packages/flex/package.py"}], "after_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the LICENSE file for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\nimport os\n\n\nclass Flex(AutotoolsPackage):\n \"\"\"Flex is a tool for generating scanners.\"\"\"\n\n homepage = \"https://github.com/westes/flex\"\n url = \"https://github.com/westes/flex/releases/download/v2.6.1/flex-2.6.1.tar.gz\"\n\n version('2.6.4', '2882e3179748cc9f9c23ec593d6adc8d')\n version('2.6.3', 'a5f65570cd9107ec8a8ec88f17b31bb1')\n # Problematic version:\n # See issue #2554; https://github.com/westes/flex/issues/113\n # version('2.6.2', 'cc6d76c333db7653d5caf423a3335239')\n version('2.6.1', '05bcd8fb629e0ae130311e8a6106fa82')\n version('2.6.0', '760be2ee9433e822b6eb65318311c19d')\n version('2.5.39', '5865e76ac69c05699f476515592750d7')\n\n variant('lex', default=True,\n description=\"Provide symlinks for lex and libl\")\n\n depends_on('bison', type='build')\n depends_on('[email protected]:', type='build')\n depends_on('help2man', type='build')\n\n # Older tarballs don't come with a configure script\n depends_on('m4', type='build')\n depends_on('autoconf', type='build', when='@:2.6.0')\n depends_on('automake', type='build', when='@:2.6.0')\n depends_on('libtool', type='build', when='@:2.6.0')\n\n def url_for_version(self, version):\n url = \"https://github.com/westes/flex\"\n if version >= Version('2.6.1'):\n url += \"/releases/download/v{0}/flex-{0}.tar.gz\".format(version)\n elif version == Version('2.6.0'):\n url += \"/archive/v{0}.tar.gz\".format(version)\n elif version >= Version('2.5.37'):\n url += \"/archive/flex-{0}.tar.gz\".format(version)\n else:\n url += \"/archive/flex-{0}.tar.gz\".format(version.dashed)\n\n return url\n\n @run_after('install')\n def symlink_lex(self):\n \"\"\"Install symlinks for lex compatibility.\"\"\"\n if self.spec.satisfies('+lex'):\n dso = dso_suffix\n for dir, flex, lex in \\\n ((self.prefix.bin, 'flex', 'lex'),\n (self.prefix.lib, 'libfl.a', 'libl.a'),\n (self.prefix.lib, 'libfl.' + dso, 'libl.' + dso),\n (self.prefix.lib64, 'libfl.a', 'libl.a'),\n (self.prefix.lib64, 'libfl.' + dso, 'libl.' + dso)):\n\n if os.path.isdir(dir):\n with working_dir(dir):\n if (os.path.isfile(flex) and not os.path.lexists(lex)):\n symlink(flex, lex)\n", "path": "var/spack/repos/builtin/packages/flex/package.py"}]} | 1,468 | 524 |
gh_patches_debug_15053 | rasdani/github-patches | git_diff | deis__deis-4373 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Better error message when registration is disabled
When `/deis/controller/registrationMode` is `disabled`, an attempt to register returns
```
Registration failed: {"detail":"Authentication credentials were not provided."}
```
This message is misleading. It should explicitly say that registration is disabled.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `controller/api/permissions.py`
Content:
```
1 from rest_framework import permissions
2 from django.conf import settings
3 from django.contrib.auth.models import AnonymousUser
4
5 from api import models
6
7
8 def is_app_user(request, obj):
9 if request.user.is_superuser or \
10 isinstance(obj, models.App) and obj.owner == request.user or \
11 hasattr(obj, 'app') and obj.app.owner == request.user:
12 return True
13 elif request.user.has_perm('use_app', obj) or \
14 hasattr(obj, 'app') and request.user.has_perm('use_app', obj.app):
15 return request.method != 'DELETE'
16 else:
17 return False
18
19
20 class IsAnonymous(permissions.BasePermission):
21 """
22 View permission to allow anonymous users.
23 """
24
25 def has_permission(self, request, view):
26 """
27 Return `True` if permission is granted, `False` otherwise.
28 """
29 return type(request.user) is AnonymousUser
30
31
32 class IsOwner(permissions.BasePermission):
33 """
34 Object-level permission to allow only owners of an object to access it.
35 Assumes the model instance has an `owner` attribute.
36 """
37
38 def has_object_permission(self, request, view, obj):
39 if hasattr(obj, 'owner'):
40 return obj.owner == request.user
41 else:
42 return False
43
44
45 class IsOwnerOrAdmin(permissions.BasePermission):
46 """
47 Object-level permission to allow only owners of an object or administrators to access it.
48 Assumes the model instance has an `owner` attribute.
49 """
50 def has_object_permission(self, request, view, obj):
51 if request.user.is_superuser:
52 return True
53 if hasattr(obj, 'owner'):
54 return obj.owner == request.user
55 else:
56 return False
57
58
59 class IsAppUser(permissions.BasePermission):
60 """
61 Object-level permission to allow owners or collaborators to access
62 an app-related model.
63 """
64 def has_object_permission(self, request, view, obj):
65 return is_app_user(request, obj)
66
67
68 class IsAdmin(permissions.BasePermission):
69 """
70 View permission to allow only admins.
71 """
72
73 def has_permission(self, request, view):
74 """
75 Return `True` if permission is granted, `False` otherwise.
76 """
77 return request.user.is_superuser
78
79
80 class IsAdminOrSafeMethod(permissions.BasePermission):
81 """
82 View permission to allow only admins to use unsafe methods
83 including POST, PUT, DELETE.
84
85 This allows
86 """
87
88 def has_permission(self, request, view):
89 """
90 Return `True` if permission is granted, `False` otherwise.
91 """
92 return request.method in permissions.SAFE_METHODS or request.user.is_superuser
93
94
95 class HasRegistrationAuth(permissions.BasePermission):
96 """
97 Checks to see if registration is enabled
98 """
99 def has_permission(self, request, view):
100 """
101 If settings.REGISTRATION_MODE does not exist, such as during a test, return True
102 Return `True` if permission is granted, `False` otherwise.
103 """
104 try:
105 if settings.REGISTRATION_MODE == 'disabled':
106 return False
107 if settings.REGISTRATION_MODE == 'enabled':
108 return True
109 elif settings.REGISTRATION_MODE == 'admin_only':
110 return request.user.is_superuser
111 else:
112 raise Exception("{} is not a valid registation mode"
113 .format(settings.REGISTRATION_MODE))
114 except AttributeError:
115 return True
116
117
118 class HasBuilderAuth(permissions.BasePermission):
119 """
120 View permission to allow builder to perform actions
121 with a special HTTP header
122 """
123
124 def has_permission(self, request, view):
125 """
126 Return `True` if permission is granted, `False` otherwise.
127 """
128 auth_header = request.environ.get('HTTP_X_DEIS_BUILDER_AUTH')
129 if not auth_header:
130 return False
131 return auth_header == settings.BUILDER_KEY
132
133
134 class CanRegenerateToken(permissions.BasePermission):
135 """
136 Checks if a user can regenerate a token
137 """
138
139 def has_permission(self, request, view):
140 """
141 Return `True` if permission is granted, `False` otherwise.
142 """
143 if 'username' in request.data or 'all' in request.data:
144 return request.user.is_superuser
145 else:
146 return True
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/controller/api/permissions.py b/controller/api/permissions.py
--- a/controller/api/permissions.py
+++ b/controller/api/permissions.py
@@ -1,3 +1,5 @@
+
+from rest_framework import exceptions
from rest_framework import permissions
from django.conf import settings
from django.contrib.auth.models import AnonymousUser
@@ -103,7 +105,7 @@
"""
try:
if settings.REGISTRATION_MODE == 'disabled':
- return False
+ raise exceptions.PermissionDenied('Registration is disabled')
if settings.REGISTRATION_MODE == 'enabled':
return True
elif settings.REGISTRATION_MODE == 'admin_only':
| {"golden_diff": "diff --git a/controller/api/permissions.py b/controller/api/permissions.py\n--- a/controller/api/permissions.py\n+++ b/controller/api/permissions.py\n@@ -1,3 +1,5 @@\n+\n+from rest_framework import exceptions\n from rest_framework import permissions\n from django.conf import settings\n from django.contrib.auth.models import AnonymousUser\n@@ -103,7 +105,7 @@\n \"\"\"\n try:\n if settings.REGISTRATION_MODE == 'disabled':\n- return False\n+ raise exceptions.PermissionDenied('Registration is disabled')\n if settings.REGISTRATION_MODE == 'enabled':\n return True\n elif settings.REGISTRATION_MODE == 'admin_only':\n", "issue": "Better error message when registration is disabled\nWhen `/deis/controller/registrationMode` is `disabled`, attempt to register returns\n\n```\nRegistration failed: {\"detail\":\"Authentication credentials were not provided.\"}\n```\n\nThis message is misleading. It should explicitly say that registration is disabled.\n\n", "before_files": [{"content": "from rest_framework import permissions\nfrom django.conf import settings\nfrom django.contrib.auth.models import AnonymousUser\n\nfrom api import models\n\n\ndef is_app_user(request, obj):\n if request.user.is_superuser or \\\n isinstance(obj, models.App) and obj.owner == request.user or \\\n hasattr(obj, 'app') and obj.app.owner == request.user:\n return True\n elif request.user.has_perm('use_app', obj) or \\\n hasattr(obj, 'app') and request.user.has_perm('use_app', obj.app):\n return request.method != 'DELETE'\n else:\n return False\n\n\nclass IsAnonymous(permissions.BasePermission):\n \"\"\"\n View permission to allow anonymous users.\n \"\"\"\n\n def has_permission(self, request, view):\n \"\"\"\n Return `True` if permission is granted, `False` otherwise.\n \"\"\"\n return type(request.user) is AnonymousUser\n\n\nclass IsOwner(permissions.BasePermission):\n \"\"\"\n Object-level permission to allow only owners of an object to access it.\n Assumes the model instance has an `owner` attribute.\n \"\"\"\n\n def has_object_permission(self, request, view, obj):\n if hasattr(obj, 'owner'):\n return obj.owner == request.user\n else:\n return False\n\n\nclass IsOwnerOrAdmin(permissions.BasePermission):\n \"\"\"\n Object-level permission to allow only owners of an object or administrators to access it.\n Assumes the model instance has an `owner` attribute.\n \"\"\"\n def has_object_permission(self, request, view, obj):\n if request.user.is_superuser:\n return True\n if hasattr(obj, 'owner'):\n return obj.owner == request.user\n else:\n return False\n\n\nclass IsAppUser(permissions.BasePermission):\n \"\"\"\n Object-level permission to allow owners or collaborators to access\n an app-related model.\n \"\"\"\n def has_object_permission(self, request, view, obj):\n return is_app_user(request, obj)\n\n\nclass IsAdmin(permissions.BasePermission):\n \"\"\"\n View permission to allow only admins.\n \"\"\"\n\n def has_permission(self, request, view):\n \"\"\"\n Return `True` if permission is granted, `False` otherwise.\n \"\"\"\n return request.user.is_superuser\n\n\nclass IsAdminOrSafeMethod(permissions.BasePermission):\n \"\"\"\n View permission to allow only admins to use unsafe methods\n including POST, PUT, DELETE.\n\n This allows\n \"\"\"\n\n def has_permission(self, request, view):\n \"\"\"\n Return `True` if permission is granted, `False` otherwise.\n \"\"\"\n return request.method in permissions.SAFE_METHODS or request.user.is_superuser\n\n\nclass HasRegistrationAuth(permissions.BasePermission):\n \"\"\"\n 
Checks to see if registration is enabled\n \"\"\"\n def has_permission(self, request, view):\n \"\"\"\n If settings.REGISTRATION_MODE does not exist, such as during a test, return True\n Return `True` if permission is granted, `False` otherwise.\n \"\"\"\n try:\n if settings.REGISTRATION_MODE == 'disabled':\n return False\n if settings.REGISTRATION_MODE == 'enabled':\n return True\n elif settings.REGISTRATION_MODE == 'admin_only':\n return request.user.is_superuser\n else:\n raise Exception(\"{} is not a valid registation mode\"\n .format(settings.REGISTRATION_MODE))\n except AttributeError:\n return True\n\n\nclass HasBuilderAuth(permissions.BasePermission):\n \"\"\"\n View permission to allow builder to perform actions\n with a special HTTP header\n \"\"\"\n\n def has_permission(self, request, view):\n \"\"\"\n Return `True` if permission is granted, `False` otherwise.\n \"\"\"\n auth_header = request.environ.get('HTTP_X_DEIS_BUILDER_AUTH')\n if not auth_header:\n return False\n return auth_header == settings.BUILDER_KEY\n\n\nclass CanRegenerateToken(permissions.BasePermission):\n \"\"\"\n Checks if a user can regenerate a token\n \"\"\"\n\n def has_permission(self, request, view):\n \"\"\"\n Return `True` if permission is granted, `False` otherwise.\n \"\"\"\n if 'username' in request.data or 'all' in request.data:\n return request.user.is_superuser\n else:\n return True\n", "path": "controller/api/permissions.py"}], "after_files": [{"content": "\nfrom rest_framework import exceptions\nfrom rest_framework import permissions\nfrom django.conf import settings\nfrom django.contrib.auth.models import AnonymousUser\n\nfrom api import models\n\n\ndef is_app_user(request, obj):\n if request.user.is_superuser or \\\n isinstance(obj, models.App) and obj.owner == request.user or \\\n hasattr(obj, 'app') and obj.app.owner == request.user:\n return True\n elif request.user.has_perm('use_app', obj) or \\\n hasattr(obj, 'app') and request.user.has_perm('use_app', obj.app):\n return request.method != 'DELETE'\n else:\n return False\n\n\nclass IsAnonymous(permissions.BasePermission):\n \"\"\"\n View permission to allow anonymous users.\n \"\"\"\n\n def has_permission(self, request, view):\n \"\"\"\n Return `True` if permission is granted, `False` otherwise.\n \"\"\"\n return type(request.user) is AnonymousUser\n\n\nclass IsOwner(permissions.BasePermission):\n \"\"\"\n Object-level permission to allow only owners of an object to access it.\n Assumes the model instance has an `owner` attribute.\n \"\"\"\n\n def has_object_permission(self, request, view, obj):\n if hasattr(obj, 'owner'):\n return obj.owner == request.user\n else:\n return False\n\n\nclass IsOwnerOrAdmin(permissions.BasePermission):\n \"\"\"\n Object-level permission to allow only owners of an object or administrators to access it.\n Assumes the model instance has an `owner` attribute.\n \"\"\"\n def has_object_permission(self, request, view, obj):\n if request.user.is_superuser:\n return True\n if hasattr(obj, 'owner'):\n return obj.owner == request.user\n else:\n return False\n\n\nclass IsAppUser(permissions.BasePermission):\n \"\"\"\n Object-level permission to allow owners or collaborators to access\n an app-related model.\n \"\"\"\n def has_object_permission(self, request, view, obj):\n return is_app_user(request, obj)\n\n\nclass IsAdmin(permissions.BasePermission):\n \"\"\"\n View permission to allow only admins.\n \"\"\"\n\n def has_permission(self, request, view):\n \"\"\"\n Return `True` if permission is granted, `False` 
otherwise.\n \"\"\"\n return request.user.is_superuser\n\n\nclass IsAdminOrSafeMethod(permissions.BasePermission):\n \"\"\"\n View permission to allow only admins to use unsafe methods\n including POST, PUT, DELETE.\n\n This allows\n \"\"\"\n\n def has_permission(self, request, view):\n \"\"\"\n Return `True` if permission is granted, `False` otherwise.\n \"\"\"\n return request.method in permissions.SAFE_METHODS or request.user.is_superuser\n\n\nclass HasRegistrationAuth(permissions.BasePermission):\n \"\"\"\n Checks to see if registration is enabled\n \"\"\"\n def has_permission(self, request, view):\n \"\"\"\n If settings.REGISTRATION_MODE does not exist, such as during a test, return True\n Return `True` if permission is granted, `False` otherwise.\n \"\"\"\n try:\n if settings.REGISTRATION_MODE == 'disabled':\n raise exceptions.PermissionDenied('Registration is disabled')\n if settings.REGISTRATION_MODE == 'enabled':\n return True\n elif settings.REGISTRATION_MODE == 'admin_only':\n return request.user.is_superuser\n else:\n raise Exception(\"{} is not a valid registation mode\"\n .format(settings.REGISTRATION_MODE))\n except AttributeError:\n return True\n\n\nclass HasBuilderAuth(permissions.BasePermission):\n \"\"\"\n View permission to allow builder to perform actions\n with a special HTTP header\n \"\"\"\n\n def has_permission(self, request, view):\n \"\"\"\n Return `True` if permission is granted, `False` otherwise.\n \"\"\"\n auth_header = request.environ.get('HTTP_X_DEIS_BUILDER_AUTH')\n if not auth_header:\n return False\n return auth_header == settings.BUILDER_KEY\n\n\nclass CanRegenerateToken(permissions.BasePermission):\n \"\"\"\n Checks if a user can regenerate a token\n \"\"\"\n\n def has_permission(self, request, view):\n \"\"\"\n Return `True` if permission is granted, `False` otherwise.\n \"\"\"\n if 'username' in request.data or 'all' in request.data:\n return request.user.is_superuser\n else:\n return True\n", "path": "controller/api/permissions.py"}]} | 1,546 | 141 |
gh_patches_debug_37554 | rasdani/github-patches | git_diff | litestar-org__litestar-1695 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This assumption is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/contrib/jwt/jwt_token.py`
Content:
```
1 from __future__ import annotations
2
3 from dataclasses import asdict, dataclass, field
4 from datetime import datetime, timezone
5 from typing import cast
6
7 from jose import JWSError, JWTError, jwt
8
9 from litestar.exceptions import ImproperlyConfiguredException, NotAuthorizedException
10
11 __all__ = ("Token",)
12
13
14 def _normalize_datetime(value: datetime) -> datetime:
15 """Convert the given value into UTC and strip microseconds.
16
17 Args:
18 value: A datetime instance
19
20 Returns:
21 A datetime instance
22 """
23 if value.tzinfo is not None:
24 value.astimezone(timezone.utc)
25
26 return value.replace(microsecond=0)
27
28
29 @dataclass
30 class Token:
31 """JWT Token DTO."""
32
33 exp: datetime
34 """Expiration - datetime for token expiration."""
35 sub: str
36 """Subject - usually a unique identifier of the user or equivalent entity."""
37 iat: datetime = field(default_factory=lambda: _normalize_datetime(datetime.now(timezone.utc)))
38 """Issued at - should always be current now."""
39 iss: str | None = field(default=None)
40 """Issuer - optional unique identifier for the issuer."""
41 aud: str | None = field(default=None)
42 """Audience - intended audience."""
43 jti: str | None = field(default=None)
44 """JWT ID - a unique identifier of the JWT between different issuers."""
45
46 def __post_init__(self) -> None:
47 if len(self.sub) < 1:
48 raise ImproperlyConfiguredException("sub must be a string with a length greater than 0")
49
50 if isinstance(self.exp, datetime) and (
51 (exp := _normalize_datetime(self.exp))
52 and exp.timestamp() >= _normalize_datetime(datetime.now(timezone.utc)).timestamp()
53 ):
54 self.exp = exp
55 else:
56 raise ImproperlyConfiguredException("exp value must be a datetime in the future")
57
58 if isinstance(self.iat, datetime) and (
59 (iat := _normalize_datetime(self.iat))
60 and iat.timestamp() <= _normalize_datetime(datetime.now(timezone.utc)).timestamp()
61 ):
62 self.iat = iat
63 else:
64 raise ImproperlyConfiguredException("iat must be a current or past time")
65
66 @staticmethod
67 def decode(encoded_token: str, secret: str | dict[str, str], algorithm: str) -> Token:
68 """Decode a passed in token string and returns a Token instance.
69
70 Args:
71 encoded_token: A base64 string containing an encoded JWT.
72 secret: The secret with which the JWT is encoded. It may optionally be an individual JWK or JWS set dict
73 algorithm: The algorithm used to encode the JWT.
74
75 Returns:
76 A decoded Token instance.
77
78 Raises:
79 NotAuthorizedException: If the token is invalid.
80 """
81 try:
82 payload = jwt.decode(token=encoded_token, key=secret, algorithms=[algorithm], options={"verify_aud": False})
83 exp = datetime.fromtimestamp(payload.pop("exp"), tz=timezone.utc)
84 iat = datetime.fromtimestamp(payload.pop("iat"), tz=timezone.utc)
85 return Token(exp=exp, iat=iat, **payload)
86 except (KeyError, JWTError, ImproperlyConfiguredException) as e:
87 raise NotAuthorizedException("Invalid token") from e
88
89 def encode(self, secret: str, algorithm: str) -> str:
90 """Encode the token instance into a string.
91
92 Args:
93 secret: The secret with which the JWT is encoded.
94 algorithm: The algorithm used to encode the JWT.
95
96 Returns:
97 An encoded token string.
98
99 Raises:
100 ImproperlyConfiguredException: If encoding fails.
101 """
102 try:
103 return cast(
104 "str",
105 jwt.encode(
106 claims={k: v for k, v in asdict(self).items() if v is not None}, key=secret, algorithm=algorithm
107 ),
108 )
109 except (JWTError, JWSError) as e:
110 raise ImproperlyConfiguredException("Failed to encode token") from e
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/litestar/contrib/jwt/jwt_token.py b/litestar/contrib/jwt/jwt_token.py
--- a/litestar/contrib/jwt/jwt_token.py
+++ b/litestar/contrib/jwt/jwt_token.py
@@ -1,13 +1,18 @@
from __future__ import annotations
+import dataclasses
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
-from typing import cast
+from typing import TYPE_CHECKING, Any, cast
from jose import JWSError, JWTError, jwt
from litestar.exceptions import ImproperlyConfiguredException, NotAuthorizedException
+if TYPE_CHECKING:
+ from typing_extensions import Self
+
+
__all__ = ("Token",)
@@ -42,6 +47,8 @@
"""Audience - intended audience."""
jti: str | None = field(default=None)
"""JWT ID - a unique identifier of the JWT between different issuers."""
+ extras: dict[str, Any] = field(default_factory=dict)
+ """Extra fields that were found on the JWT token."""
def __post_init__(self) -> None:
if len(self.sub) < 1:
@@ -63,8 +70,8 @@
else:
raise ImproperlyConfiguredException("iat must be a current or past time")
- @staticmethod
- def decode(encoded_token: str, secret: str | dict[str, str], algorithm: str) -> Token:
+ @classmethod
+ def decode(cls, encoded_token: str, secret: str | dict[str, str], algorithm: str) -> Self:
"""Decode a passed in token string and returns a Token instance.
Args:
@@ -82,7 +89,12 @@
payload = jwt.decode(token=encoded_token, key=secret, algorithms=[algorithm], options={"verify_aud": False})
exp = datetime.fromtimestamp(payload.pop("exp"), tz=timezone.utc)
iat = datetime.fromtimestamp(payload.pop("iat"), tz=timezone.utc)
- return Token(exp=exp, iat=iat, **payload)
+ field_names = {f.name for f in dataclasses.fields(Token)}
+ extra_fields = payload.keys() - field_names
+ extras = payload.pop("extras", {})
+ for key in extra_fields:
+ extras[key] = payload.pop(key)
+ return cls(exp=exp, iat=iat, **payload, extras=extras)
except (KeyError, JWTError, ImproperlyConfiguredException) as e:
raise NotAuthorizedException("Invalid token") from e
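Editorial aside on the diff above (not part of the dataset record): a minimal usage sketch of the patched `Token.decode`, assuming the class is importable from `litestar.contrib.jwt.jwt_token` as the file path suggests; the secret and algorithm are placeholder values, not values taken from the project.

```python
# Sketch only: exercises the `extras` round trip introduced by the patch above.
from datetime import datetime, timedelta, timezone

from litestar.contrib.jwt.jwt_token import Token

token = Token(
    exp=datetime.now(timezone.utc) + timedelta(hours=1),
    sub="user-123",
    extras={"role": "admin"},  # arbitrary claims are now carried alongside the registered ones
)
encoded = token.encode(secret="placeholder-secret", algorithm="HS256")

decoded = Token.decode(encoded_token=encoded, secret="placeholder-secret", algorithm="HS256")
assert decoded.extras["role"] == "admin"  # unknown claims are collected into `extras`
```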
| {"golden_diff": "diff --git a/litestar/contrib/jwt/jwt_token.py b/litestar/contrib/jwt/jwt_token.py\n--- a/litestar/contrib/jwt/jwt_token.py\n+++ b/litestar/contrib/jwt/jwt_token.py\n@@ -1,13 +1,18 @@\n from __future__ import annotations\n \n+import dataclasses\n from dataclasses import asdict, dataclass, field\n from datetime import datetime, timezone\n-from typing import cast\n+from typing import TYPE_CHECKING, Any, cast\n \n from jose import JWSError, JWTError, jwt\n \n from litestar.exceptions import ImproperlyConfiguredException, NotAuthorizedException\n \n+if TYPE_CHECKING:\n+ from typing_extensions import Self\n+\n+\n __all__ = (\"Token\",)\n \n \n@@ -42,6 +47,8 @@\n \"\"\"Audience - intended audience.\"\"\"\n jti: str | None = field(default=None)\n \"\"\"JWT ID - a unique identifier of the JWT between different issuers.\"\"\"\n+ extras: dict[str, Any] = field(default_factory=dict)\n+ \"\"\"Extra fields that were found on the JWT token.\"\"\"\n \n def __post_init__(self) -> None:\n if len(self.sub) < 1:\n@@ -63,8 +70,8 @@\n else:\n raise ImproperlyConfiguredException(\"iat must be a current or past time\")\n \n- @staticmethod\n- def decode(encoded_token: str, secret: str | dict[str, str], algorithm: str) -> Token:\n+ @classmethod\n+ def decode(cls, encoded_token: str, secret: str | dict[str, str], algorithm: str) -> Self:\n \"\"\"Decode a passed in token string and returns a Token instance.\n \n Args:\n@@ -82,7 +89,12 @@\n payload = jwt.decode(token=encoded_token, key=secret, algorithms=[algorithm], options={\"verify_aud\": False})\n exp = datetime.fromtimestamp(payload.pop(\"exp\"), tz=timezone.utc)\n iat = datetime.fromtimestamp(payload.pop(\"iat\"), tz=timezone.utc)\n- return Token(exp=exp, iat=iat, **payload)\n+ field_names = {f.name for f in dataclasses.fields(Token)}\n+ extra_fields = payload.keys() - field_names\n+ extras = payload.pop(\"extras\", {})\n+ for key in extra_fields:\n+ extras[key] = payload.pop(key)\n+ return cls(exp=exp, iat=iat, **payload, extras=extras)\n except (KeyError, JWTError, ImproperlyConfiguredException) as e:\n raise NotAuthorizedException(\"Invalid token\") from e\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). 
I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom dataclasses import asdict, dataclass, field\nfrom datetime import datetime, timezone\nfrom typing import cast\n\nfrom jose import JWSError, JWTError, jwt\n\nfrom litestar.exceptions import ImproperlyConfiguredException, NotAuthorizedException\n\n__all__ = (\"Token\",)\n\n\ndef _normalize_datetime(value: datetime) -> datetime:\n \"\"\"Convert the given value into UTC and strip microseconds.\n\n Args:\n value: A datetime instance\n\n Returns:\n A datetime instance\n \"\"\"\n if value.tzinfo is not None:\n value.astimezone(timezone.utc)\n\n return value.replace(microsecond=0)\n\n\n@dataclass\nclass Token:\n \"\"\"JWT Token DTO.\"\"\"\n\n exp: datetime\n \"\"\"Expiration - datetime for token expiration.\"\"\"\n sub: str\n \"\"\"Subject - usually a unique identifier of the user or equivalent entity.\"\"\"\n iat: datetime = field(default_factory=lambda: _normalize_datetime(datetime.now(timezone.utc)))\n \"\"\"Issued at - should always be current now.\"\"\"\n iss: str | None = field(default=None)\n \"\"\"Issuer - optional unique identifier for the issuer.\"\"\"\n aud: str | None = field(default=None)\n \"\"\"Audience - intended audience.\"\"\"\n jti: str | None = field(default=None)\n \"\"\"JWT ID - a unique identifier of the JWT between different issuers.\"\"\"\n\n def __post_init__(self) -> None:\n if len(self.sub) < 1:\n raise ImproperlyConfiguredException(\"sub must be a string with a length greater than 0\")\n\n if isinstance(self.exp, datetime) and (\n (exp := _normalize_datetime(self.exp))\n and exp.timestamp() >= _normalize_datetime(datetime.now(timezone.utc)).timestamp()\n ):\n self.exp = exp\n else:\n raise ImproperlyConfiguredException(\"exp value must be a datetime in the future\")\n\n if isinstance(self.iat, datetime) and (\n (iat := _normalize_datetime(self.iat))\n and iat.timestamp() <= _normalize_datetime(datetime.now(timezone.utc)).timestamp()\n ):\n self.iat = iat\n else:\n raise ImproperlyConfiguredException(\"iat must be a current or past time\")\n\n @staticmethod\n def decode(encoded_token: str, secret: str | dict[str, str], algorithm: str) -> Token:\n \"\"\"Decode a passed in token string and returns a Token instance.\n\n Args:\n encoded_token: A base64 string containing an encoded JWT.\n secret: The secret with which the JWT is encoded. 
It may optionally be an individual JWK or JWS set dict\n algorithm: The algorithm used to encode the JWT.\n\n Returns:\n A decoded Token instance.\n\n Raises:\n NotAuthorizedException: If the token is invalid.\n \"\"\"\n try:\n payload = jwt.decode(token=encoded_token, key=secret, algorithms=[algorithm], options={\"verify_aud\": False})\n exp = datetime.fromtimestamp(payload.pop(\"exp\"), tz=timezone.utc)\n iat = datetime.fromtimestamp(payload.pop(\"iat\"), tz=timezone.utc)\n return Token(exp=exp, iat=iat, **payload)\n except (KeyError, JWTError, ImproperlyConfiguredException) as e:\n raise NotAuthorizedException(\"Invalid token\") from e\n\n def encode(self, secret: str, algorithm: str) -> str:\n \"\"\"Encode the token instance into a string.\n\n Args:\n secret: The secret with which the JWT is encoded.\n algorithm: The algorithm used to encode the JWT.\n\n Returns:\n An encoded token string.\n\n Raises:\n ImproperlyConfiguredException: If encoding fails.\n \"\"\"\n try:\n return cast(\n \"str\",\n jwt.encode(\n claims={k: v for k, v in asdict(self).items() if v is not None}, key=secret, algorithm=algorithm\n ),\n )\n except (JWTError, JWSError) as e:\n raise ImproperlyConfiguredException(\"Failed to encode token\") from e\n", "path": "litestar/contrib/jwt/jwt_token.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport dataclasses\nfrom dataclasses import asdict, dataclass, field\nfrom datetime import datetime, timezone\nfrom typing import TYPE_CHECKING, Any, cast\n\nfrom jose import JWSError, JWTError, jwt\n\nfrom litestar.exceptions import ImproperlyConfiguredException, NotAuthorizedException\n\nif TYPE_CHECKING:\n from typing_extensions import Self\n\n\n__all__ = (\"Token\",)\n\n\ndef _normalize_datetime(value: datetime) -> datetime:\n \"\"\"Convert the given value into UTC and strip microseconds.\n\n Args:\n value: A datetime instance\n\n Returns:\n A datetime instance\n \"\"\"\n if value.tzinfo is not None:\n value.astimezone(timezone.utc)\n\n return value.replace(microsecond=0)\n\n\n@dataclass\nclass Token:\n \"\"\"JWT Token DTO.\"\"\"\n\n exp: datetime\n \"\"\"Expiration - datetime for token expiration.\"\"\"\n sub: str\n \"\"\"Subject - usually a unique identifier of the user or equivalent entity.\"\"\"\n iat: datetime = field(default_factory=lambda: _normalize_datetime(datetime.now(timezone.utc)))\n \"\"\"Issued at - should always be current now.\"\"\"\n iss: str | None = field(default=None)\n \"\"\"Issuer - optional unique identifier for the issuer.\"\"\"\n aud: str | None = field(default=None)\n \"\"\"Audience - intended audience.\"\"\"\n jti: str | None = field(default=None)\n \"\"\"JWT ID - a unique identifier of the JWT between different issuers.\"\"\"\n extras: dict[str, Any] = field(default_factory=dict)\n \"\"\"Extra fields that were found on the JWT token.\"\"\"\n\n def __post_init__(self) -> None:\n if len(self.sub) < 1:\n raise ImproperlyConfiguredException(\"sub must be a string with a length greater than 0\")\n\n if isinstance(self.exp, datetime) and (\n (exp := _normalize_datetime(self.exp))\n and exp.timestamp() >= _normalize_datetime(datetime.now(timezone.utc)).timestamp()\n ):\n self.exp = exp\n else:\n raise ImproperlyConfiguredException(\"exp value must be a datetime in the future\")\n\n if isinstance(self.iat, datetime) and (\n (iat := _normalize_datetime(self.iat))\n and iat.timestamp() <= _normalize_datetime(datetime.now(timezone.utc)).timestamp()\n ):\n self.iat = iat\n else:\n raise ImproperlyConfiguredException(\"iat must be a 
current or past time\")\n\n @classmethod\n def decode(cls, encoded_token: str, secret: str | dict[str, str], algorithm: str) -> Self:\n \"\"\"Decode a passed in token string and returns a Token instance.\n\n Args:\n encoded_token: A base64 string containing an encoded JWT.\n secret: The secret with which the JWT is encoded. It may optionally be an individual JWK or JWS set dict\n algorithm: The algorithm used to encode the JWT.\n\n Returns:\n A decoded Token instance.\n\n Raises:\n NotAuthorizedException: If the token is invalid.\n \"\"\"\n try:\n payload = jwt.decode(token=encoded_token, key=secret, algorithms=[algorithm], options={\"verify_aud\": False})\n exp = datetime.fromtimestamp(payload.pop(\"exp\"), tz=timezone.utc)\n iat = datetime.fromtimestamp(payload.pop(\"iat\"), tz=timezone.utc)\n field_names = {f.name for f in dataclasses.fields(Token)}\n extra_fields = payload.keys() - field_names\n extras = payload.pop(\"extras\", {})\n for key in extra_fields:\n extras[key] = payload.pop(key)\n return cls(exp=exp, iat=iat, **payload, extras=extras)\n except (KeyError, JWTError, ImproperlyConfiguredException) as e:\n raise NotAuthorizedException(\"Invalid token\") from e\n\n def encode(self, secret: str, algorithm: str) -> str:\n \"\"\"Encode the token instance into a string.\n\n Args:\n secret: The secret with which the JWT is encoded.\n algorithm: The algorithm used to encode the JWT.\n\n Returns:\n An encoded token string.\n\n Raises:\n ImproperlyConfiguredException: If encoding fails.\n \"\"\"\n try:\n return cast(\n \"str\",\n jwt.encode(\n claims={k: v for k, v in asdict(self).items() if v is not None}, key=secret, algorithm=algorithm\n ),\n )\n except (JWTError, JWSError) as e:\n raise ImproperlyConfiguredException(\"Failed to encode token\") from e\n", "path": "litestar/contrib/jwt/jwt_token.py"}]} | 1,532 | 575 |
gh_patches_debug_26260 | rasdani/github-patches | git_diff | genialis__resolwe-196 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Elasticsearch returns paginated results when querying/mapping features using RESDK
In resolwe-bio tools/goea.py `org_features = res.feature.filter(source=args.source_db, query=genes)` should return all genes, not just the first 10.
--- END ISSUE ---
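Editorial note on the issue above (not part of the dataset record): Elasticsearch returns only 10 hits per query by default, which matches the truncated result described here. A minimal elasticsearch-dsl sketch of the general workaround (raising the requested result size); the index and field names are hypothetical, and the project's actual patch appears further below and differs in detail:

```python
# Sketch only: ask Elasticsearch for up to 10000 hits instead of the default 10.
from elasticsearch_dsl import Search

search = Search(index="feature").query("match", source="ENSEMBL")  # hypothetical index/field names
search = search.extra(size=10000)  # 10000 is Elasticsearch's default max result window
results = search.execute()
```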
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `resolwe/elastic/viewsets.py`
Content:
```
1 """.. Ignore pydocstyle D400.
2
3 ================
4 Elastic Viewsets
5 ================
6
7 .. autoclass:: resolwe.elastic.viewsets.ElasticSearchMixin
8 :members:
9
10 """
11 from __future__ import absolute_import, division, print_function, unicode_literals
12
13 from elasticsearch_dsl.query import Q
14
15 from django.conf import settings
16 from django.contrib.auth import get_user_model
17
18 from rest_framework.response import Response
19 from rest_framework.viewsets import GenericViewSet
20
21 __all__ = (
22 'ElasticSearchMixin',
23 'PaginationMixin',
24 'ElasticSearchBaseViewSet',
25 )
26
27
28 class ElasticSearchMixin(object):
29 """Mixin to use Django REST Framework with ElasticSearch based querysets.
30
31 This mixin adds following methods:
32 * :func:`~ElasticSearchMixin.order_search`
33 * :func:`~ElasticSearchMixin.filter_search`
34 * :func:`~ElasticSearchMixin.filter_permissions`
35
36 """
37
38 filtering_fields = []
39 ordering_fields = []
40 ordering = None
41
42 def get_query_param(self, key, default=None):
43 """Get query parameter uniformly for GET and POST requests."""
44 value = self.request.query_params.get(key, None)
45 if value is None:
46 value = self.request.data.get(key, None)
47 if value is None:
48 value = default
49 return value
50
51 def order_search(self, search):
52 """Order given search by the ordering parameter given in request.
53
54 :param search: ElasticSearch query object
55
56 """
57 ordering = self.get_query_param('ordering', self.ordering)
58
59 ordering_field = ordering.lstrip('-')
60 if ordering_field not in self.ordering_fields:
61 raise KeyError('Ordering by `{}` is not supported.'.format(ordering_field))
62
63 return search.sort(ordering)
64
65 def filter_search(self, search):
66 """Filter given search by the filter parameter given in request.
67
68 :param search: ElasticSearch query object
69
70 """
71 for field in self.filtering_fields:
72 value = self.get_query_param(field, None)
73 if value:
74 if isinstance(value, list):
75 filters = [Q('match', **{field: item}) for item in value]
76 search = search.query('bool', should=filters)
77 else:
78 search = search.query('wildcard', **{field: value})
79
80 return search
81
82 def filter_permissions(self, search):
83 """Filter given query based on permissions of the user in the request.
84
85 :param search: ElasticSearch query object
86
87 """
88 user = self.request.user
89 if user.is_superuser:
90 return search
91 if user.is_anonymous():
92 user_model = get_user_model()
93 user = user_model.objects.get(**{user_model.USERNAME_FIELD: settings.ANONYMOUS_USER_NAME})
94
95 filters = [Q('match', users_with_permissions=user.pk)]
96 filters.extend([
97 Q('match', groups_with_permissions=group.pk) for group in user.groups.all()
98 ])
99
100 # `minimum_should_match` is set to 1 by default
101 return search.query('bool', should=filters)
102
103
104 class PaginationMixin(object):
105 """Mixin for making paginated response in case pagination parameters are provided."""
106
107 def paginate_response(self, queryset):
108 """Optionally return paginated response.
109
110 If pagination parameters are provided in the request, then paginated response
111 is returned, otherwise response is not paginated.
112
113 """
114 page = self.paginate_queryset(queryset)
115 if page is not None:
116 serializer = self.get_serializer(page, many=True)
117 return self.get_paginated_response(serializer.data)
118
119 serializer = self.get_serializer(queryset, many=True)
120 return Response(serializer.data)
121
122
123 class ElasticSearchBaseViewSet(PaginationMixin, ElasticSearchMixin, GenericViewSet):
124 """Base ViewSet for ElasticSearch based views.
125
126 This ViewSet creates search based on ``document_class`` parameter, specified
127 in subclass and:
128
129 * filter it by query parameters
130 * apply ordering
131 * filter permissions
132 * apply pagination
133
134 .. IMPORTANT::
135
136 Both ``POST`` and ``GET`` requests are supported.
137 """
138
139 document_class = None
140
141 def custom_filter(self, search):
142 """Perform custom search filtering.
143
144 This method is intended to be overriden in subclasses.
145 """
146 return search
147
148 def list_with_post(self, request):
149 """Endpoint handler."""
150 search = self.document_class().search() # pylint: disable=not-callable
151
152 search = self.custom_filter(search)
153
154 search = self.filter_search(search)
155 search = self.order_search(search)
156 search = self.filter_permissions(search)
157
158 return self.paginate_response(search)
159
160 def list(self, request):
161 """Endpoint handler."""
162 return self.list_with_post(request)
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/resolwe/elastic/viewsets.py b/resolwe/elastic/viewsets.py
--- a/resolwe/elastic/viewsets.py
+++ b/resolwe/elastic/viewsets.py
@@ -15,6 +15,7 @@
from django.conf import settings
from django.contrib.auth import get_user_model
+from rest_framework.exceptions import APIException
from rest_framework.response import Response
from rest_framework.viewsets import GenericViewSet
@@ -24,6 +25,16 @@
'ElasticSearchBaseViewSet',
)
+ELASTICSEARCH_SIZE = 10000 # maximum number of results returned by ElasticSearch
+
+
+class TooManyResults(APIException):
+ """Exception when elastic query returns more than ``ELASTICSEARCH_SIZE`` results."""
+
+ status_code = 400
+ default_detail = 'Query returned too many results. Please, add more filters or use pagination.'
+ default_code = 'bad_request'
+
class ElasticSearchMixin(object):
"""Mixin to use Django REST Framework with ElasticSearch based querysets.
@@ -155,6 +166,11 @@
search = self.order_search(search)
search = self.filter_permissions(search)
+ if search.count() > ELASTICSEARCH_SIZE:
+ raise TooManyResults()
+
+ search = search.extra(size=ELASTICSEARCH_SIZE)
+
return self.paginate_response(search)
def list(self, request):
| {"golden_diff": "diff --git a/resolwe/elastic/viewsets.py b/resolwe/elastic/viewsets.py\n--- a/resolwe/elastic/viewsets.py\n+++ b/resolwe/elastic/viewsets.py\n@@ -15,6 +15,7 @@\n from django.conf import settings\n from django.contrib.auth import get_user_model\n \n+from rest_framework.exceptions import APIException\n from rest_framework.response import Response\n from rest_framework.viewsets import GenericViewSet\n \n@@ -24,6 +25,16 @@\n 'ElasticSearchBaseViewSet',\n )\n \n+ELASTICSEARCH_SIZE = 10000 # maximum number of results returned by ElasticSearch\n+\n+\n+class TooManyResults(APIException):\n+ \"\"\"Exception when elastic query returns more than ``ELASTICSEARCH_SIZE`` results.\"\"\"\n+\n+ status_code = 400\n+ default_detail = 'Query returned too many results. Please, add more filters or use pagination.'\n+ default_code = 'bad_request'\n+\n \n class ElasticSearchMixin(object):\n \"\"\"Mixin to use Django REST Framework with ElasticSearch based querysets.\n@@ -155,6 +166,11 @@\n search = self.order_search(search)\n search = self.filter_permissions(search)\n \n+ if search.count() > ELASTICSEARCH_SIZE:\n+ raise TooManyResults()\n+\n+ search = search.extra(size=ELASTICSEARCH_SIZE)\n+\n return self.paginate_response(search)\n \n def list(self, request):\n", "issue": "Elasticserach returns paginated results when querying/mapping features using RESDK\nIn resolwe-bio tools/goea.py `org_features = res.feature.filter(source=args.source_db, query=genes)` should return all genes, not just the first 10.\n", "before_files": [{"content": "\"\"\".. Ignore pydocstyle D400.\n\n================\nElastic Viewsets\n================\n\n.. autoclass:: resolwe.elastic.viewsets.ElasticSearchMixin\n :members:\n\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom elasticsearch_dsl.query import Q\n\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\n\nfrom rest_framework.response import Response\nfrom rest_framework.viewsets import GenericViewSet\n\n__all__ = (\n 'ElasticSearchMixin',\n 'PaginationMixin',\n 'ElasticSearchBaseViewSet',\n)\n\n\nclass ElasticSearchMixin(object):\n \"\"\"Mixin to use Django REST Framework with ElasticSearch based querysets.\n\n This mixin adds following methods:\n * :func:`~ElasticSearchMixin.order_search`\n * :func:`~ElasticSearchMixin.filter_search`\n * :func:`~ElasticSearchMixin.filter_permissions`\n\n \"\"\"\n\n filtering_fields = []\n ordering_fields = []\n ordering = None\n\n def get_query_param(self, key, default=None):\n \"\"\"Get query parameter uniformly for GET and POST requests.\"\"\"\n value = self.request.query_params.get(key, None)\n if value is None:\n value = self.request.data.get(key, None)\n if value is None:\n value = default\n return value\n\n def order_search(self, search):\n \"\"\"Order given search by the ordering parameter given in request.\n\n :param search: ElasticSearch query object\n\n \"\"\"\n ordering = self.get_query_param('ordering', self.ordering)\n\n ordering_field = ordering.lstrip('-')\n if ordering_field not in self.ordering_fields:\n raise KeyError('Ordering by `{}` is not supported.'.format(ordering_field))\n\n return search.sort(ordering)\n\n def filter_search(self, search):\n \"\"\"Filter given search by the filter parameter given in request.\n\n :param search: ElasticSearch query object\n\n \"\"\"\n for field in self.filtering_fields:\n value = self.get_query_param(field, None)\n if value:\n if isinstance(value, list):\n filters = [Q('match', **{field: item}) 
for item in value]\n search = search.query('bool', should=filters)\n else:\n search = search.query('wildcard', **{field: value})\n\n return search\n\n def filter_permissions(self, search):\n \"\"\"Filter given query based on permissions of the user in the request.\n\n :param search: ElasticSearch query object\n\n \"\"\"\n user = self.request.user\n if user.is_superuser:\n return search\n if user.is_anonymous():\n user_model = get_user_model()\n user = user_model.objects.get(**{user_model.USERNAME_FIELD: settings.ANONYMOUS_USER_NAME})\n\n filters = [Q('match', users_with_permissions=user.pk)]\n filters.extend([\n Q('match', groups_with_permissions=group.pk) for group in user.groups.all()\n ])\n\n # `minimum_should_match` is set to 1 by default\n return search.query('bool', should=filters)\n\n\nclass PaginationMixin(object):\n \"\"\"Mixin for making paginated response in case pagination parameters are provided.\"\"\"\n\n def paginate_response(self, queryset):\n \"\"\"Optionally return paginated response.\n\n If pagination parameters are provided in the request, then paginated response\n is returned, otherwise response is not paginated.\n\n \"\"\"\n page = self.paginate_queryset(queryset)\n if page is not None:\n serializer = self.get_serializer(page, many=True)\n return self.get_paginated_response(serializer.data)\n\n serializer = self.get_serializer(queryset, many=True)\n return Response(serializer.data)\n\n\nclass ElasticSearchBaseViewSet(PaginationMixin, ElasticSearchMixin, GenericViewSet):\n \"\"\"Base ViewSet for ElasticSearch based views.\n\n This ViewSet creates search based on ``document_class`` parameter, specified\n in subclass and:\n\n * filter it by query parameters\n * apply ordering\n * filter permissions\n * apply pagination\n\n .. IMPORTANT::\n\n Both ``POST`` and ``GET`` requests are supported.\n \"\"\"\n\n document_class = None\n\n def custom_filter(self, search):\n \"\"\"Perform custom search filtering.\n\n This method is intended to be overriden in subclasses.\n \"\"\"\n return search\n\n def list_with_post(self, request):\n \"\"\"Endpoint handler.\"\"\"\n search = self.document_class().search() # pylint: disable=not-callable\n\n search = self.custom_filter(search)\n\n search = self.filter_search(search)\n search = self.order_search(search)\n search = self.filter_permissions(search)\n\n return self.paginate_response(search)\n\n def list(self, request):\n \"\"\"Endpoint handler.\"\"\"\n return self.list_with_post(request)\n", "path": "resolwe/elastic/viewsets.py"}], "after_files": [{"content": "\"\"\".. Ignore pydocstyle D400.\n\n================\nElastic Viewsets\n================\n\n.. autoclass:: resolwe.elastic.viewsets.ElasticSearchMixin\n :members:\n\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom elasticsearch_dsl.query import Q\n\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\n\nfrom rest_framework.exceptions import APIException\nfrom rest_framework.response import Response\nfrom rest_framework.viewsets import GenericViewSet\n\n__all__ = (\n 'ElasticSearchMixin',\n 'PaginationMixin',\n 'ElasticSearchBaseViewSet',\n)\n\nELASTICSEARCH_SIZE = 10000 # maximum number of results returned by ElasticSearch\n\n\nclass TooManyResults(APIException):\n \"\"\"Exception when elastic query returns more than ``ELASTICSEARCH_SIZE`` results.\"\"\"\n\n status_code = 400\n default_detail = 'Query returned too many results. 
Please, add more filters or use pagination.'\n default_code = 'bad_request'\n\n\nclass ElasticSearchMixin(object):\n \"\"\"Mixin to use Django REST Framework with ElasticSearch based querysets.\n\n This mixin adds following methods:\n * :func:`~ElasticSearchMixin.order_search`\n * :func:`~ElasticSearchMixin.filter_search`\n * :func:`~ElasticSearchMixin.filter_permissions`\n\n \"\"\"\n\n filtering_fields = []\n ordering_fields = []\n ordering = None\n\n def get_query_param(self, key, default=None):\n \"\"\"Get query parameter uniformly for GET and POST requests.\"\"\"\n value = self.request.query_params.get(key, None)\n if value is None:\n value = self.request.data.get(key, None)\n if value is None:\n value = default\n return value\n\n def order_search(self, search):\n \"\"\"Order given search by the ordering parameter given in request.\n\n :param search: ElasticSearch query object\n\n \"\"\"\n ordering = self.get_query_param('ordering', self.ordering)\n\n ordering_field = ordering.lstrip('-')\n if ordering_field not in self.ordering_fields:\n raise KeyError('Ordering by `{}` is not supported.'.format(ordering_field))\n\n return search.sort(ordering)\n\n def filter_search(self, search):\n \"\"\"Filter given search by the filter parameter given in request.\n\n :param search: ElasticSearch query object\n\n \"\"\"\n for field in self.filtering_fields:\n value = self.get_query_param(field, None)\n if value:\n if isinstance(value, list):\n filters = [Q('match', **{field: item}) for item in value]\n search = search.query('bool', should=filters)\n else:\n search = search.query('wildcard', **{field: value})\n\n return search\n\n def filter_permissions(self, search):\n \"\"\"Filter given query based on permissions of the user in the request.\n\n :param search: ElasticSearch query object\n\n \"\"\"\n user = self.request.user\n if user.is_superuser:\n return search\n if user.is_anonymous():\n user_model = get_user_model()\n user = user_model.objects.get(**{user_model.USERNAME_FIELD: settings.ANONYMOUS_USER_NAME})\n\n filters = [Q('match', users_with_permissions=user.pk)]\n filters.extend([\n Q('match', groups_with_permissions=group.pk) for group in user.groups.all()\n ])\n\n # `minimum_should_match` is set to 1 by default\n return search.query('bool', should=filters)\n\n\nclass PaginationMixin(object):\n \"\"\"Mixin for making paginated response in case pagination parameters are provided.\"\"\"\n\n def paginate_response(self, queryset):\n \"\"\"Optionally return paginated response.\n\n If pagination parameters are provided in the request, then paginated response\n is returned, otherwise response is not paginated.\n\n \"\"\"\n page = self.paginate_queryset(queryset)\n if page is not None:\n serializer = self.get_serializer(page, many=True)\n return self.get_paginated_response(serializer.data)\n\n serializer = self.get_serializer(queryset, many=True)\n return Response(serializer.data)\n\n\nclass ElasticSearchBaseViewSet(PaginationMixin, ElasticSearchMixin, GenericViewSet):\n \"\"\"Base ViewSet for ElasticSearch based views.\n\n This ViewSet creates search based on ``document_class`` parameter, specified\n in subclass and:\n\n * filter it by query parameters\n * apply ordering\n * filter permissions\n * apply pagination\n\n .. 
IMPORTANT::\n\n Both ``POST`` and ``GET`` requests are supported.\n \"\"\"\n\n document_class = None\n\n def custom_filter(self, search):\n \"\"\"Perform custom search filtering.\n\n This method is intended to be overriden in subclasses.\n \"\"\"\n return search\n\n def list_with_post(self, request):\n \"\"\"Endpoint handler.\"\"\"\n search = self.document_class().search() # pylint: disable=not-callable\n\n search = self.custom_filter(search)\n\n search = self.filter_search(search)\n search = self.order_search(search)\n search = self.filter_permissions(search)\n\n if search.count() > ELASTICSEARCH_SIZE:\n raise TooManyResults()\n\n search = search.extra(size=ELASTICSEARCH_SIZE)\n\n return self.paginate_response(search)\n\n def list(self, request):\n \"\"\"Endpoint handler.\"\"\"\n return self.list_with_post(request)\n", "path": "resolwe/elastic/viewsets.py"}]} | 1,726 | 320 |
gh_patches_debug_28671 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3348 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider avis is broken
During the global build at 2021-06-02-14-42-40, spider **avis** failed with **4383 features** and **36 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/avis.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/avis.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/avis.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/avis.py`
Content:
```
1 import scrapy
2 import re
3
4 from locations.items import GeojsonPointItem
5 from locations.hours import OpeningHours
6
7
8 DAY_MAPPING = {
9 'Mon': 'Mo',
10 'Tue': 'Tu',
11 'Wed': 'We',
12 'Thu': 'Th',
13 'Fri': 'Fr',
14 'Sat': 'Sa',
15 'Sun': 'Su'
16 }
17 DAYS = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
18
19
20 class AvisSpider(scrapy.Spider):
21
22 name = "avis"
23 item_attributes = { 'brand': "Avis", 'brand_wikidata': "Q791136" }
24 download_delay = 0.5
25 allowed_domains = [
26 "avis.com",
27 ]
28 start_urls = (
29 'https://www.avis.com/en/locations/avisworldwide',
30 )
31
32 def parse_hours(self, hours):
33 "Sun - Sat 7:00 AM - 10:00 PM"
34 opening_hours = OpeningHours()
35 hours = [h.strip() for h in hours.split(';')]
36
37 for hour in hours:
38 if hour == "Sun - Sat open 24 hrs":
39 return "24/7"
40 range_match = re.search(r'([A-Za-z]{3})\s-\s([A-Za-z]{3})\s([\d:\sAMP]+)\s-\s([\d:\sAMP]+)', hour)
41 if range_match:
42 start_day, end_day, start_time, end_time = range_match.groups()
43 else:
44 single_match = re.search(r'([A-Za-z]{3})\s([\d:\sAMP]+)\s-\s([\d:\sAMP]+)', hour)
45 if not single_match:
46 continue
47 start_day, start_time, end_time = single_match.groups()
48 end_day = start_day
49
50 for day in DAYS[DAYS.index(start_day):DAYS.index(end_day)+1]:
51 opening_hours.add_range(day=DAY_MAPPING[day],
52 open_time=start_time.strip(),
53 close_time=end_time.strip(),
54 time_format='%I:%M %p')
55 return opening_hours.as_opening_hours()
56
57 def parse_store(self, response):
58 if response.url == 'https://www.avis.com/en/error/500':
59 # some closed locations get redirected to this error page
60 return
61
62 def clean(val):
63 if val:
64 return val.strip(', ')
65 return val
66
67 ref = response.url.split('/')[-1]
68
69 properties = {
70 'name': clean(response.xpath('//h2/span[@itemprop="name"]/text()').extract_first()),
71 'addr_full': clean(response.xpath('normalize-space(//span[@itemprop="streetAddress"]/text())').extract_first()),
72 'phone': response.xpath('normalize-space(//span[@itemprop="telephone"]/text())').extract_first(),
73 'city': clean(response.xpath('normalize-space(//span[@itemprop="addressLocality"]/text())').extract_first()),
74 'state': clean(response.xpath('normalize-space(//span[@itemprop="addressRegion"]/text())').extract_first()),
75 'postcode': clean(response.xpath('normalize-space(//span[@itemprop="postalCode"]/text())').extract_first()),
76 'country': clean(response.xpath('normalize-space(//span[@itemprop="addressCountry"]/text())').extract_first()),
77 'ref': ref,
78 'website': response.url,
79 'lat': float(response.xpath('//meta[@itemprop="latitude"]/@content').extract_first()),
80 'lon': float(response.xpath('//meta[@itemprop="longitude"]/@content').extract_first()),
81 }
82 hours = response.xpath('//meta[@itemprop="openingHours"]/@content').extract_first()
83 if hours:
84 properties['opening_hours'] = self.parse_hours(hours)
85 yield GeojsonPointItem(**properties)
86
87 def parse_state(self, response):
88 urls = response.xpath('//ul[contains(@class, "location-list-ul")]//li/a/@href').extract()
89
90 if not urls:
91 urls = set(response.xpath('//ul[contains(@class, "LocContainer")]//a/@href').extract())
92 urls = [u for u in urls if 'javascript:void' not in u]
93
94 location_list = re.compile("^/en/locations/(?:us|ca|au)/[a-z]{2}/[^/]+$")
95 us_single_location = re.compile(r'/en/locations/(?:us|ca|au)/[a-z]{2}/[^/]+/[^/]+$')
96 single_location = re.compile(r'/en/locations/(?!us|ca|au)[a-z]{2}/[^/]+/[^/]+$')
97
98 for url in urls:
99 if single_location.match(url) or us_single_location.match(url):
100 yield scrapy.Request(response.urljoin(url), callback=self.parse_store)
101 elif location_list.match(url):
102 # skip these, we get them already
103 continue
104 elif 'xx' in url:
105 continue
106
107 def parse_country(self,response):
108 urls = response.xpath('//div[contains(@class,"country-wrapper")]//li/a/@href').extract()
109
110 for url in urls:
111 yield scrapy.Request(response.urljoin(url), callback=self.parse_state)
112
113 def parse(self, response):
114 urls = response.xpath('//div[@class="wl-location-state"]//li/a/@href').extract()
115
116 for url in urls:
117 yield scrapy.Request(response.urljoin(url), callback=self.parse_country)
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/avis.py b/locations/spiders/avis.py
--- a/locations/spiders/avis.py
+++ b/locations/spiders/avis.py
@@ -66,6 +66,15 @@
ref = response.url.split('/')[-1]
+ latitude = None
+ longitude = None
+
+ if response.xpath('//meta[@itemprop="latitude"]/@content').extract_first() is not None:
+ latitude = float(response.xpath('//meta[@itemprop="latitude"]/@content').extract_first())
+
+ if response.xpath('//meta[@itemprop="longitude"]/@content').extract_first() is not None:
+ longitude = float(response.xpath('//meta[@itemprop="longitude"]/@content').extract_first())
+
properties = {
'name': clean(response.xpath('//h2/span[@itemprop="name"]/text()').extract_first()),
'addr_full': clean(response.xpath('normalize-space(//span[@itemprop="streetAddress"]/text())').extract_first()),
@@ -76,8 +85,8 @@
'country': clean(response.xpath('normalize-space(//span[@itemprop="addressCountry"]/text())').extract_first()),
'ref': ref,
'website': response.url,
- 'lat': float(response.xpath('//meta[@itemprop="latitude"]/@content').extract_first()),
- 'lon': float(response.xpath('//meta[@itemprop="longitude"]/@content').extract_first()),
+ 'lat': latitude,
+ 'lon': longitude,
}
hours = response.xpath('//meta[@itemprop="openingHours"]/@content').extract_first()
if hours:
| {"golden_diff": "diff --git a/locations/spiders/avis.py b/locations/spiders/avis.py\n--- a/locations/spiders/avis.py\n+++ b/locations/spiders/avis.py\n@@ -66,6 +66,15 @@\n \n ref = response.url.split('/')[-1]\n \n+ latitude = None\n+ longitude = None\n+\n+ if response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first() is not None:\n+ latitude = float(response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first())\n+\n+ if response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first() is not None:\n+ longitude = float(response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first())\n+\n properties = {\n 'name': clean(response.xpath('//h2/span[@itemprop=\"name\"]/text()').extract_first()),\n 'addr_full': clean(response.xpath('normalize-space(//span[@itemprop=\"streetAddress\"]/text())').extract_first()),\n@@ -76,8 +85,8 @@\n 'country': clean(response.xpath('normalize-space(//span[@itemprop=\"addressCountry\"]/text())').extract_first()),\n 'ref': ref,\n 'website': response.url,\n- 'lat': float(response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first()),\n- 'lon': float(response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first()),\n+ 'lat': latitude,\n+ 'lon': longitude,\n }\n hours = response.xpath('//meta[@itemprop=\"openingHours\"]/@content').extract_first()\n if hours:\n", "issue": "Spider avis is broken\nDuring the global build at 2021-06-02-14-42-40, spider **avis** failed with **4383 features** and **36 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/avis.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/avis.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/avis.geojson))\n", "before_files": [{"content": "import scrapy\nimport re\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nDAY_MAPPING = {\n 'Mon': 'Mo',\n 'Tue': 'Tu',\n 'Wed': 'We',\n 'Thu': 'Th',\n 'Fri': 'Fr',\n 'Sat': 'Sa',\n 'Sun': 'Su'\n}\nDAYS = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']\n\n\nclass AvisSpider(scrapy.Spider):\n\n name = \"avis\"\n item_attributes = { 'brand': \"Avis\", 'brand_wikidata': \"Q791136\" }\n download_delay = 0.5\n allowed_domains = [\n \"avis.com\",\n ]\n start_urls = (\n 'https://www.avis.com/en/locations/avisworldwide',\n )\n\n def parse_hours(self, hours):\n \"Sun - Sat 7:00 AM - 10:00 PM\"\n opening_hours = OpeningHours()\n hours = [h.strip() for h in hours.split(';')]\n\n for hour in hours:\n if hour == \"Sun - Sat open 24 hrs\":\n return \"24/7\"\n range_match = re.search(r'([A-Za-z]{3})\\s-\\s([A-Za-z]{3})\\s([\\d:\\sAMP]+)\\s-\\s([\\d:\\sAMP]+)', hour)\n if range_match:\n start_day, end_day, start_time, end_time = range_match.groups()\n else:\n single_match = re.search(r'([A-Za-z]{3})\\s([\\d:\\sAMP]+)\\s-\\s([\\d:\\sAMP]+)', hour)\n if not single_match:\n continue\n start_day, start_time, end_time = single_match.groups()\n end_day = start_day\n\n for day in DAYS[DAYS.index(start_day):DAYS.index(end_day)+1]:\n opening_hours.add_range(day=DAY_MAPPING[day],\n open_time=start_time.strip(),\n close_time=end_time.strip(),\n time_format='%I:%M %p')\n return opening_hours.as_opening_hours()\n\n def parse_store(self, response):\n if response.url == 'https://www.avis.com/en/error/500':\n # some closed locations get redirected to this error page\n return\n\n def clean(val):\n if val:\n return val.strip(', ')\n return 
val\n\n ref = response.url.split('/')[-1]\n\n properties = {\n 'name': clean(response.xpath('//h2/span[@itemprop=\"name\"]/text()').extract_first()),\n 'addr_full': clean(response.xpath('normalize-space(//span[@itemprop=\"streetAddress\"]/text())').extract_first()),\n 'phone': response.xpath('normalize-space(//span[@itemprop=\"telephone\"]/text())').extract_first(),\n 'city': clean(response.xpath('normalize-space(//span[@itemprop=\"addressLocality\"]/text())').extract_first()),\n 'state': clean(response.xpath('normalize-space(//span[@itemprop=\"addressRegion\"]/text())').extract_first()),\n 'postcode': clean(response.xpath('normalize-space(//span[@itemprop=\"postalCode\"]/text())').extract_first()),\n 'country': clean(response.xpath('normalize-space(//span[@itemprop=\"addressCountry\"]/text())').extract_first()),\n 'ref': ref,\n 'website': response.url,\n 'lat': float(response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first()),\n 'lon': float(response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first()),\n }\n hours = response.xpath('//meta[@itemprop=\"openingHours\"]/@content').extract_first()\n if hours:\n properties['opening_hours'] = self.parse_hours(hours)\n yield GeojsonPointItem(**properties)\n\n def parse_state(self, response):\n urls = response.xpath('//ul[contains(@class, \"location-list-ul\")]//li/a/@href').extract()\n\n if not urls:\n urls = set(response.xpath('//ul[contains(@class, \"LocContainer\")]//a/@href').extract())\n urls = [u for u in urls if 'javascript:void' not in u]\n\n location_list = re.compile(\"^/en/locations/(?:us|ca|au)/[a-z]{2}/[^/]+$\")\n us_single_location = re.compile(r'/en/locations/(?:us|ca|au)/[a-z]{2}/[^/]+/[^/]+$')\n single_location = re.compile(r'/en/locations/(?!us|ca|au)[a-z]{2}/[^/]+/[^/]+$')\n\n for url in urls:\n if single_location.match(url) or us_single_location.match(url):\n yield scrapy.Request(response.urljoin(url), callback=self.parse_store)\n elif location_list.match(url):\n # skip these, we get them already\n continue\n elif 'xx' in url:\n continue\n\n def parse_country(self,response):\n urls = response.xpath('//div[contains(@class,\"country-wrapper\")]//li/a/@href').extract()\n\n for url in urls:\n yield scrapy.Request(response.urljoin(url), callback=self.parse_state)\n\n def parse(self, response):\n urls = response.xpath('//div[@class=\"wl-location-state\"]//li/a/@href').extract()\n\n for url in urls:\n yield scrapy.Request(response.urljoin(url), callback=self.parse_country)\n", "path": "locations/spiders/avis.py"}], "after_files": [{"content": "import scrapy\nimport re\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nDAY_MAPPING = {\n 'Mon': 'Mo',\n 'Tue': 'Tu',\n 'Wed': 'We',\n 'Thu': 'Th',\n 'Fri': 'Fr',\n 'Sat': 'Sa',\n 'Sun': 'Su'\n}\nDAYS = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']\n\n\nclass AvisSpider(scrapy.Spider):\n\n name = \"avis\"\n item_attributes = { 'brand': \"Avis\", 'brand_wikidata': \"Q791136\" }\n download_delay = 0.5\n allowed_domains = [\n \"avis.com\",\n ]\n start_urls = (\n 'https://www.avis.com/en/locations/avisworldwide',\n )\n\n def parse_hours(self, hours):\n \"Sun - Sat 7:00 AM - 10:00 PM\"\n opening_hours = OpeningHours()\n hours = [h.strip() for h in hours.split(';')]\n\n for hour in hours:\n if hour == \"Sun - Sat open 24 hrs\":\n return \"24/7\"\n range_match = re.search(r'([A-Za-z]{3})\\s-\\s([A-Za-z]{3})\\s([\\d:\\sAMP]+)\\s-\\s([\\d:\\sAMP]+)', hour)\n if range_match:\n start_day, end_day, start_time, end_time = 
range_match.groups()\n else:\n single_match = re.search(r'([A-Za-z]{3})\\s([\\d:\\sAMP]+)\\s-\\s([\\d:\\sAMP]+)', hour)\n if not single_match:\n continue\n start_day, start_time, end_time = single_match.groups()\n end_day = start_day\n\n for day in DAYS[DAYS.index(start_day):DAYS.index(end_day)+1]:\n opening_hours.add_range(day=DAY_MAPPING[day],\n open_time=start_time.strip(),\n close_time=end_time.strip(),\n time_format='%I:%M %p')\n return opening_hours.as_opening_hours()\n\n def parse_store(self, response):\n if response.url == 'https://www.avis.com/en/error/500':\n # some closed locations get redirected to this error page\n return\n\n def clean(val):\n if val:\n return val.strip(', ')\n return val\n\n ref = response.url.split('/')[-1]\n\n latitude = None\n longitude = None\n\n if response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first() is not None:\n latitude = float(response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first())\n\n if response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first() is not None:\n longitude = float(response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first())\n\n properties = {\n 'name': clean(response.xpath('//h2/span[@itemprop=\"name\"]/text()').extract_first()),\n 'addr_full': clean(response.xpath('normalize-space(//span[@itemprop=\"streetAddress\"]/text())').extract_first()),\n 'phone': response.xpath('normalize-space(//span[@itemprop=\"telephone\"]/text())').extract_first(),\n 'city': clean(response.xpath('normalize-space(//span[@itemprop=\"addressLocality\"]/text())').extract_first()),\n 'state': clean(response.xpath('normalize-space(//span[@itemprop=\"addressRegion\"]/text())').extract_first()),\n 'postcode': clean(response.xpath('normalize-space(//span[@itemprop=\"postalCode\"]/text())').extract_first()),\n 'country': clean(response.xpath('normalize-space(//span[@itemprop=\"addressCountry\"]/text())').extract_first()),\n 'ref': ref,\n 'website': response.url,\n 'lat': latitude,\n 'lon': longitude,\n }\n hours = response.xpath('//meta[@itemprop=\"openingHours\"]/@content').extract_first()\n if hours:\n properties['opening_hours'] = self.parse_hours(hours)\n yield GeojsonPointItem(**properties)\n\n def parse_state(self, response):\n urls = response.xpath('//ul[contains(@class, \"location-list-ul\")]//li/a/@href').extract()\n\n if not urls:\n urls = set(response.xpath('//ul[contains(@class, \"LocContainer\")]//a/@href').extract())\n urls = [u for u in urls if 'javascript:void' not in u]\n\n location_list = re.compile(\"^/en/locations/(?:us|ca|au)/[a-z]{2}/[^/]+$\")\n us_single_location = re.compile(r'/en/locations/(?:us|ca|au)/[a-z]{2}/[^/]+/[^/]+$')\n single_location = re.compile(r'/en/locations/(?!us|ca|au)[a-z]{2}/[^/]+/[^/]+$')\n\n for url in urls:\n if single_location.match(url) or us_single_location.match(url):\n yield scrapy.Request(response.urljoin(url), callback=self.parse_store)\n elif location_list.match(url):\n # skip these, we get them already\n continue\n elif 'xx' in url:\n continue\n\n def parse_country(self,response):\n urls = response.xpath('//div[contains(@class,\"country-wrapper\")]//li/a/@href').extract()\n\n for url in urls:\n yield scrapy.Request(response.urljoin(url), callback=self.parse_state)\n\n def parse(self, response):\n urls = response.xpath('//div[@class=\"wl-location-state\"]//li/a/@href').extract()\n\n for url in urls:\n yield scrapy.Request(response.urljoin(url), callback=self.parse_country)\n", "path": "locations/spiders/avis.py"}]} | 1,867 | 360 |
gh_patches_debug_13499 | rasdani/github-patches | git_diff | lutris__lutris-488 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Specify a User Agent for HTTP requests
Right now it's python-urllib/someversion, and Cloudflare sites (tested on a medium-protection site) block it and return a 403 status code.
Testing the same URL with curl works without it being blocked, so I'm guessing Cloudflare checks the request UA.
--- END ISSUE ---
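Editorial note on the issue above (not part of the original report): the general remedy is to send an explicit User-Agent header instead of urllib's default `python-urllib/x.y`. A minimal standard-library sketch follows; the URL and version string are placeholders, and the project's actual patch appears further below:

```python
# Sketch only: attach a custom User-Agent to an urllib request.
import urllib.request

headers = {'User-Agent': 'Lutris/0.4'}  # placeholder product/version string
req = urllib.request.Request(url='https://example.com/', headers=headers)
with urllib.request.urlopen(req, timeout=5) as response:
    print(response.status, len(response.read()))
```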
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lutris/util/http.py`
Content:
```
1 import json
2 import socket
3 import urllib.request
4 import urllib.error
5 import urllib.parse
6 from ssl import CertificateError
7
8 from lutris.settings import SITE_URL
9 from lutris.util.log import logger
10
11
12 class Request(object):
13 def __init__(self, url, timeout=5, stop_request=None,
14 thread_queue=None, headers={}):
15
16 if not url:
17 raise ValueError('An URL is required!')
18
19 if url.startswith('//'):
20 url = 'https:' + url
21
22 if url.startswith('/'):
23 url = SITE_URL + url
24
25 self.url = url
26 self.content = ''
27 self.timeout = timeout
28 self.stop_request = stop_request
29 self.thread_queue = thread_queue
30 self.buffer_size = 32 * 1024 # Bytes
31 self.downloaded_size = 0
32 self.headers = headers
33
34 def get(self, data=None):
35 req = urllib.request.Request(url=self.url, data=data, headers=self.headers)
36 try:
37 request = urllib.request.urlopen(req, timeout=self.timeout)
38 except (urllib.error.HTTPError, CertificateError) as e:
39 logger.error("Unavailable url (%s): %s", self.url, e)
40 except (socket.timeout, urllib.error.URLError) as e:
41 logger.error("Unable to connect to server (%s): %s", self.url, e)
42 else:
43 try:
44 total_size = request.info().get('Content-Length').strip()
45 total_size = int(total_size)
46 except AttributeError:
47 total_size = 0
48
49 chunks = []
50 while 1:
51 if self.stop_request and self.stop_request.is_set():
52 self.content = ''
53 return self
54 try:
55 chunk = request.read(self.buffer_size)
56 except socket.timeout as e:
57 logger.error("Request timed out")
58 self.content = ''
59 return self
60 self.downloaded_size += len(chunk)
61 if self.thread_queue:
62 self.thread_queue.put(
63 (chunk, self.downloaded_size, total_size)
64 )
65 else:
66 chunks.append(chunk)
67 if not chunk:
68 break
69 request.close()
70 self.content = b''.join(chunks)
71 return self
72
73 def post(self, data):
74 raise NotImplementedError
75
76 def write_to_file(self, path):
77 content = self.content
78 if content:
79 with open(path, 'wb') as dest_file:
80 dest_file.write(content)
81
82 @property
83 def json(self):
84 if self.content:
85 return json.loads(self.text)
86
87 @property
88 def text(self):
89 if self.content:
90 return self.content.decode()
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lutris/util/http.py b/lutris/util/http.py
--- a/lutris/util/http.py
+++ b/lutris/util/http.py
@@ -5,6 +5,8 @@
import urllib.parse
from ssl import CertificateError
+from lutris.settings import PROJECT
+from lutris.settings import VERSION
from lutris.settings import SITE_URL
from lutris.util.log import logger
@@ -29,6 +31,8 @@
self.thread_queue = thread_queue
self.buffer_size = 32 * 1024 # Bytes
self.downloaded_size = 0
+ if not headers.get('User-Agent'):
+ headers['User-Agent'] = PROJECT + '/' + VERSION
self.headers = headers
def get(self, data=None):
| {"golden_diff": "diff --git a/lutris/util/http.py b/lutris/util/http.py\n--- a/lutris/util/http.py\n+++ b/lutris/util/http.py\n@@ -5,6 +5,8 @@\n import urllib.parse\n from ssl import CertificateError\n \n+from lutris.settings import PROJECT\n+from lutris.settings import VERSION\n from lutris.settings import SITE_URL\n from lutris.util.log import logger\n \n@@ -29,6 +31,8 @@\n self.thread_queue = thread_queue\n self.buffer_size = 32 * 1024 # Bytes\n self.downloaded_size = 0\n+ if not headers.get('User-Agent'):\n+ headers['User-Agent'] = PROJECT + '/' + VERSION\n self.headers = headers\n \n def get(self, data=None):\n", "issue": "Specify a User Agent for HTTP requests\nRight now it's python-urllib/someversion, and Cloudflare sites (tested on medium protection site) blocks it and returns 403 status code.\r\nTesting the same url with curl works without it blocking, so I'm guessing Cloudflare checks the request UA.\n", "before_files": [{"content": "import json\nimport socket\nimport urllib.request\nimport urllib.error\nimport urllib.parse\nfrom ssl import CertificateError\n\nfrom lutris.settings import SITE_URL\nfrom lutris.util.log import logger\n\n\nclass Request(object):\n def __init__(self, url, timeout=5, stop_request=None,\n thread_queue=None, headers={}):\n\n if not url:\n raise ValueError('An URL is required!')\n\n if url.startswith('//'):\n url = 'https:' + url\n\n if url.startswith('/'):\n url = SITE_URL + url\n\n self.url = url\n self.content = ''\n self.timeout = timeout\n self.stop_request = stop_request\n self.thread_queue = thread_queue\n self.buffer_size = 32 * 1024 # Bytes\n self.downloaded_size = 0\n self.headers = headers\n\n def get(self, data=None):\n req = urllib.request.Request(url=self.url, data=data, headers=self.headers)\n try:\n request = urllib.request.urlopen(req, timeout=self.timeout)\n except (urllib.error.HTTPError, CertificateError) as e:\n logger.error(\"Unavailable url (%s): %s\", self.url, e)\n except (socket.timeout, urllib.error.URLError) as e:\n logger.error(\"Unable to connect to server (%s): %s\", self.url, e)\n else:\n try:\n total_size = request.info().get('Content-Length').strip()\n total_size = int(total_size)\n except AttributeError:\n total_size = 0\n\n chunks = []\n while 1:\n if self.stop_request and self.stop_request.is_set():\n self.content = ''\n return self\n try:\n chunk = request.read(self.buffer_size)\n except socket.timeout as e:\n logger.error(\"Request timed out\")\n self.content = ''\n return self\n self.downloaded_size += len(chunk)\n if self.thread_queue:\n self.thread_queue.put(\n (chunk, self.downloaded_size, total_size)\n )\n else:\n chunks.append(chunk)\n if not chunk:\n break\n request.close()\n self.content = b''.join(chunks)\n return self\n\n def post(self, data):\n raise NotImplementedError\n\n def write_to_file(self, path):\n content = self.content\n if content:\n with open(path, 'wb') as dest_file:\n dest_file.write(content)\n\n @property\n def json(self):\n if self.content:\n return json.loads(self.text)\n\n @property\n def text(self):\n if self.content:\n return self.content.decode()\n", "path": "lutris/util/http.py"}], "after_files": [{"content": "import json\nimport socket\nimport urllib.request\nimport urllib.error\nimport urllib.parse\nfrom ssl import CertificateError\n\nfrom lutris.settings import PROJECT\nfrom lutris.settings import VERSION\nfrom lutris.settings import SITE_URL\nfrom lutris.util.log import logger\n\n\nclass Request(object):\n def __init__(self, url, timeout=5, stop_request=None,\n 
thread_queue=None, headers={}):\n\n if not url:\n raise ValueError('An URL is required!')\n\n if url.startswith('//'):\n url = 'https:' + url\n\n if url.startswith('/'):\n url = SITE_URL + url\n\n self.url = url\n self.content = ''\n self.timeout = timeout\n self.stop_request = stop_request\n self.thread_queue = thread_queue\n self.buffer_size = 32 * 1024 # Bytes\n self.downloaded_size = 0\n if not headers.get('User-Agent'):\n headers['User-Agent'] = PROJECT + '/' + VERSION\n self.headers = headers\n\n def get(self, data=None):\n req = urllib.request.Request(url=self.url, data=data, headers=self.headers)\n try:\n request = urllib.request.urlopen(req, timeout=self.timeout)\n except (urllib.error.HTTPError, CertificateError) as e:\n logger.error(\"Unavailable url (%s): %s\", self.url, e)\n except (socket.timeout, urllib.error.URLError) as e:\n logger.error(\"Unable to connect to server (%s): %s\", self.url, e)\n else:\n try:\n total_size = request.info().get('Content-Length').strip()\n total_size = int(total_size)\n except AttributeError:\n total_size = 0\n\n chunks = []\n while 1:\n if self.stop_request and self.stop_request.is_set():\n self.content = ''\n return self\n try:\n chunk = request.read(self.buffer_size)\n except socket.timeout as e:\n logger.error(\"Request timed out\")\n self.content = ''\n return self\n self.downloaded_size += len(chunk)\n if self.thread_queue:\n self.thread_queue.put(\n (chunk, self.downloaded_size, total_size)\n )\n else:\n chunks.append(chunk)\n if not chunk:\n break\n request.close()\n self.content = b''.join(chunks)\n return self\n\n def post(self, data):\n raise NotImplementedError\n\n def write_to_file(self, path):\n content = self.content\n if content:\n with open(path, 'wb') as dest_file:\n dest_file.write(content)\n\n @property\n def json(self):\n if self.content:\n return json.loads(self.text)\n\n @property\n def text(self):\n if self.content:\n return self.content.decode()\n", "path": "lutris/util/http.py"}]} | 1,056 | 173 |
gh_patches_debug_6933 | rasdani/github-patches | git_diff | Flexget__Flexget-3204 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
python 3.10 issue
I have an issue with python 3.10 and Flexget. Greenlet has been updated to 1.1.2 because the 1.0.0 version is not compatible with python 3.10. After that, Flexget was installed successfully, but I got the error message below.
- FlexGet version: 3.1.137
- Python version: 3.10
- Installation method: pip
- Using daemon (yes/no): no
- OS and version: Linux / Slackware / 5.14.8 kernel
Traceback (most recent call last):
File "/usr/bin/flexget", line 5, in <module>
from flexget import main
File "/usr/lib/python3.10/site-packages/flexget/__init__.py", line 11, in <module>
from flexget.manager import Manager # noqa
File "/usr/lib/python3.10/site-packages/flexget/manager.py", line 47, in <module>
from flexget.ipc import IPCClient, IPCServer # noqa
File "/usr/lib/python3.10/site-packages/flexget/ipc.py", line 14, in <module>
from flexget import terminal
File "/usr/lib/python3.10/site-packages/flexget/terminal.py", line 7, in <module>
from colorclass import Color, Windows
File "/usr/lib/python3.10/site-packages/colorclass/__init__.py", line 11, in <module>
from colorclass.codes import list_tags # noqa
File "/usr/lib/python3.10/site-packages/colorclass/codes.py", line 4, in <module>
from collections import Mapping
ImportError: cannot import name 'Mapping' from 'collections' (/usr/lib/python3.10/collections/__init__.py)
Thanks!
--- END ISSUE ---
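For context, the traceback above fails on an import that Python 3.10 removed from the top-level `collections` module. The sketch below shows the usual compatibility pattern for that import, purely as an illustration of the error; the failing import lives in the third-party `colorclass` package, not in the repository files listed next.

```python
# Illustration of the failing import only; this is not a patch to FlexGet itself.
# Python 3.10 removed the ABC aliases (Mapping, MutableMapping, ...) from `collections`,
# so they must be imported from `collections.abc` instead.
try:
    from collections.abc import Mapping  # available since Python 3.3
except ImportError:  # pre-3.3 fallback, kept only for very old interpreters
    from collections import Mapping
```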
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import sys
2 from pathlib import Path
3 from typing import List
4
5 from setuptools import find_packages, setup
6
7 long_description = Path('README.rst').read_text()
8
9 # Populates __version__ without importing the package
10 __version__ = None
11 with open('flexget/_version.py', encoding='utf-8') as ver_file:
12 exec(ver_file.read()) # pylint: disable=W0122
13 if not __version__:
14 print('Could not find __version__ from flexget/_version.py')
15 sys.exit(1)
16
17
18 def load_requirements(filename: str) -> List[str]:
19 return [
20 line.strip()
21 for line in Path(filename).read_text().splitlines()
22 if not line.startswith('#')
23 ]
24
25
26 setup(
27 name='FlexGet',
28 version=__version__,
29 description='FlexGet is a program aimed to automate downloading or processing content (torrents, podcasts, etc.) '
30 'from different sources like RSS-feeds, html-pages, various sites and more.',
31 long_description=long_description,
32 long_description_content_type='text/x-rst',
33 author='Marko Koivusalo',
34 author_email='[email protected]',
35 license='MIT',
36 url='https://flexget.com',
37 project_urls={
38 'Repository': 'https://github.com/Flexget/Flexget',
39 'Issue Tracker': 'https://github.com/Flexget/Flexget/issues',
40 'Forum': 'https://discuss.flexget.com',
41 },
42 packages=find_packages(exclude=['flexget.tests']),
43 include_package_data=True,
44 zip_safe=False,
45 install_requires=load_requirements('requirements.txt'),
46 tests_require=['pytest'],
47 extras_require={'dev': load_requirements('dev-requirements.txt')},
48 entry_points={
49 'console_scripts': ['flexget = flexget:main'],
50 'gui_scripts': [
51 'flexget-headless = flexget:main'
52 ], # This is useful on Windows to avoid a cmd popup
53 },
54 python_requires='>=3.6',
55 classifiers=[
56 "Development Status :: 5 - Production/Stable",
57 "License :: OSI Approved :: MIT License",
58 "Operating System :: OS Independent",
59 "Programming Language :: Python",
60 "Programming Language :: Python :: 3.6",
61 "Programming Language :: Python :: 3.7",
62 "Programming Language :: Python :: 3.8",
63 "Programming Language :: Python :: 3.9",
64 "Programming Language :: Python :: Implementation :: CPython",
65 "Programming Language :: Python :: Implementation :: PyPy",
66 ],
67 )
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -61,6 +61,7 @@
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -61,6 +61,7 @@\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n+ \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n", "issue": "python 3.10 issue\nI have an issue with python 3.10 and Flexget. Greenlet has been updated to 1.1.2 because the 1.0.0 version is not compatible with python 3.10. After that Flexget was installed successfully but I got the error message below.\r\n\r\n- FlexGet version: 3.1.137\r\n- Python version: 3.10\r\n- Installation method: pip\r\n- Using daemon (yes/no): no\r\n- OS and version: Linux / Slackware / 5.14.8 kernel\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/bin/flexget\", line 5, in <module>\r\n from flexget import main\r\n File \"/usr/lib/python3.10/site-packages/flexget/__init__.py\", line 11, in <module>\r\n from flexget.manager import Manager # noqa\r\n File \"/usr/lib/python3.10/site-packages/flexget/manager.py\", line 47, in <module>\r\n from flexget.ipc import IPCClient, IPCServer # noqa\r\n File \"/usr/lib/python3.10/site-packages/flexget/ipc.py\", line 14, in <module>\r\n from flexget import terminal\r\n File \"/usr/lib/python3.10/site-packages/flexget/terminal.py\", line 7, in <module>\r\n from colorclass import Color, Windows\r\n File \"/usr/lib/python3.10/site-packages/colorclass/__init__.py\", line 11, in <module>\r\n from colorclass.codes import list_tags # noqa\r\n File \"/usr/lib/python3.10/site-packages/colorclass/codes.py\", line 4, in <module>\r\n from collections import Mapping\r\nImportError: cannot import name 'Mapping' from 'collections' (/usr/lib/python3.10/collections/__init__.py)\r\n\r\nThanks!\n", "before_files": [{"content": "import sys\nfrom pathlib import Path\nfrom typing import List\n\nfrom setuptools import find_packages, setup\n\nlong_description = Path('README.rst').read_text()\n\n# Populates __version__ without importing the package\n__version__ = None\nwith open('flexget/_version.py', encoding='utf-8') as ver_file:\n exec(ver_file.read()) # pylint: disable=W0122\nif not __version__:\n print('Could not find __version__ from flexget/_version.py')\n sys.exit(1)\n\n\ndef load_requirements(filename: str) -> List[str]:\n return [\n line.strip()\n for line in Path(filename).read_text().splitlines()\n if not line.startswith('#')\n ]\n\n\nsetup(\n name='FlexGet',\n version=__version__,\n description='FlexGet is a program aimed to automate downloading or processing content (torrents, podcasts, etc.) 
'\n 'from different sources like RSS-feeds, html-pages, various sites and more.',\n long_description=long_description,\n long_description_content_type='text/x-rst',\n author='Marko Koivusalo',\n author_email='[email protected]',\n license='MIT',\n url='https://flexget.com',\n project_urls={\n 'Repository': 'https://github.com/Flexget/Flexget',\n 'Issue Tracker': 'https://github.com/Flexget/Flexget/issues',\n 'Forum': 'https://discuss.flexget.com',\n },\n packages=find_packages(exclude=['flexget.tests']),\n include_package_data=True,\n zip_safe=False,\n install_requires=load_requirements('requirements.txt'),\n tests_require=['pytest'],\n extras_require={'dev': load_requirements('dev-requirements.txt')},\n entry_points={\n 'console_scripts': ['flexget = flexget:main'],\n 'gui_scripts': [\n 'flexget-headless = flexget:main'\n ], # This is useful on Windows to avoid a cmd popup\n },\n python_requires='>=3.6',\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "import sys\nfrom pathlib import Path\nfrom typing import List\n\nfrom setuptools import find_packages, setup\n\nlong_description = Path('README.rst').read_text()\n\n# Populates __version__ without importing the package\n__version__ = None\nwith open('flexget/_version.py', encoding='utf-8') as ver_file:\n exec(ver_file.read()) # pylint: disable=W0122\nif not __version__:\n print('Could not find __version__ from flexget/_version.py')\n sys.exit(1)\n\n\ndef load_requirements(filename: str) -> List[str]:\n return [\n line.strip()\n for line in Path(filename).read_text().splitlines()\n if not line.startswith('#')\n ]\n\n\nsetup(\n name='FlexGet',\n version=__version__,\n description='FlexGet is a program aimed to automate downloading or processing content (torrents, podcasts, etc.) 
'\n 'from different sources like RSS-feeds, html-pages, various sites and more.',\n long_description=long_description,\n long_description_content_type='text/x-rst',\n author='Marko Koivusalo',\n author_email='[email protected]',\n license='MIT',\n url='https://flexget.com',\n project_urls={\n 'Repository': 'https://github.com/Flexget/Flexget',\n 'Issue Tracker': 'https://github.com/Flexget/Flexget/issues',\n 'Forum': 'https://discuss.flexget.com',\n },\n packages=find_packages(exclude=['flexget.tests']),\n include_package_data=True,\n zip_safe=False,\n install_requires=load_requirements('requirements.txt'),\n tests_require=['pytest'],\n extras_require={'dev': load_requirements('dev-requirements.txt')},\n entry_points={\n 'console_scripts': ['flexget = flexget:main'],\n 'gui_scripts': [\n 'flexget-headless = flexget:main'\n ], # This is useful on Windows to avoid a cmd popup\n },\n python_requires='>=3.6',\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n)\n", "path": "setup.py"}]} | 1,355 | 108 |
gh_patches_debug_31687 | rasdani/github-patches | git_diff | liqd__a4-opin-735 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DST | Changes on "Aim"-Page
Hey hey, Nicolas from Nexus/WP3 sent some remarks for changes. On this page it would be helpful to
1) Start the sentences on the left-hand side with a capital letter; 2) change the question to "What is the aim of your participation project?"; and lastly 3) add a line above the first row: text in the left column "Aim", text in the right column "Examples".

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/blueprintsuggest/blueprints.py`
Content:
```
1 from collections import namedtuple
2 from enum import Enum, unique
3
4 from django.utils.translation import ugettext_lazy as _
5
6 from euth.documents import phases as documents_phases
7 from euth.flashpoll import phases as flashpoll_phases
8 from euth.ideas import phases as ideas_phases
9 from euth.maps import phases as map_phases
10
11
12 @unique
13 class Aim(Enum):
14 collect_ideas = (
15 'collect_ideas',
16 _('create and gather new ideas or visions.'),
17 [_('(Urban) planning processes'),
18 _('Develop concepts or guiding principles')]
19 )
20 discuss_topic = (
21 'discuss_topic',
22 _('gather feedback on a topic and discuss it in greater detail.'),
23 [_('Discuss existing concepts or plans'),
24 _('Develop solutions for existing problems')]
25 )
26 design_place = (
27 'design_place',
28 _('design a place.'),
29 [_('(Urban) planning processes'),
30 _('Set the agenda of an event')]
31 )
32 run_survey = (
33 'run_survey',
34 _('learn about what people like most.'),
35 [_('Majority votes'), _('Opinion polls')]
36 )
37 run_competition = (
38 'run_competition',
39 _('run a competition.'),
40 [_('All sorts of competitions, '
41 'like idea contests etc.')]
42 )
43 work_document = (
44 'work_document',
45 _('work collaboratively on a text document.'),
46 [_('Draft or revise statutes, articles, or charters'),
47 _('Involve different authors in writing a shared text')]
48 )
49
50 def __new__(cls, value, label, examples):
51 obj = object.__new__(cls)
52 obj._value_ = value
53 obj.label = label
54 obj.examples = examples
55 return obj
56
57
58 @unique
59 class Result(Enum):
60 collect_ideas = 3, _('Collection of ideas or arguments')
61 majority_vote = 2, _('Majority vote')
62 weighted_arguments = 1, _('Weighted arguments')
63
64 def __new__(cls, value, label):
65 obj = object.__new__(cls)
66 obj._value_ = value
67 obj.label = label
68 return obj
69
70
71 @unique
72 class Experience(Enum):
73 five_projects = 4, _('More than 5 participative projects')
74 two_projects = 3, _('More than 2 participative projects')
75 one_project = 2, ('1-2 partcipative projects')
76 no_projects = 1, ('I have no experiences in organising participative '
77 ' projects')
78
79 def __new__(cls, value, label):
80 obj = object.__new__(cls)
81 obj._value_ = value
82 obj.label = label
83 return obj
84
85
86 class Motivation(Enum):
87 high = 4, _('High motivation')
88 medium = 3, _('Medium motivation')
89 low = 2, _('Low motivation')
90 not_found = 1, _('No motivation')
91 unkown = 2, _('I don\'t know.')
92
93 def __new__(cls, value, label):
94 obj = object.__new__(cls)
95 obj._value_ = value
96 obj.label = label
97 return obj
98
99
100 Requirements = namedtuple(
101 'Requirements', [
102 'aims', 'results', 'experience', 'motivation'
103 ])
104
105
106 Blueprint = namedtuple(
107 'Blueprint', [
108 'title', 'description', 'content', 'image', 'settings_model',
109 'requirements'
110 ])
111
112
113 blueprints = [
114 ('brainstorming',
115 Blueprint(
116 title=_('Brainstorming'),
117 description=_('Collect ideas, questions and input concerning '
118 'a problem or a question from a wide array of people.'),
119 content=[
120 ideas_phases.CollectPhase(),
121 ],
122 image='images/brainstorming.png',
123 settings_model=None,
124 requirements=Requirements(
125 aims=[Aim.collect_ideas, Aim.discuss_topic],
126 results=[Result.collect_ideas],
127 experience=Experience.no_projects,
128 motivation=Motivation.not_found
129 ),
130 )),
131 ('map-brainstorming',
132 Blueprint(
133 title=_('Spatial Brainstorming'),
134 description=_('Collect ideas, questions and input concerning a '
135 'problem or a question from a wide array of people.'),
136 content=[
137 map_phases.CollectPhase(),
138 ],
139 image='images/spatial_brainstorming.png',
140 settings_model=('euth_maps', 'AreaSettings'),
141 requirements=Requirements(
142 aims=[Aim.design_place],
143 results=[Result.collect_ideas],
144 experience=Experience.no_projects,
145 motivation=Motivation.not_found
146 ),
147 )),
148 ('idea-challenge',
149 Blueprint(
150 title=_('Idea Challenge'),
151 description=_('Run a challenge and find the best ideas to solve '
152 'a particular problem.'),
153 content=[
154 ideas_phases.CollectPhase(),
155 ideas_phases.RatingPhase(),
156 ],
157 image='images/challenge.png',
158 settings_model=None,
159 requirements=Requirements(
160 aims=[Aim.run_competition, Aim.run_survey],
161 results=list(Result),
162 experience=Experience.one_project,
163 motivation=Motivation.low
164 ),
165 )),
166 ('map-idea-challenge',
167 Blueprint(
168 title=_('Spatial Idea Challenge'),
169 description=_('Run a challenge concerning a certain area or space in '
170 'your community and find the best ideas to solve a '
171 'particular problem.'),
172 content=[
173 map_phases.CollectPhase(),
174 map_phases.RatingPhase(),
175 ],
176 image='images/spatial_challenge.png',
177 settings_model=('euth_maps', 'AreaSettings'),
178 requirements=Requirements(
179 aims=[Aim.design_place],
180 results=list(Result),
181 experience=Experience.one_project,
182 motivation=Motivation.low
183 ),
184 )),
185 ('agenda-setting',
186 Blueprint(
187 title=_('Agenda Setting'),
188 description=_('You can involve everyone in planning a meeting. '
189 'Collect ideas for an upcoming event and let your '
190 'participants vote on the topics you want to tackle.'),
191 content=[
192 ideas_phases.CollectPhase(),
193 ideas_phases.RatingPhase(),
194 ],
195 image='images/agenda_setting.png',
196 settings_model=None,
197 requirements=Requirements(
198 aims=[Aim.collect_ideas, Aim.discuss_topic, Aim.run_survey],
199 results=list(Result),
200 experience=Experience.one_project,
201 motivation=Motivation.low
202 ),
203 )),
204 ('commenting-text',
205 Blueprint(
206 title=_('Text Review'),
207 description=_('Let participants discuss individual paragraphs of a '
208 'text. This is ideal for discussing position papers or '
209 'a mission statements with many people.'),
210 content=[
211 documents_phases.CreateDocumentPhase(),
212 documents_phases.CommentPhase(),
213 ],
214 image='images/text_review.png',
215 settings_model=None,
216 requirements=Requirements(
217 aims=[Aim.work_document],
218 results=None,
219 experience=None,
220 motivation=None
221 ),
222 )),
223 ('flashpoll',
224 Blueprint(
225 title=_('Poll'),
226 description=_('Run customizable, multi-step polls on OPIN to get '
227 'detailed opinions on topics from the public or your '
228 'members. Via the OPIN polling app for iOS and Android '
229 'these polls are also accessible on smartphones.'),
230 content=[
231 flashpoll_phases.FlashpollPhase(),
232 ],
233 image='images/poll.png',
234 settings_model=('euth_flashpoll', 'Flashpoll'),
235 requirements=Requirements(
236 aims=[Aim.run_survey],
237 results=[Result.majority_vote],
238 experience=Experience.no_projects,
239 motivation=Motivation.not_found
240 ),
241 )),
242 ]
243
244
245 def get_fallback_blueprint(aim):
246 fallbacks = {
247 Aim.collect_ideas: 'brainstorming',
248 Aim.discuss_topic: 'brainstorming',
249 Aim.design_place: 'map-brainstorming',
250 Aim.run_survey: 'flashpoll',
251 Aim.run_competition: 'agenda-setting',
252 Aim.work_document: 'commenting-text'
253 }
254
255 name = fallbacks[aim]
256 return name, dict(blueprints)[name]
257
```
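As a quick usage sketch of the enum pattern above (illustrative only, and it assumes the Django project that provides `euth` is importable and configured for translations), each `Aim` member built through the custom `__new__` carries a machine value plus a human-readable label and a list of examples:

```python
# Hypothetical interactive session against the module shown above.
from euth.blueprintsuggest.blueprints import Aim

aim = Aim.collect_ideas
print(aim.value)     # 'collect_ideas'
print(aim.label)     # lazily translated 'create and gather new ideas or visions.'
print(aim.examples)  # the two example strings attached to this aim
```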
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/euth/blueprintsuggest/blueprints.py b/euth/blueprintsuggest/blueprints.py
--- a/euth/blueprintsuggest/blueprints.py
+++ b/euth/blueprintsuggest/blueprints.py
@@ -13,36 +13,36 @@
class Aim(Enum):
collect_ideas = (
'collect_ideas',
- _('create and gather new ideas or visions.'),
+ _('Create and gather new ideas or visions.'),
[_('(Urban) planning processes'),
_('Develop concepts or guiding principles')]
)
discuss_topic = (
'discuss_topic',
- _('gather feedback on a topic and discuss it in greater detail.'),
+ _('Gather feedback on a topic and discuss it in greater detail.'),
[_('Discuss existing concepts or plans'),
_('Develop solutions for existing problems')]
)
design_place = (
'design_place',
- _('design a place.'),
+ _('Design a place.'),
[_('(Urban) planning processes'),
_('Set the agenda of an event')]
)
run_survey = (
'run_survey',
- _('learn about what people like most.'),
+ _('Learn about what people like most.'),
[_('Majority votes'), _('Opinion polls')]
)
run_competition = (
'run_competition',
- _('run a competition.'),
+ _('Run a competition.'),
[_('All sorts of competitions, '
'like idea contests etc.')]
)
work_document = (
'work_document',
- _('work collaboratively on a text document.'),
+ _('Work collaboratively on a text document.'),
[_('Draft or revise statutes, articles, or charters'),
_('Involve different authors in writing a shared text')]
)
| {"golden_diff": "diff --git a/euth/blueprintsuggest/blueprints.py b/euth/blueprintsuggest/blueprints.py\n--- a/euth/blueprintsuggest/blueprints.py\n+++ b/euth/blueprintsuggest/blueprints.py\n@@ -13,36 +13,36 @@\n class Aim(Enum):\n collect_ideas = (\n 'collect_ideas',\n- _('create and gather new ideas or visions.'),\n+ _('Create and gather new ideas or visions.'),\n [_('(Urban) planning processes'),\n _('Develop concepts or guiding principles')]\n )\n discuss_topic = (\n 'discuss_topic',\n- _('gather feedback on a topic and discuss it in greater detail.'),\n+ _('Gather feedback on a topic and discuss it in greater detail.'),\n [_('Discuss existing concepts or plans'),\n _('Develop solutions for existing problems')]\n )\n design_place = (\n 'design_place',\n- _('design a place.'),\n+ _('Design a place.'),\n [_('(Urban) planning processes'),\n _('Set the agenda of an event')]\n )\n run_survey = (\n 'run_survey',\n- _('learn about what people like most.'),\n+ _('Learn about what people like most.'),\n [_('Majority votes'), _('Opinion polls')]\n )\n run_competition = (\n 'run_competition',\n- _('run a competition.'),\n+ _('Run a competition.'),\n [_('All sorts of competitions, '\n 'like idea contests etc.')]\n )\n work_document = (\n 'work_document',\n- _('work collaboratively on a text document.'),\n+ _('Work collaboratively on a text document.'),\n [_('Draft or revise statutes, articles, or charters'),\n _('Involve different authors in writing a shared text')]\n )\n", "issue": "DST | Changes on \"Aim\"-Page \nHey hey, Nicolas from Nexus/WP3 sent some remarks for changes. On this page it would be helpful to \r\n1) Start the sentences on the left hand side with a capital letter 2) Change the question into \"What is the aim of your participation project?\" and lastly 3) Add a line above the first row: Text on left column \"Aim\", Text for the right column \"Examples\"\r\n\r\n\r\n\n", "before_files": [{"content": "from collections import namedtuple\nfrom enum import Enum, unique\n\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom euth.documents import phases as documents_phases\nfrom euth.flashpoll import phases as flashpoll_phases\nfrom euth.ideas import phases as ideas_phases\nfrom euth.maps import phases as map_phases\n\n\n@unique\nclass Aim(Enum):\n collect_ideas = (\n 'collect_ideas',\n _('create and gather new ideas or visions.'),\n [_('(Urban) planning processes'),\n _('Develop concepts or guiding principles')]\n )\n discuss_topic = (\n 'discuss_topic',\n _('gather feedback on a topic and discuss it in greater detail.'),\n [_('Discuss existing concepts or plans'),\n _('Develop solutions for existing problems')]\n )\n design_place = (\n 'design_place',\n _('design a place.'),\n [_('(Urban) planning processes'),\n _('Set the agenda of an event')]\n )\n run_survey = (\n 'run_survey',\n _('learn about what people like most.'),\n [_('Majority votes'), _('Opinion polls')]\n )\n run_competition = (\n 'run_competition',\n _('run a competition.'),\n [_('All sorts of competitions, '\n 'like idea contests etc.')]\n )\n work_document = (\n 'work_document',\n _('work collaboratively on a text document.'),\n [_('Draft or revise statutes, articles, or charters'),\n _('Involve different authors in writing a shared text')]\n )\n\n def __new__(cls, value, label, examples):\n obj = object.__new__(cls)\n obj._value_ = value\n obj.label = label\n obj.examples = examples\n return obj\n\n\n@unique\nclass Result(Enum):\n collect_ideas = 3, _('Collection of ideas or arguments')\n majority_vote = 
2, _('Majority vote')\n weighted_arguments = 1, _('Weighted arguments')\n\n def __new__(cls, value, label):\n obj = object.__new__(cls)\n obj._value_ = value\n obj.label = label\n return obj\n\n\n@unique\nclass Experience(Enum):\n five_projects = 4, _('More than 5 participative projects')\n two_projects = 3, _('More than 2 participative projects')\n one_project = 2, ('1-2 partcipative projects')\n no_projects = 1, ('I have no experiences in organising participative '\n ' projects')\n\n def __new__(cls, value, label):\n obj = object.__new__(cls)\n obj._value_ = value\n obj.label = label\n return obj\n\n\nclass Motivation(Enum):\n high = 4, _('High motivation')\n medium = 3, _('Medium motivation')\n low = 2, _('Low motivation')\n not_found = 1, _('No motivation')\n unkown = 2, _('I don\\'t know.')\n\n def __new__(cls, value, label):\n obj = object.__new__(cls)\n obj._value_ = value\n obj.label = label\n return obj\n\n\nRequirements = namedtuple(\n 'Requirements', [\n 'aims', 'results', 'experience', 'motivation'\n ])\n\n\nBlueprint = namedtuple(\n 'Blueprint', [\n 'title', 'description', 'content', 'image', 'settings_model',\n 'requirements'\n ])\n\n\nblueprints = [\n ('brainstorming',\n Blueprint(\n title=_('Brainstorming'),\n description=_('Collect ideas, questions and input concerning '\n 'a problem or a question from a wide array of people.'),\n content=[\n ideas_phases.CollectPhase(),\n ],\n image='images/brainstorming.png',\n settings_model=None,\n requirements=Requirements(\n aims=[Aim.collect_ideas, Aim.discuss_topic],\n results=[Result.collect_ideas],\n experience=Experience.no_projects,\n motivation=Motivation.not_found\n ),\n )),\n ('map-brainstorming',\n Blueprint(\n title=_('Spatial Brainstorming'),\n description=_('Collect ideas, questions and input concerning a '\n 'problem or a question from a wide array of people.'),\n content=[\n map_phases.CollectPhase(),\n ],\n image='images/spatial_brainstorming.png',\n settings_model=('euth_maps', 'AreaSettings'),\n requirements=Requirements(\n aims=[Aim.design_place],\n results=[Result.collect_ideas],\n experience=Experience.no_projects,\n motivation=Motivation.not_found\n ),\n )),\n ('idea-challenge',\n Blueprint(\n title=_('Idea Challenge'),\n description=_('Run a challenge and find the best ideas to solve '\n 'a particular problem.'),\n content=[\n ideas_phases.CollectPhase(),\n ideas_phases.RatingPhase(),\n ],\n image='images/challenge.png',\n settings_model=None,\n requirements=Requirements(\n aims=[Aim.run_competition, Aim.run_survey],\n results=list(Result),\n experience=Experience.one_project,\n motivation=Motivation.low\n ),\n )),\n ('map-idea-challenge',\n Blueprint(\n title=_('Spatial Idea Challenge'),\n description=_('Run a challenge concerning a certain area or space in '\n 'your community and find the best ideas to solve a '\n 'particular problem.'),\n content=[\n map_phases.CollectPhase(),\n map_phases.RatingPhase(),\n ],\n image='images/spatial_challenge.png',\n settings_model=('euth_maps', 'AreaSettings'),\n requirements=Requirements(\n aims=[Aim.design_place],\n results=list(Result),\n experience=Experience.one_project,\n motivation=Motivation.low\n ),\n )),\n ('agenda-setting',\n Blueprint(\n title=_('Agenda Setting'),\n description=_('You can involve everyone in planning a meeting. 
'\n 'Collect ideas for an upcoming event and let your '\n 'participants vote on the topics you want to tackle.'),\n content=[\n ideas_phases.CollectPhase(),\n ideas_phases.RatingPhase(),\n ],\n image='images/agenda_setting.png',\n settings_model=None,\n requirements=Requirements(\n aims=[Aim.collect_ideas, Aim.discuss_topic, Aim.run_survey],\n results=list(Result),\n experience=Experience.one_project,\n motivation=Motivation.low\n ),\n )),\n ('commenting-text',\n Blueprint(\n title=_('Text Review'),\n description=_('Let participants discuss individual paragraphs of a '\n 'text. This is ideal for discussing position papers or '\n 'a mission statements with many people.'),\n content=[\n documents_phases.CreateDocumentPhase(),\n documents_phases.CommentPhase(),\n ],\n image='images/text_review.png',\n settings_model=None,\n requirements=Requirements(\n aims=[Aim.work_document],\n results=None,\n experience=None,\n motivation=None\n ),\n )),\n ('flashpoll',\n Blueprint(\n title=_('Poll'),\n description=_('Run customizable, multi-step polls on OPIN to get '\n 'detailed opinions on topics from the public or your '\n 'members. Via the OPIN polling app for iOS and Android '\n 'these polls are also accessible on smartphones.'),\n content=[\n flashpoll_phases.FlashpollPhase(),\n ],\n image='images/poll.png',\n settings_model=('euth_flashpoll', 'Flashpoll'),\n requirements=Requirements(\n aims=[Aim.run_survey],\n results=[Result.majority_vote],\n experience=Experience.no_projects,\n motivation=Motivation.not_found\n ),\n )),\n]\n\n\ndef get_fallback_blueprint(aim):\n fallbacks = {\n Aim.collect_ideas: 'brainstorming',\n Aim.discuss_topic: 'brainstorming',\n Aim.design_place: 'map-brainstorming',\n Aim.run_survey: 'flashpoll',\n Aim.run_competition: 'agenda-setting',\n Aim.work_document: 'commenting-text'\n }\n\n name = fallbacks[aim]\n return name, dict(blueprints)[name]\n", "path": "euth/blueprintsuggest/blueprints.py"}], "after_files": [{"content": "from collections import namedtuple\nfrom enum import Enum, unique\n\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom euth.documents import phases as documents_phases\nfrom euth.flashpoll import phases as flashpoll_phases\nfrom euth.ideas import phases as ideas_phases\nfrom euth.maps import phases as map_phases\n\n\n@unique\nclass Aim(Enum):\n collect_ideas = (\n 'collect_ideas',\n _('Create and gather new ideas or visions.'),\n [_('(Urban) planning processes'),\n _('Develop concepts or guiding principles')]\n )\n discuss_topic = (\n 'discuss_topic',\n _('Gather feedback on a topic and discuss it in greater detail.'),\n [_('Discuss existing concepts or plans'),\n _('Develop solutions for existing problems')]\n )\n design_place = (\n 'design_place',\n _('Design a place.'),\n [_('(Urban) planning processes'),\n _('Set the agenda of an event')]\n )\n run_survey = (\n 'run_survey',\n _('Learn about what people like most.'),\n [_('Majority votes'), _('Opinion polls')]\n )\n run_competition = (\n 'run_competition',\n _('Run a competition.'),\n [_('All sorts of competitions, '\n 'like idea contests etc.')]\n )\n work_document = (\n 'work_document',\n _('Work collaboratively on a text document.'),\n [_('Draft or revise statutes, articles, or charters'),\n _('Involve different authors in writing a shared text')]\n )\n\n def __new__(cls, value, label, examples):\n obj = object.__new__(cls)\n obj._value_ = value\n obj.label = label\n obj.examples = examples\n return obj\n\n\n@unique\nclass Result(Enum):\n collect_ideas = 3, _('Collection of ideas or 
arguments')\n majority_vote = 2, _('Majority vote')\n weighted_arguments = 1, _('Weighted arguments')\n\n def __new__(cls, value, label):\n obj = object.__new__(cls)\n obj._value_ = value\n obj.label = label\n return obj\n\n\n@unique\nclass Experience(Enum):\n five_projects = 4, _('More than 5 participative projects')\n two_projects = 3, _('More than 2 participative projects')\n one_project = 2, ('1-2 partcipative projects')\n no_projects = 1, ('I have no experiences in organising participative '\n ' projects')\n\n def __new__(cls, value, label):\n obj = object.__new__(cls)\n obj._value_ = value\n obj.label = label\n return obj\n\n\nclass Motivation(Enum):\n high = 4, _('High motivation')\n medium = 3, _('Medium motivation')\n low = 2, _('Low motivation')\n not_found = 1, _('No motivation')\n unkown = 2, _('I don\\'t know.')\n\n def __new__(cls, value, label):\n obj = object.__new__(cls)\n obj._value_ = value\n obj.label = label\n return obj\n\n\nRequirements = namedtuple(\n 'Requirements', [\n 'aims', 'results', 'experience', 'motivation'\n ])\n\n\nBlueprint = namedtuple(\n 'Blueprint', [\n 'title', 'description', 'content', 'image', 'settings_model',\n 'requirements'\n ])\n\n\nblueprints = [\n ('brainstorming',\n Blueprint(\n title=_('Brainstorming'),\n description=_('Collect ideas, questions and input concerning '\n 'a problem or a question from a wide array of people.'),\n content=[\n ideas_phases.CollectPhase(),\n ],\n image='images/brainstorming.png',\n settings_model=None,\n requirements=Requirements(\n aims=[Aim.collect_ideas, Aim.discuss_topic],\n results=[Result.collect_ideas],\n experience=Experience.no_projects,\n motivation=Motivation.not_found\n ),\n )),\n ('map-brainstorming',\n Blueprint(\n title=_('Spatial Brainstorming'),\n description=_('Collect ideas, questions and input concerning a '\n 'problem or a question from a wide array of people.'),\n content=[\n map_phases.CollectPhase(),\n ],\n image='images/spatial_brainstorming.png',\n settings_model=('euth_maps', 'AreaSettings'),\n requirements=Requirements(\n aims=[Aim.design_place],\n results=[Result.collect_ideas],\n experience=Experience.no_projects,\n motivation=Motivation.not_found\n ),\n )),\n ('idea-challenge',\n Blueprint(\n title=_('Idea Challenge'),\n description=_('Run a challenge and find the best ideas to solve '\n 'a particular problem.'),\n content=[\n ideas_phases.CollectPhase(),\n ideas_phases.RatingPhase(),\n ],\n image='images/challenge.png',\n settings_model=None,\n requirements=Requirements(\n aims=[Aim.run_competition, Aim.run_survey],\n results=list(Result),\n experience=Experience.one_project,\n motivation=Motivation.low\n ),\n )),\n ('map-idea-challenge',\n Blueprint(\n title=_('Spatial Idea Challenge'),\n description=_('Run a challenge concerning a certain area or space in '\n 'your community and find the best ideas to solve a '\n 'particular problem.'),\n content=[\n map_phases.CollectPhase(),\n map_phases.RatingPhase(),\n ],\n image='images/spatial_challenge.png',\n settings_model=('euth_maps', 'AreaSettings'),\n requirements=Requirements(\n aims=[Aim.design_place],\n results=list(Result),\n experience=Experience.one_project,\n motivation=Motivation.low\n ),\n )),\n ('agenda-setting',\n Blueprint(\n title=_('Agenda Setting'),\n description=_('You can involve everyone in planning a meeting. 
'\n 'Collect ideas for an upcoming event and let your '\n 'participants vote on the topics you want to tackle.'),\n content=[\n ideas_phases.CollectPhase(),\n ideas_phases.RatingPhase(),\n ],\n image='images/agenda_setting.png',\n settings_model=None,\n requirements=Requirements(\n aims=[Aim.collect_ideas, Aim.discuss_topic, Aim.run_survey],\n results=list(Result),\n experience=Experience.one_project,\n motivation=Motivation.low\n ),\n )),\n ('commenting-text',\n Blueprint(\n title=_('Text Review'),\n description=_('Let participants discuss individual paragraphs of a '\n 'text. This is ideal for discussing position papers or '\n 'a mission statements with many people.'),\n content=[\n documents_phases.CreateDocumentPhase(),\n documents_phases.CommentPhase(),\n ],\n image='images/text_review.png',\n settings_model=None,\n requirements=Requirements(\n aims=[Aim.work_document],\n results=None,\n experience=None,\n motivation=None\n ),\n )),\n ('flashpoll',\n Blueprint(\n title=_('Poll'),\n description=_('Run customizable, multi-step polls on OPIN to get '\n 'detailed opinions on topics from the public or your '\n 'members. Via the OPIN polling app for iOS and Android '\n 'these polls are also accessible on smartphones.'),\n content=[\n flashpoll_phases.FlashpollPhase(),\n ],\n image='images/poll.png',\n settings_model=('euth_flashpoll', 'Flashpoll'),\n requirements=Requirements(\n aims=[Aim.run_survey],\n results=[Result.majority_vote],\n experience=Experience.no_projects,\n motivation=Motivation.not_found\n ),\n )),\n]\n\n\ndef get_fallback_blueprint(aim):\n fallbacks = {\n Aim.collect_ideas: 'brainstorming',\n Aim.discuss_topic: 'brainstorming',\n Aim.design_place: 'map-brainstorming',\n Aim.run_survey: 'flashpoll',\n Aim.run_competition: 'agenda-setting',\n Aim.work_document: 'commenting-text'\n }\n\n name = fallbacks[aim]\n return name, dict(blueprints)[name]\n", "path": "euth/blueprintsuggest/blueprints.py"}]} | 2,804 | 367 |
gh_patches_debug_57793 | rasdani/github-patches | git_diff | catalyst-team__catalyst-855 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
EarlyStoppingCallback considers first epoch as bad
## 🐛 Bug Report
EarlyStoppingCallback considers the first epoch as bad. This can lead, for example, to always stopping after the first epoch if patience=1.
### How To Reproduce
You can train a model with early stopping and patience=1 and see that it always stops after the first epoch. Or you can use the unit test below that I added to the pull request.
#### Code sample
```python
from unittest.mock import MagicMock, PropertyMock
from catalyst.core import EarlyStoppingCallback
def test_patience1():
"""@TODO: Docs. Contribution is welcome."""
early_stop = EarlyStoppingCallback(1)
runner = MagicMock()
type(runner).stage_name = PropertyMock(return_value="training")
type(runner).valid_metrics = PropertyMock(return_value={"loss": 0.001})
stop_mock = PropertyMock(return_value=False)
type(runner).need_early_stop = stop_mock
early_stop.on_epoch_end(runner)
assert stop_mock.mock_calls == []
```
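For reference, a minimal sketch of the comparison behind that failing expectation, assuming the callback's defaults shown later in this prompt (`minimize=True`, `min_delta=1e-6`); it is a standalone illustration, not part of the test above:

```python
# First epoch: best_score is None, so it is set to the score, and the
# is_better check then compares the score against itself, which fails.
best_score = None
score = 0.001
if best_score is None:
    best_score = score
if score <= best_score - 1e-6:   # 0.001 <= 0.000999 -> False
    num_bad_epochs = 0
else:
    num_bad_epochs = 1           # the very first epoch is already counted as bad
```

With `patience=1`, `num_bad_epochs >= patience` is immediately true, which is why training stops after one epoch.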
### Expected behavior
Training doesn't stop after the first epoch, and the unit test passes.
### Environment
```bash
Catalyst version: 20.06
PyTorch version: 1.5.1
Is debug build: No
CUDA used to build PyTorch: None
TensorFlow version: N/A
TensorBoard version: 2.2.2
OS: Mac OSX 10.15.5
GCC version: Could not collect
CMake version: version 3.8.0
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] catalyst-codestyle==20.4
[pip3] catalyst-sphinx-theme==1.1.1
[pip3] efficientnet-pytorch==0.6.3
[pip3] numpy==1.18.5
[pip3] segmentation-models-pytorch==0.1.0
[pip3] tensorboard==2.2.2
[pip3] tensorboard-plugin-wit==1.6.0.post3
[pip3] tensorboardX==2.0
[pip3] torch==1.5.1
[pip3] torchvision==0.6.1
[conda] catalyst-codestyle 20.4 <pip>
[conda] catalyst-sphinx-theme 1.1.1 <pip>
[conda] efficientnet-pytorch 0.6.3 <pip>
[conda] numpy 1.18.5 <pip>
[conda] segmentation-models-pytorch 0.1.0 <pip>
[conda] tensorboard 2.2.2 <pip>
[conda] tensorboard-plugin-wit 1.6.0.post3 <pip>
[conda] tensorboardX 2.0 <pip>
[conda] torch 1.5.1 <pip>
[conda] torchvision 0.6.1 <pip>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `catalyst/core/callbacks/early_stop.py`
Content:
```
1 from catalyst.core.callback import Callback, CallbackNode, CallbackOrder
2 from catalyst.core.runner import IRunner
3
4
5 class CheckRunCallback(Callback):
6 """@TODO: Docs. Contribution is welcome."""
7
8 def __init__(self, num_batch_steps: int = 3, num_epoch_steps: int = 2):
9 """@TODO: Docs. Contribution is welcome."""
10 super().__init__(order=CallbackOrder.external, node=CallbackNode.all)
11 self.num_batch_steps = num_batch_steps
12 self.num_epoch_steps = num_epoch_steps
13
14 def on_epoch_end(self, runner: IRunner):
15 """@TODO: Docs. Contribution is welcome."""
16 if runner.epoch >= self.num_epoch_steps:
17 runner.need_early_stop = True
18
19 def on_batch_end(self, runner: IRunner):
20 """@TODO: Docs. Contribution is welcome."""
21 if runner.loader_batch_step >= self.num_batch_steps:
22 runner.need_early_stop = True
23
24
25 class EarlyStoppingCallback(Callback):
26 """@TODO: Docs. Contribution is welcome."""
27
28 def __init__(
29 self,
30 patience: int,
31 metric: str = "loss",
32 minimize: bool = True,
33 min_delta: float = 1e-6,
34 ):
35 """@TODO: Docs. Contribution is welcome."""
36 super().__init__(order=CallbackOrder.external, node=CallbackNode.all)
37 self.best_score = None
38 self.metric = metric
39 self.patience = patience
40 self.num_bad_epochs = 0
41 self.is_better = None
42
43 if minimize:
44 self.is_better = lambda score, best: score <= (best - min_delta)
45 else:
46 self.is_better = lambda score, best: score >= (best + min_delta)
47
48 def on_epoch_end(self, runner: IRunner) -> None:
49 """@TODO: Docs. Contribution is welcome."""
50 if runner.stage_name.startswith("infer"):
51 return
52
53 score = runner.valid_metrics[self.metric]
54 if self.best_score is None:
55 self.best_score = score
56 if self.is_better(score, self.best_score):
57 self.num_bad_epochs = 0
58 self.best_score = score
59 else:
60 self.num_bad_epochs += 1
61
62 if self.num_bad_epochs >= self.patience:
63 print(f"Early stop at {runner.epoch} epoch")
64 runner.need_early_stop = True
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/catalyst/core/callbacks/early_stop.py b/catalyst/core/callbacks/early_stop.py
--- a/catalyst/core/callbacks/early_stop.py
+++ b/catalyst/core/callbacks/early_stop.py
@@ -51,9 +51,7 @@
return
score = runner.valid_metrics[self.metric]
- if self.best_score is None:
- self.best_score = score
- if self.is_better(score, self.best_score):
+ if self.best_score is None or self.is_better(score, self.best_score):
self.num_bad_epochs = 0
self.best_score = score
else:
| {"golden_diff": "diff --git a/catalyst/core/callbacks/early_stop.py b/catalyst/core/callbacks/early_stop.py\n--- a/catalyst/core/callbacks/early_stop.py\n+++ b/catalyst/core/callbacks/early_stop.py\n@@ -51,9 +51,7 @@\n return\n \n score = runner.valid_metrics[self.metric]\n- if self.best_score is None:\n- self.best_score = score\n- if self.is_better(score, self.best_score):\n+ if self.best_score is None or self.is_better(score, self.best_score):\n self.num_bad_epochs = 0\n self.best_score = score\n else:\n", "issue": "EarlyStoppingCallback considers first epoch as bad\n## \ud83d\udc1b Bug Report\r\nEarlyStoppingCallback considers first epoch as bad. This can lead for example to always stopping after first epoch if patience=1.\r\n\r\n\r\n### How To Reproduce\r\nYou can train a model with early stopping and patience=1 and see that it always stops after first epoch. Or you can use the unit test below that I added to pull request.\r\n\r\n#### Code sample\r\n```python\r\nfrom unittest.mock import MagicMock, PropertyMock\r\n\r\nfrom catalyst.core import EarlyStoppingCallback\r\n\r\n\r\ndef test_patience1():\r\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\r\n early_stop = EarlyStoppingCallback(1)\r\n runner = MagicMock()\r\n type(runner).stage_name = PropertyMock(return_value=\"training\")\r\n type(runner).valid_metrics = PropertyMock(return_value={\"loss\": 0.001})\r\n stop_mock = PropertyMock(return_value=False)\r\n type(runner).need_early_stop = stop_mock\r\n\r\n early_stop.on_epoch_end(runner)\r\n\r\n assert stop_mock.mock_calls == []\r\n```\r\n\r\n### Expected behavior\r\nTraining doesn't stop after first epoch. And the unit test passes.\r\n\r\n\r\n### Environment\r\n```bash\r\nCatalyst version: 20.06\r\nPyTorch version: 1.5.1\r\nIs debug build: No\r\nCUDA used to build PyTorch: None\r\nTensorFlow version: N/A\r\nTensorBoard version: 2.2.2\r\n\r\nOS: Mac OSX 10.15.5\r\nGCC version: Could not collect\r\nCMake version: version 3.8.0\r\n\r\nPython version: 3.7\r\nIs CUDA available: No\r\nCUDA runtime version: No CUDA\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\n\r\nVersions of relevant libraries:\r\n[pip3] catalyst-codestyle==20.4\r\n[pip3] catalyst-sphinx-theme==1.1.1\r\n[pip3] efficientnet-pytorch==0.6.3\r\n[pip3] numpy==1.18.5\r\n[pip3] segmentation-models-pytorch==0.1.0\r\n[pip3] tensorboard==2.2.2\r\n[pip3] tensorboard-plugin-wit==1.6.0.post3\r\n[pip3] tensorboardX==2.0\r\n[pip3] torch==1.5.1\r\n[pip3] torchvision==0.6.1\r\n[conda] catalyst-codestyle 20.4 <pip>\r\n[conda] catalyst-sphinx-theme 1.1.1 <pip>\r\n[conda] efficientnet-pytorch 0.6.3 <pip>\r\n[conda] numpy 1.18.5 <pip>\r\n[conda] segmentation-models-pytorch 0.1.0 <pip>\r\n[conda] tensorboard 2.2.2 <pip>\r\n[conda] tensorboard-plugin-wit 1.6.0.post3 <pip>\r\n[conda] tensorboardX 2.0 <pip>\r\n[conda] torch 1.5.1 <pip>\r\n[conda] torchvision 0.6.1 <pip>\r\n```\r\n\n", "before_files": [{"content": "from catalyst.core.callback import Callback, CallbackNode, CallbackOrder\nfrom catalyst.core.runner import IRunner\n\n\nclass CheckRunCallback(Callback):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n\n def __init__(self, num_batch_steps: int = 3, num_epoch_steps: int = 2):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n super().__init__(order=CallbackOrder.external, node=CallbackNode.all)\n self.num_batch_steps = num_batch_steps\n self.num_epoch_steps = num_epoch_steps\n\n def on_epoch_end(self, runner: IRunner):\n \"\"\"@TODO: Docs. 
Contribution is welcome.\"\"\"\n if runner.epoch >= self.num_epoch_steps:\n runner.need_early_stop = True\n\n def on_batch_end(self, runner: IRunner):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n if runner.loader_batch_step >= self.num_batch_steps:\n runner.need_early_stop = True\n\n\nclass EarlyStoppingCallback(Callback):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n\n def __init__(\n self,\n patience: int,\n metric: str = \"loss\",\n minimize: bool = True,\n min_delta: float = 1e-6,\n ):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n super().__init__(order=CallbackOrder.external, node=CallbackNode.all)\n self.best_score = None\n self.metric = metric\n self.patience = patience\n self.num_bad_epochs = 0\n self.is_better = None\n\n if minimize:\n self.is_better = lambda score, best: score <= (best - min_delta)\n else:\n self.is_better = lambda score, best: score >= (best + min_delta)\n\n def on_epoch_end(self, runner: IRunner) -> None:\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n if runner.stage_name.startswith(\"infer\"):\n return\n\n score = runner.valid_metrics[self.metric]\n if self.best_score is None:\n self.best_score = score\n if self.is_better(score, self.best_score):\n self.num_bad_epochs = 0\n self.best_score = score\n else:\n self.num_bad_epochs += 1\n\n if self.num_bad_epochs >= self.patience:\n print(f\"Early stop at {runner.epoch} epoch\")\n runner.need_early_stop = True\n", "path": "catalyst/core/callbacks/early_stop.py"}], "after_files": [{"content": "from catalyst.core.callback import Callback, CallbackNode, CallbackOrder\nfrom catalyst.core.runner import IRunner\n\n\nclass CheckRunCallback(Callback):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n\n def __init__(self, num_batch_steps: int = 3, num_epoch_steps: int = 2):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n super().__init__(order=CallbackOrder.external, node=CallbackNode.all)\n self.num_batch_steps = num_batch_steps\n self.num_epoch_steps = num_epoch_steps\n\n def on_epoch_end(self, runner: IRunner):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n if runner.epoch >= self.num_epoch_steps:\n runner.need_early_stop = True\n\n def on_batch_end(self, runner: IRunner):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n if runner.loader_batch_step >= self.num_batch_steps:\n runner.need_early_stop = True\n\n\nclass EarlyStoppingCallback(Callback):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n\n def __init__(\n self,\n patience: int,\n metric: str = \"loss\",\n minimize: bool = True,\n min_delta: float = 1e-6,\n ):\n \"\"\"@TODO: Docs. Contribution is welcome.\"\"\"\n super().__init__(order=CallbackOrder.external, node=CallbackNode.all)\n self.best_score = None\n self.metric = metric\n self.patience = patience\n self.num_bad_epochs = 0\n self.is_better = None\n\n if minimize:\n self.is_better = lambda score, best: score <= (best - min_delta)\n else:\n self.is_better = lambda score, best: score >= (best + min_delta)\n\n def on_epoch_end(self, runner: IRunner) -> None:\n \"\"\"@TODO: Docs. 
Contribution is welcome.\"\"\"\n if runner.stage_name.startswith(\"infer\"):\n return\n\n score = runner.valid_metrics[self.metric]\n if self.best_score is None or self.is_better(score, self.best_score):\n self.num_bad_epochs = 0\n self.best_score = score\n else:\n self.num_bad_epochs += 1\n\n if self.num_bad_epochs >= self.patience:\n print(f\"Early stop at {runner.epoch} epoch\")\n runner.need_early_stop = True\n", "path": "catalyst/core/callbacks/early_stop.py"}]} | 1,605 | 144 |
gh_patches_debug_840 | rasdani/github-patches | git_diff | nilearn__nilearn-507 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add test for compatibility of old version of six
For the moment, we are compatible with the latest version of six. Recently, somebody pointed out that we did not support six 1.5.2. We should investigate, decide which version we should be compatible with, and then add this to Travis.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `continuous_integration/show-python-packages-versions.py`
Content:
```
1 import sys
2
3 DEPENDENCIES = ['numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']
4
5
6 def print_package_version(package_name, indent=' '):
7 try:
8 package = __import__(package_name)
9 version = getattr(package, '__version__', None)
10 package_file = getattr(package, '__file__', )
11 provenance_info = '{0} from {1}'.format(version, package_file)
12 except ImportError:
13 provenance_info = 'not installed'
14
15 print('{0}{1}: {2}'.format(indent, package_name, provenance_info))
16
17 if __name__ == '__main__':
18 print('=' * 120)
19 print('Python %s' % str(sys.version))
20 print('from: %s\n' % sys.executable)
21
22 print('Dependencies versions')
23 for package_name in DEPENDENCIES:
24 print_package_version(package_name)
25 print('=' * 120)
26
```
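A short usage sketch of the helper above (hypothetical output, and the file path is made up): checking `'six'` the same way the listed dependencies are checked would surface the installed version in the CI log, which is the information the issue asks for.

```python
# Hypothetical call and output, mirroring the format string used above.
print_package_version('six')
# ->     six: 1.5.2 from /usr/lib/python2.7/site-packages/six.pyc
# or, if the package is missing:
# ->     six: not installed
```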
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/continuous_integration/show-python-packages-versions.py b/continuous_integration/show-python-packages-versions.py
--- a/continuous_integration/show-python-packages-versions.py
+++ b/continuous_integration/show-python-packages-versions.py
@@ -1,6 +1,6 @@
import sys
-DEPENDENCIES = ['numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']
+DEPENDENCIES = ['six', 'numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']
def print_package_version(package_name, indent=' '):
| {"golden_diff": "diff --git a/continuous_integration/show-python-packages-versions.py b/continuous_integration/show-python-packages-versions.py\n--- a/continuous_integration/show-python-packages-versions.py\n+++ b/continuous_integration/show-python-packages-versions.py\n@@ -1,6 +1,6 @@\n import sys\n \n-DEPENDENCIES = ['numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']\n+DEPENDENCIES = ['six', 'numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']\n \n \n def print_package_version(package_name, indent=' '):\n", "issue": "Add test for compatibility of old version of six\nFor the moment, we are compatible with the latest version of six. Recently, somebody pointed out that we did not support six 1.5.2. We should investigate, decide which version we should be compatible with and then add this to Travis.\n\n", "before_files": [{"content": "import sys\n\nDEPENDENCIES = ['numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']\n\n\ndef print_package_version(package_name, indent=' '):\n try:\n package = __import__(package_name)\n version = getattr(package, '__version__', None)\n package_file = getattr(package, '__file__', )\n provenance_info = '{0} from {1}'.format(version, package_file)\n except ImportError:\n provenance_info = 'not installed'\n\n print('{0}{1}: {2}'.format(indent, package_name, provenance_info))\n\nif __name__ == '__main__':\n print('=' * 120)\n print('Python %s' % str(sys.version))\n print('from: %s\\n' % sys.executable)\n\n print('Dependencies versions')\n for package_name in DEPENDENCIES:\n print_package_version(package_name)\n print('=' * 120)\n", "path": "continuous_integration/show-python-packages-versions.py"}], "after_files": [{"content": "import sys\n\nDEPENDENCIES = ['six', 'numpy', 'scipy', 'sklearn', 'matplotlib', 'nibabel']\n\n\ndef print_package_version(package_name, indent=' '):\n try:\n package = __import__(package_name)\n version = getattr(package, '__version__', None)\n package_file = getattr(package, '__file__', )\n provenance_info = '{0} from {1}'.format(version, package_file)\n except ImportError:\n provenance_info = 'not installed'\n\n print('{0}{1}: {2}'.format(indent, package_name, provenance_info))\n\nif __name__ == '__main__':\n print('=' * 120)\n print('Python %s' % str(sys.version))\n print('from: %s\\n' % sys.executable)\n\n print('Dependencies versions')\n for package_name in DEPENDENCIES:\n print_package_version(package_name)\n print('=' * 120)\n", "path": "continuous_integration/show-python-packages-versions.py"}]} | 569 | 123 |
gh_patches_debug_11637 | rasdani/github-patches | git_diff | getsentry__sentry-59857 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jira deprecation of glance panels
Notice from Atlassian Support team about glance panel deprecation.
AC:
- Review the deprecation plan
- Build a recommendation based on how we're impacted. If minor development work is required, complete that with this ticket. If significant work is required, notify EM/PM to share impact and come up with next steps together.
Email from Atlassian:
```
Hope you are having a good day!
As part of this deprecation notice (https://developer.atlassian.com/cloud/jira/platform/changelog/#CHANGE-897), we are reaching out because we have identified that your app, “Sentry,” will be affected by the deprecation of glance panels.
This was initially scheduled for the 6th of October, but we have delayed it until the 30th of November.
The jiraIssueGlances and jira:issueGlance modules in Forge (https://developer.atlassian.com/platform/forge/manifest-reference/modules/jira-issue-glance/) and Connect (https://developer.atlassian.com/cloud/jira/platform/modules/issue-glance/) are being deprecated and replaced with the issueContext module.
We recommend transitioning from the glance panel to the new issue context module before the 30th of November.
Please note, we will not be extending this deprecation date as we announced it on the 30th of March.
Let me know if you need any further assistance,
Ahmud
Product Manager-Jira Cloud
```
--- END ISSUE ---
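To gauge how the add-on is impacted, a helper along the following lines could scan a Connect descriptor for the deprecated module key before scoping any migration work (a rough sketch; the constant and function names are invented and not part of the Sentry codebase):
```python
# Hypothetical impact check: list deprecated Connect modules present in a descriptor.
DEPRECATED_MODULE_KEYS = {"jiraIssueGlances"}  # per the Atlassian notice above


def find_deprecated_modules(descriptor: dict) -> list:
    """Return any deprecated module keys used by a Connect descriptor."""
    modules = descriptor.get("modules", {})
    return sorted(set(modules) & DEPRECATED_MODULE_KEYS)


if __name__ == "__main__":
    example = {"modules": {"jiraIssueGlances": [], "configurePage": {}}}
    print(find_deprecated_modules(example))  # -> ['jiraIssueGlances']
```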
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/integrations/jira/endpoints/descriptor.py`
Content:
```
1 from django.conf import settings
2 from django.urls import reverse
3 from rest_framework.request import Request
4 from rest_framework.response import Response
5
6 from sentry.api.api_publish_status import ApiPublishStatus
7 from sentry.api.base import Endpoint, control_silo_endpoint
8 from sentry.utils.assets import get_frontend_app_asset_url
9 from sentry.utils.http import absolute_uri
10
11 from .. import JIRA_KEY
12
13 scopes = ["read", "write", "act_as_user"]
14 # For Jira, only approved apps can use the access_email_addresses scope
15 # This scope allows Sentry to use the email endpoint (https://developer.atlassian.com/cloud/jira/platform/rest/v3/#api-rest-api-3-user-email-get)
16 # We use the email with Jira 2-way sync in order to match the user
17 if settings.JIRA_USE_EMAIL_SCOPE:
18 scopes.append("access_email_addresses")
19
20
21 @control_silo_endpoint
22 class JiraDescriptorEndpoint(Endpoint):
23 publish_status = {
24 "GET": ApiPublishStatus.UNKNOWN,
25 }
26 """
27 Provides the metadata needed by Jira to setup an instance of the Sentry integration within Jira.
28 Only used by on-prem orgs and devs setting up local instances of the integration. (Sentry SAAS
29 already has an established, official instance of the Sentry integration registered with Jira.)
30 """
31
32 authentication_classes = ()
33 permission_classes = ()
34
35 def get(self, request: Request) -> Response:
36 sentry_logo = absolute_uri(
37 get_frontend_app_asset_url("sentry", "entrypoints/logo-sentry.svg")
38 )
39 return self.respond(
40 {
41 "name": "Sentry",
42 "description": "Connect your Sentry organization to one or more of your Jira cloud instances. Get started streamlining your bug-squashing workflow by allowing your Sentry and Jira instances to work together.",
43 "key": JIRA_KEY,
44 "baseUrl": absolute_uri(),
45 "vendor": {"name": "Sentry", "url": "https://sentry.io"},
46 "authentication": {"type": "jwt"},
47 "lifecycle": {
48 "installed": "/extensions/jira/installed/",
49 "uninstalled": "/extensions/jira/uninstalled/",
50 },
51 "apiVersion": 1,
52 "modules": {
53 "postInstallPage": {
54 "url": "/extensions/jira/ui-hook/",
55 "name": {"value": "Configure Sentry Add-on"},
56 "key": "post-install-sentry",
57 },
58 "configurePage": {
59 "url": "/extensions/jira/ui-hook/",
60 "name": {"value": "Configure Sentry Add-on"},
61 "key": "configure-sentry",
62 },
63 "jiraIssueGlances": [
64 {
65 "icon": {"width": 24, "height": 24, "url": sentry_logo},
66 "content": {"type": "label", "label": {"value": "Linked Issues"}},
67 "target": {
68 "type": "web_panel",
69 "url": "/extensions/jira/issue/{issue.key}/",
70 },
71 "name": {"value": "Sentry "},
72 "key": "sentry-issues-glance",
73 }
74 ],
75 "webhooks": [
76 {
77 "event": "jira:issue_updated",
78 "url": reverse("sentry-extensions-jira-issue-updated"),
79 "excludeBody": False,
80 }
81 ],
82 },
83 "apiMigrations": {"gdpr": True, "context-qsh": True, "signed-install": True},
84 "scopes": scopes,
85 }
86 )
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/integrations/jira/endpoints/descriptor.py b/src/sentry/integrations/jira/endpoints/descriptor.py
--- a/src/sentry/integrations/jira/endpoints/descriptor.py
+++ b/src/sentry/integrations/jira/endpoints/descriptor.py
@@ -60,7 +60,7 @@
"name": {"value": "Configure Sentry Add-on"},
"key": "configure-sentry",
},
- "jiraIssueGlances": [
+ "jiraIssueContexts": [
{
"icon": {"width": 24, "height": 24, "url": sentry_logo},
"content": {"type": "label", "label": {"value": "Linked Issues"}},
| {"golden_diff": "diff --git a/src/sentry/integrations/jira/endpoints/descriptor.py b/src/sentry/integrations/jira/endpoints/descriptor.py\n--- a/src/sentry/integrations/jira/endpoints/descriptor.py\n+++ b/src/sentry/integrations/jira/endpoints/descriptor.py\n@@ -60,7 +60,7 @@\n \"name\": {\"value\": \"Configure Sentry Add-on\"},\n \"key\": \"configure-sentry\",\n },\n- \"jiraIssueGlances\": [\n+ \"jiraIssueContexts\": [\n {\n \"icon\": {\"width\": 24, \"height\": 24, \"url\": sentry_logo},\n \"content\": {\"type\": \"label\", \"label\": {\"value\": \"Linked Issues\"}},\n", "issue": "Jira deprecation of glance panels\nNotice from Atlassian Support team about glance panel deprecation. \r\n\r\nAC:\r\n- Review the deprecation plan\r\n- Build a recommendation based on how we're impacted. If minor development work is required, complete that with this ticket. If significant work is required, notify EM/PM to share impact and come up with next steps together.\r\n\r\nEmail from Atlassian:\r\n```\r\nHope you are having a good day!\r\nAs part of this deprecation notice (https://developer.atlassian.com/cloud/jira/platform/changelog/#CHANGE-897), we are reaching out because we have identified that your app, \u201cSentry,\u201d will be affected by the deprecation of glance panels. \r\nThis was initially scheduled for the 6th of October, but we have delayed it until the 30th of November.\r\nThe jiraIssueGlances and jira:issueGlance modules in Forge (https://developer.atlassian.com/platform/forge/manifest-reference/modules/jira-issue-glance/) and Connect (https://developer.atlassian.com/cloud/jira/platform/modules/issue-glance/) are being deprecated and replaced with the issueContext module. \r\nWe recommend transitioning from the glance panel to the new issue context module before the 30th of November. \r\nPlease note, we will not be extending this deprecation date as we announced it on the 30th of March.\r\nLet me know if you need any further assistance,\r\nAhmud\r\nProduct Manager-Jira Cloud\r\n```\n", "before_files": [{"content": "from django.conf import settings\nfrom django.urls import reverse\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\n\nfrom sentry.api.api_publish_status import ApiPublishStatus\nfrom sentry.api.base import Endpoint, control_silo_endpoint\nfrom sentry.utils.assets import get_frontend_app_asset_url\nfrom sentry.utils.http import absolute_uri\n\nfrom .. import JIRA_KEY\n\nscopes = [\"read\", \"write\", \"act_as_user\"]\n# For Jira, only approved apps can use the access_email_addresses scope\n# This scope allows Sentry to use the email endpoint (https://developer.atlassian.com/cloud/jira/platform/rest/v3/#api-rest-api-3-user-email-get)\n# We use the email with Jira 2-way sync in order to match the user\nif settings.JIRA_USE_EMAIL_SCOPE:\n scopes.append(\"access_email_addresses\")\n\n\n@control_silo_endpoint\nclass JiraDescriptorEndpoint(Endpoint):\n publish_status = {\n \"GET\": ApiPublishStatus.UNKNOWN,\n }\n \"\"\"\n Provides the metadata needed by Jira to setup an instance of the Sentry integration within Jira.\n Only used by on-prem orgs and devs setting up local instances of the integration. 
(Sentry SAAS\n already has an established, official instance of the Sentry integration registered with Jira.)\n \"\"\"\n\n authentication_classes = ()\n permission_classes = ()\n\n def get(self, request: Request) -> Response:\n sentry_logo = absolute_uri(\n get_frontend_app_asset_url(\"sentry\", \"entrypoints/logo-sentry.svg\")\n )\n return self.respond(\n {\n \"name\": \"Sentry\",\n \"description\": \"Connect your Sentry organization to one or more of your Jira cloud instances. Get started streamlining your bug-squashing workflow by allowing your Sentry and Jira instances to work together.\",\n \"key\": JIRA_KEY,\n \"baseUrl\": absolute_uri(),\n \"vendor\": {\"name\": \"Sentry\", \"url\": \"https://sentry.io\"},\n \"authentication\": {\"type\": \"jwt\"},\n \"lifecycle\": {\n \"installed\": \"/extensions/jira/installed/\",\n \"uninstalled\": \"/extensions/jira/uninstalled/\",\n },\n \"apiVersion\": 1,\n \"modules\": {\n \"postInstallPage\": {\n \"url\": \"/extensions/jira/ui-hook/\",\n \"name\": {\"value\": \"Configure Sentry Add-on\"},\n \"key\": \"post-install-sentry\",\n },\n \"configurePage\": {\n \"url\": \"/extensions/jira/ui-hook/\",\n \"name\": {\"value\": \"Configure Sentry Add-on\"},\n \"key\": \"configure-sentry\",\n },\n \"jiraIssueGlances\": [\n {\n \"icon\": {\"width\": 24, \"height\": 24, \"url\": sentry_logo},\n \"content\": {\"type\": \"label\", \"label\": {\"value\": \"Linked Issues\"}},\n \"target\": {\n \"type\": \"web_panel\",\n \"url\": \"/extensions/jira/issue/{issue.key}/\",\n },\n \"name\": {\"value\": \"Sentry \"},\n \"key\": \"sentry-issues-glance\",\n }\n ],\n \"webhooks\": [\n {\n \"event\": \"jira:issue_updated\",\n \"url\": reverse(\"sentry-extensions-jira-issue-updated\"),\n \"excludeBody\": False,\n }\n ],\n },\n \"apiMigrations\": {\"gdpr\": True, \"context-qsh\": True, \"signed-install\": True},\n \"scopes\": scopes,\n }\n )\n", "path": "src/sentry/integrations/jira/endpoints/descriptor.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.urls import reverse\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\n\nfrom sentry.api.api_publish_status import ApiPublishStatus\nfrom sentry.api.base import Endpoint, control_silo_endpoint\nfrom sentry.utils.assets import get_frontend_app_asset_url\nfrom sentry.utils.http import absolute_uri\n\nfrom .. import JIRA_KEY\n\nscopes = [\"read\", \"write\", \"act_as_user\"]\n# For Jira, only approved apps can use the access_email_addresses scope\n# This scope allows Sentry to use the email endpoint (https://developer.atlassian.com/cloud/jira/platform/rest/v3/#api-rest-api-3-user-email-get)\n# We use the email with Jira 2-way sync in order to match the user\nif settings.JIRA_USE_EMAIL_SCOPE:\n scopes.append(\"access_email_addresses\")\n\n\n@control_silo_endpoint\nclass JiraDescriptorEndpoint(Endpoint):\n publish_status = {\n \"GET\": ApiPublishStatus.UNKNOWN,\n }\n \"\"\"\n Provides the metadata needed by Jira to setup an instance of the Sentry integration within Jira.\n Only used by on-prem orgs and devs setting up local instances of the integration. 
(Sentry SAAS\n already has an established, official instance of the Sentry integration registered with Jira.)\n \"\"\"\n\n authentication_classes = ()\n permission_classes = ()\n\n def get(self, request: Request) -> Response:\n sentry_logo = absolute_uri(\n get_frontend_app_asset_url(\"sentry\", \"entrypoints/logo-sentry.svg\")\n )\n return self.respond(\n {\n \"name\": \"Sentry\",\n \"description\": \"Connect your Sentry organization to one or more of your Jira cloud instances. Get started streamlining your bug-squashing workflow by allowing your Sentry and Jira instances to work together.\",\n \"key\": JIRA_KEY,\n \"baseUrl\": absolute_uri(),\n \"vendor\": {\"name\": \"Sentry\", \"url\": \"https://sentry.io\"},\n \"authentication\": {\"type\": \"jwt\"},\n \"lifecycle\": {\n \"installed\": \"/extensions/jira/installed/\",\n \"uninstalled\": \"/extensions/jira/uninstalled/\",\n },\n \"apiVersion\": 1,\n \"modules\": {\n \"postInstallPage\": {\n \"url\": \"/extensions/jira/ui-hook/\",\n \"name\": {\"value\": \"Configure Sentry Add-on\"},\n \"key\": \"post-install-sentry\",\n },\n \"configurePage\": {\n \"url\": \"/extensions/jira/ui-hook/\",\n \"name\": {\"value\": \"Configure Sentry Add-on\"},\n \"key\": \"configure-sentry\",\n },\n \"jiraIssueContexts\": [\n {\n \"icon\": {\"width\": 24, \"height\": 24, \"url\": sentry_logo},\n \"content\": {\"type\": \"label\", \"label\": {\"value\": \"Linked Issues\"}},\n \"target\": {\n \"type\": \"web_panel\",\n \"url\": \"/extensions/jira/issue/{issue.key}/\",\n },\n \"name\": {\"value\": \"Sentry \"},\n \"key\": \"sentry-issues-glance\",\n }\n ],\n \"webhooks\": [\n {\n \"event\": \"jira:issue_updated\",\n \"url\": reverse(\"sentry-extensions-jira-issue-updated\"),\n \"excludeBody\": False,\n }\n ],\n },\n \"apiMigrations\": {\"gdpr\": True, \"context-qsh\": True, \"signed-install\": True},\n \"scopes\": scopes,\n }\n )\n", "path": "src/sentry/integrations/jira/endpoints/descriptor.py"}]} | 1,499 | 171 |
gh_patches_debug_21950 | rasdani/github-patches | git_diff | cornellius-gp__gpytorch-1670 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] The Added Loss term for InducingKernel seems flipped in sign?
# 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
```
def loss(self, *params):
prior_covar = self.prior_dist.lazy_covariance_matrix
variational_covar = self.variational_dist.lazy_covariance_matrix
diag = prior_covar.diag() - variational_covar.diag()
shape = prior_covar.shape[:-1]
noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()
return 0.5 * (diag / noise_diag).sum()
```
This is the current code for InducingPointKernelAddedLossTerm.loss
From what I see, this "loss term" is added into the mll that is returned by the `ExactMarginalLogLikelihood` class. This in itself is misleading, as the loss is usually the negative of the mll.
Moreover, the variational negative loss used to evaluate inducing points is given below

As can be seen, the above is the expression for the pseudo-mll that is maximized when optimizing the inducing points. In it, the component corresponding to `InducingPointKernelAddedLossTerm` is the negative of the value that the current code adds into the loss.
This is quite likely a significant bug. Please fix (just invert the sign of `diag` above)
--- END ISSUE ---
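As a quick numerical sanity check of the sign argument above, here is a toy sketch that assumes the standard sparse-GP correction term of -0.5 * tr(K - Q) / sigma^2 (the values are made up):
```python
# Toy check: the term added to the marginal log likelihood should be non-positive,
# because diag(K) >= diag(Q) elementwise for the Nystrom approximation Q.
import torch

K_diag = torch.tensor([1.0, 1.0, 1.0])   # prior covariance diagonal
Q_diag = torch.tensor([0.7, 0.9, 0.5])   # variational (Nystrom) approximation diagonal
noise = 0.1

added_to_mll = -0.5 * ((K_diag - Q_diag) / noise).sum()
print(added_to_mll)  # tensor(-4.5000): non-positive, opposite in sign to the current loss()
```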
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gpytorch/mlls/inducing_point_kernel_added_loss_term.py`
Content:
```
1 #!/usr/bin/env python3
2
3 from .added_loss_term import AddedLossTerm
4
5
6 class InducingPointKernelAddedLossTerm(AddedLossTerm):
7 def __init__(self, variational_dist, prior_dist, likelihood):
8 self.prior_dist = prior_dist
9 self.variational_dist = variational_dist
10 self.likelihood = likelihood
11
12 def loss(self, *params):
13 prior_covar = self.prior_dist.lazy_covariance_matrix
14 variational_covar = self.variational_dist.lazy_covariance_matrix
15 diag = prior_covar.diag() - variational_covar.diag()
16 shape = prior_covar.shape[:-1]
17 noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()
18 return 0.5 * (diag / noise_diag).sum()
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gpytorch/mlls/inducing_point_kernel_added_loss_term.py b/gpytorch/mlls/inducing_point_kernel_added_loss_term.py
--- a/gpytorch/mlls/inducing_point_kernel_added_loss_term.py
+++ b/gpytorch/mlls/inducing_point_kernel_added_loss_term.py
@@ -4,7 +4,7 @@
class InducingPointKernelAddedLossTerm(AddedLossTerm):
- def __init__(self, variational_dist, prior_dist, likelihood):
+ def __init__(self, prior_dist, variational_dist, likelihood):
self.prior_dist = prior_dist
self.variational_dist = variational_dist
self.likelihood = likelihood
@@ -12,7 +12,7 @@
def loss(self, *params):
prior_covar = self.prior_dist.lazy_covariance_matrix
variational_covar = self.variational_dist.lazy_covariance_matrix
- diag = prior_covar.diag() - variational_covar.diag()
+ diag = variational_covar.diag() - prior_covar.diag()
shape = prior_covar.shape[:-1]
noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()
return 0.5 * (diag / noise_diag).sum()
| {"golden_diff": "diff --git a/gpytorch/mlls/inducing_point_kernel_added_loss_term.py b/gpytorch/mlls/inducing_point_kernel_added_loss_term.py\n--- a/gpytorch/mlls/inducing_point_kernel_added_loss_term.py\n+++ b/gpytorch/mlls/inducing_point_kernel_added_loss_term.py\n@@ -4,7 +4,7 @@\n \n \n class InducingPointKernelAddedLossTerm(AddedLossTerm):\n- def __init__(self, variational_dist, prior_dist, likelihood):\n+ def __init__(self, prior_dist, variational_dist, likelihood):\n self.prior_dist = prior_dist\n self.variational_dist = variational_dist\n self.likelihood = likelihood\n@@ -12,7 +12,7 @@\n def loss(self, *params):\n prior_covar = self.prior_dist.lazy_covariance_matrix\n variational_covar = self.variational_dist.lazy_covariance_matrix\n- diag = prior_covar.diag() - variational_covar.diag()\n+ diag = variational_covar.diag() - prior_covar.diag()\n shape = prior_covar.shape[:-1]\n noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()\n return 0.5 * (diag / noise_diag).sum()\n", "issue": "[Bug] The Added Loss term for InducingKernel seems flipped in sign?\n# \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n```\r\n def loss(self, *params):\r\n prior_covar = self.prior_dist.lazy_covariance_matrix\r\n variational_covar = self.variational_dist.lazy_covariance_matrix\r\n diag = prior_covar.diag() - variational_covar.diag()\r\n shape = prior_covar.shape[:-1]\r\n noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()\r\n return 0.5 * (diag / noise_diag).sum()\r\n```\r\nThis is the current code for InducingPointKernelAddedLossTerm.loss\r\n\r\nFrom what I see, this \"loss term\" is added into the mll that is returned by the `ExactMarginalLogLikelihood` class. This in itself is misleading as the loss is usually the negative of the mll.\r\n\r\nMoreover, the variational negative loss used to evaluate inducing points is given below\r\n\r\n\r\n\r\nAs can be seen, the above is the expression for the pseudo-mll that is maximized when optimizing the inducing points. in this, the component of `InducingPointKernelAddedLossTerm` is negative to the value that is being added into the loss.\r\n\r\nThis is quite likely a significant bug. 
Please fix (just invert the sign of `diag` above)\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom .added_loss_term import AddedLossTerm\n\n\nclass InducingPointKernelAddedLossTerm(AddedLossTerm):\n def __init__(self, variational_dist, prior_dist, likelihood):\n self.prior_dist = prior_dist\n self.variational_dist = variational_dist\n self.likelihood = likelihood\n\n def loss(self, *params):\n prior_covar = self.prior_dist.lazy_covariance_matrix\n variational_covar = self.variational_dist.lazy_covariance_matrix\n diag = prior_covar.diag() - variational_covar.diag()\n shape = prior_covar.shape[:-1]\n noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()\n return 0.5 * (diag / noise_diag).sum()\n", "path": "gpytorch/mlls/inducing_point_kernel_added_loss_term.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom .added_loss_term import AddedLossTerm\n\n\nclass InducingPointKernelAddedLossTerm(AddedLossTerm):\n def __init__(self, prior_dist, variational_dist, likelihood):\n self.prior_dist = prior_dist\n self.variational_dist = variational_dist\n self.likelihood = likelihood\n\n def loss(self, *params):\n prior_covar = self.prior_dist.lazy_covariance_matrix\n variational_covar = self.variational_dist.lazy_covariance_matrix\n diag = variational_covar.diag() - prior_covar.diag()\n shape = prior_covar.shape[:-1]\n noise_diag = self.likelihood._shaped_noise_covar(shape, *params).diag()\n return 0.5 * (diag / noise_diag).sum()\n", "path": "gpytorch/mlls/inducing_point_kernel_added_loss_term.py"}]} | 824 | 284 |
gh_patches_debug_3955 | rasdani/github-patches | git_diff | facebookresearch__CompilerGym-656 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cache path inconsistency in cc and python?
I noticed that these two files are using different conventions for creating cache paths. The cc file says the python file should align with the cc file, but they are still different. Is this intended?
https://github.com/facebookresearch/CompilerGym/blob/61f460fadee2454ff8fca3bbd5a5d338854cc4a2/compiler_gym/util/runfiles_path.py#L101-L105
https://github.com/facebookresearch/CompilerGym/blob/1596776ad35a7aeca45ed85b2e073af824844e29/compiler_gym/util/RunfilesPath.cc#L61-L65
--- END ISSUE ---
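For reference, the Python-side resolution order boils down to the sketch below (condensed from the `cache_path` function shown later; the `~/.cache/compiler_gym` default on the HOME branch is the part that apparently diverges from the C++ convention):
```python
import os
from getpass import getuser
from pathlib import Path


def cache_root() -> Path:
    # Resolution order: explicit override, per-user home cache, then /tmp fallback.
    forced = os.environ.get("COMPILER_GYM_CACHE")
    if forced:
        return Path(forced)
    if os.environ.get("HOME"):
        return Path("~/.cache/compiler_gym").expanduser()  # the Python-side default
    return Path(f"/tmp/compiler_gym_{getuser()}/cache")


print(cache_root())
```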
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `compiler_gym/util/runfiles_path.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5 """Module for resolving a runfiles path."""
6 import os
7 from datetime import datetime
8 from getpass import getuser
9 from pathlib import Path
10 from threading import Lock
11 from time import sleep
12 from typing import Optional
13
14 # NOTE(cummins): Moving this file may require updating this relative path.
15 _PACKAGE_ROOT = Path(os.path.join(os.path.dirname(__file__), "../../")).resolve(
16 strict=True
17 )
18
19 _CREATE_LOGGING_DIR_LOCK = Lock()
20
21
22 def runfiles_path(relpath: str) -> Path:
23 """Resolve the path to a runfiles data path.
24
25 No checks are to made to ensure that the path, or the containing directory,
26 exist.
27
28 Use environment variable COMPILER_GYM_RUNFILES=/path/to/runfiles if running
29 outside of bazel.
30
31 :param relpath: The relative path within the runfiles tree.
32
33 :return: An absolute path.
34 """
35 # There are three ways of determining a runfiles path:
36 # 1. Set the COMPILER_GYM_RUNFILES environment variable.
37 # 2. Using the rules_python library that is provided by bazel. This will
38 # fail if not being executed within a bazel sandbox.
39 # 3. Computing the path relative to the location of this file. This is the
40 # fallback approach that is used for when the code has been installed
41 # by setuptools.
42 runfiles_path = os.environ.get("COMPILER_GYM_RUNFILES")
43 if runfiles_path:
44 return Path(runfiles_path) / relpath
45 else:
46 try:
47 from rules_python.python.runfiles import runfiles
48
49 return Path(
50 runfiles.Create().Rlocation(
51 "CompilerGym" if relpath == "." else f"CompilerGym/{relpath}"
52 )
53 )
54 except (ModuleNotFoundError, TypeError):
55 return _PACKAGE_ROOT / relpath
56
57
58 def site_data_path(relpath: str) -> Path:
59 """Return a path within the site data directory.
60
61 CompilerGym uses a directory to store persistent site data files in, such as benchmark datasets.
62 The default location is :code:`~/.local/share/compiler_gym`. Set the environment variable
63 :code:`$COMPILER_GYM_SITE_DATA` to override this default location.
64
65 No checks are to made to ensure that the path, or the containing directory,
66 exist.
67
68 :param relpath: The relative path within the site data tree.
69
70 :return: An absolute path.
71 """
72 # NOTE(cummins): This function has a matching implementation in the C++
73 # sources, compiler_gym::service::getSiteDataPath(). Any change to behavior
74 # here must be reflected in the C++ version.
75 forced = os.environ.get("COMPILER_GYM_SITE_DATA")
76 if forced:
77 return Path(forced) / relpath
78 elif os.environ.get("HOME"):
79 return Path("~/.local/share/compiler_gym").expanduser() / relpath
80 else:
81 return Path(f"/tmp/compiler_gym_{getuser()}/site_data") / relpath
82
83
84 def cache_path(relpath: str) -> Path:
85 """Return a path within the cache directory.
86
87 CompilerGym uses a directory to cache files in, such as downloaded content.
88 The default location for this cache is :code:`~/.cache/compiler_gym`. Set
89 the environment variable :code:`$COMPILER_GYM_CACHE` to override this
90 default location.
91
92 No checks are to made to ensure that the path, or the containing directory,
93 exist.
94
95 :param relpath: The relative path within the cache tree.
96
97 :return: An absolute path.
98 """
99 forced = os.environ.get("COMPILER_GYM_CACHE")
100 if forced:
101 return Path(forced) / relpath
102 elif os.environ.get("HOME"):
103 return Path("~/.cache/compiler_gym").expanduser() / relpath
104 else:
105 return Path(f"/tmp/compiler_gym_{getuser()}/cache") / relpath
106
107
108 def transient_cache_path(relpath: str) -> Path:
109 """Return a path within the transient cache directory.
110
111 The transient cache is a directory used to store files that do not need to
112 persist beyond the lifetime of the current process. When available, the
113 temporary filesystem :code:`/dev/shm` will be used. Else,
114 :meth:`cache_path() <compiler_gym.cache_path>` is used as a fallback. Set
115 the environment variable :code:`$COMPILER_GYM_TRANSIENT_CACHE` to override
116 the default location.
117
118 No checks are to made to ensure that the path, or the containing directory,
119 exist.
120
121 :param relpath: The relative path within the cache tree.
122
123 :return: An absolute path.
124 """
125 forced = os.environ.get("COMPILER_GYM_TRANSIENT_CACHE")
126 if forced:
127 return Path(forced) / relpath
128 elif Path("/dev/shm").is_dir():
129 return Path(f"/dev/shm/compiler_gym_{getuser()}") / relpath
130 else:
131 # Fallback to using the regular cache.
132 return cache_path(relpath)
133
134
135 def create_user_logs_dir(name: str, dir: Optional[Path] = None) -> Path:
136 """Create a directory for writing logs to.
137
138 Defaults to ~/logs/compiler_gym base directory, set the
139 :code:`COMPILER_GYM_LOGS` environment variable to override this.
140
141 Example use:
142
143 >>> create_user_logs_dir("my_experiment")
144 Path("~/logs/compiler_gym/my_experiment/2020-11-03T11:00:00")
145
146 :param name: The grouping name for the logs.
147
148 :return: A unique timestamped directory for logging. This directory exists.
149 """
150 base_dir = Path(
151 os.environ.get("COMPILER_GYM_LOGS", dir or "~/logs/compiler_gym")
152 ).expanduser()
153 group_dir = base_dir / name
154
155 with _CREATE_LOGGING_DIR_LOCK:
156 # Require that logging directory timestamps are unique by waiting until
157 # a unique timestamp is generated.
158 while True:
159 now = datetime.now()
160 subdirs = now.strftime("%Y-%m-%d/%H-%M-%S")
161
162 logs_dir = group_dir / subdirs
163 if logs_dir.is_dir():
164 sleep(0.3)
165 continue
166
167 logs_dir.mkdir(parents=True, exist_ok=False)
168
169 # Create a symlink to the "latest" logs results.
170 if (group_dir / "latest").exists():
171 os.unlink(group_dir / "latest")
172 os.symlink(subdirs, group_dir / "latest")
173
174 return logs_dir
175
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/compiler_gym/util/runfiles_path.py b/compiler_gym/util/runfiles_path.py
--- a/compiler_gym/util/runfiles_path.py
+++ b/compiler_gym/util/runfiles_path.py
@@ -100,7 +100,7 @@
if forced:
return Path(forced) / relpath
elif os.environ.get("HOME"):
- return Path("~/.cache/compiler_gym").expanduser() / relpath
+ return Path("~/.local/cache/compiler_gym").expanduser() / relpath
else:
return Path(f"/tmp/compiler_gym_{getuser()}/cache") / relpath
| {"golden_diff": "diff --git a/compiler_gym/util/runfiles_path.py b/compiler_gym/util/runfiles_path.py\n--- a/compiler_gym/util/runfiles_path.py\n+++ b/compiler_gym/util/runfiles_path.py\n@@ -100,7 +100,7 @@\n if forced:\n return Path(forced) / relpath\n elif os.environ.get(\"HOME\"):\n- return Path(\"~/.cache/compiler_gym\").expanduser() / relpath\n+ return Path(\"~/.local/cache/compiler_gym\").expanduser() / relpath\n else:\n return Path(f\"/tmp/compiler_gym_{getuser()}/cache\") / relpath\n", "issue": "Cache path inconsistency in cc and python?\nI noticed that theses two files are using different conventions for creating cache paths. The cc file says the python file should align with the cc file, but they are still different. Is this intended?\r\n\r\nhttps://github.com/facebookresearch/CompilerGym/blob/61f460fadee2454ff8fca3bbd5a5d338854cc4a2/compiler_gym/util/runfiles_path.py#L101-L105\r\n\r\nhttps://github.com/facebookresearch/CompilerGym/blob/1596776ad35a7aeca45ed85b2e073af824844e29/compiler_gym/util/RunfilesPath.cc#L61-L65\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"Module for resolving a runfiles path.\"\"\"\nimport os\nfrom datetime import datetime\nfrom getpass import getuser\nfrom pathlib import Path\nfrom threading import Lock\nfrom time import sleep\nfrom typing import Optional\n\n# NOTE(cummins): Moving this file may require updating this relative path.\n_PACKAGE_ROOT = Path(os.path.join(os.path.dirname(__file__), \"../../\")).resolve(\n strict=True\n)\n\n_CREATE_LOGGING_DIR_LOCK = Lock()\n\n\ndef runfiles_path(relpath: str) -> Path:\n \"\"\"Resolve the path to a runfiles data path.\n\n No checks are to made to ensure that the path, or the containing directory,\n exist.\n\n Use environment variable COMPILER_GYM_RUNFILES=/path/to/runfiles if running\n outside of bazel.\n\n :param relpath: The relative path within the runfiles tree.\n\n :return: An absolute path.\n \"\"\"\n # There are three ways of determining a runfiles path:\n # 1. Set the COMPILER_GYM_RUNFILES environment variable.\n # 2. Using the rules_python library that is provided by bazel. This will\n # fail if not being executed within a bazel sandbox.\n # 3. Computing the path relative to the location of this file. This is the\n # fallback approach that is used for when the code has been installed\n # by setuptools.\n runfiles_path = os.environ.get(\"COMPILER_GYM_RUNFILES\")\n if runfiles_path:\n return Path(runfiles_path) / relpath\n else:\n try:\n from rules_python.python.runfiles import runfiles\n\n return Path(\n runfiles.Create().Rlocation(\n \"CompilerGym\" if relpath == \".\" else f\"CompilerGym/{relpath}\"\n )\n )\n except (ModuleNotFoundError, TypeError):\n return _PACKAGE_ROOT / relpath\n\n\ndef site_data_path(relpath: str) -> Path:\n \"\"\"Return a path within the site data directory.\n\n CompilerGym uses a directory to store persistent site data files in, such as benchmark datasets.\n The default location is :code:`~/.local/share/compiler_gym`. 
Set the environment variable\n :code:`$COMPILER_GYM_SITE_DATA` to override this default location.\n\n No checks are to made to ensure that the path, or the containing directory,\n exist.\n\n :param relpath: The relative path within the site data tree.\n\n :return: An absolute path.\n \"\"\"\n # NOTE(cummins): This function has a matching implementation in the C++\n # sources, compiler_gym::service::getSiteDataPath(). Any change to behavior\n # here must be reflected in the C++ version.\n forced = os.environ.get(\"COMPILER_GYM_SITE_DATA\")\n if forced:\n return Path(forced) / relpath\n elif os.environ.get(\"HOME\"):\n return Path(\"~/.local/share/compiler_gym\").expanduser() / relpath\n else:\n return Path(f\"/tmp/compiler_gym_{getuser()}/site_data\") / relpath\n\n\ndef cache_path(relpath: str) -> Path:\n \"\"\"Return a path within the cache directory.\n\n CompilerGym uses a directory to cache files in, such as downloaded content.\n The default location for this cache is :code:`~/.cache/compiler_gym`. Set\n the environment variable :code:`$COMPILER_GYM_CACHE` to override this\n default location.\n\n No checks are to made to ensure that the path, or the containing directory,\n exist.\n\n :param relpath: The relative path within the cache tree.\n\n :return: An absolute path.\n \"\"\"\n forced = os.environ.get(\"COMPILER_GYM_CACHE\")\n if forced:\n return Path(forced) / relpath\n elif os.environ.get(\"HOME\"):\n return Path(\"~/.cache/compiler_gym\").expanduser() / relpath\n else:\n return Path(f\"/tmp/compiler_gym_{getuser()}/cache\") / relpath\n\n\ndef transient_cache_path(relpath: str) -> Path:\n \"\"\"Return a path within the transient cache directory.\n\n The transient cache is a directory used to store files that do not need to\n persist beyond the lifetime of the current process. When available, the\n temporary filesystem :code:`/dev/shm` will be used. Else,\n :meth:`cache_path() <compiler_gym.cache_path>` is used as a fallback. Set\n the environment variable :code:`$COMPILER_GYM_TRANSIENT_CACHE` to override\n the default location.\n\n No checks are to made to ensure that the path, or the containing directory,\n exist.\n\n :param relpath: The relative path within the cache tree.\n\n :return: An absolute path.\n \"\"\"\n forced = os.environ.get(\"COMPILER_GYM_TRANSIENT_CACHE\")\n if forced:\n return Path(forced) / relpath\n elif Path(\"/dev/shm\").is_dir():\n return Path(f\"/dev/shm/compiler_gym_{getuser()}\") / relpath\n else:\n # Fallback to using the regular cache.\n return cache_path(relpath)\n\n\ndef create_user_logs_dir(name: str, dir: Optional[Path] = None) -> Path:\n \"\"\"Create a directory for writing logs to.\n\n Defaults to ~/logs/compiler_gym base directory, set the\n :code:`COMPILER_GYM_LOGS` environment variable to override this.\n\n Example use:\n\n >>> create_user_logs_dir(\"my_experiment\")\n Path(\"~/logs/compiler_gym/my_experiment/2020-11-03T11:00:00\")\n\n :param name: The grouping name for the logs.\n\n :return: A unique timestamped directory for logging. 
This directory exists.\n \"\"\"\n base_dir = Path(\n os.environ.get(\"COMPILER_GYM_LOGS\", dir or \"~/logs/compiler_gym\")\n ).expanduser()\n group_dir = base_dir / name\n\n with _CREATE_LOGGING_DIR_LOCK:\n # Require that logging directory timestamps are unique by waiting until\n # a unique timestamp is generated.\n while True:\n now = datetime.now()\n subdirs = now.strftime(\"%Y-%m-%d/%H-%M-%S\")\n\n logs_dir = group_dir / subdirs\n if logs_dir.is_dir():\n sleep(0.3)\n continue\n\n logs_dir.mkdir(parents=True, exist_ok=False)\n\n # Create a symlink to the \"latest\" logs results.\n if (group_dir / \"latest\").exists():\n os.unlink(group_dir / \"latest\")\n os.symlink(subdirs, group_dir / \"latest\")\n\n return logs_dir\n", "path": "compiler_gym/util/runfiles_path.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"Module for resolving a runfiles path.\"\"\"\nimport os\nfrom datetime import datetime\nfrom getpass import getuser\nfrom pathlib import Path\nfrom threading import Lock\nfrom time import sleep\nfrom typing import Optional\n\n# NOTE(cummins): Moving this file may require updating this relative path.\n_PACKAGE_ROOT = Path(os.path.join(os.path.dirname(__file__), \"../../\")).resolve(\n strict=True\n)\n\n_CREATE_LOGGING_DIR_LOCK = Lock()\n\n\ndef runfiles_path(relpath: str) -> Path:\n \"\"\"Resolve the path to a runfiles data path.\n\n No checks are to made to ensure that the path, or the containing directory,\n exist.\n\n Use environment variable COMPILER_GYM_RUNFILES=/path/to/runfiles if running\n outside of bazel.\n\n :param relpath: The relative path within the runfiles tree.\n\n :return: An absolute path.\n \"\"\"\n # There are three ways of determining a runfiles path:\n # 1. Set the COMPILER_GYM_RUNFILES environment variable.\n # 2. Using the rules_python library that is provided by bazel. This will\n # fail if not being executed within a bazel sandbox.\n # 3. Computing the path relative to the location of this file. This is the\n # fallback approach that is used for when the code has been installed\n # by setuptools.\n runfiles_path = os.environ.get(\"COMPILER_GYM_RUNFILES\")\n if runfiles_path:\n return Path(runfiles_path) / relpath\n else:\n try:\n from rules_python.python.runfiles import runfiles\n\n return Path(\n runfiles.Create().Rlocation(\n \"CompilerGym\" if relpath == \".\" else f\"CompilerGym/{relpath}\"\n )\n )\n except (ModuleNotFoundError, TypeError):\n return _PACKAGE_ROOT / relpath\n\n\ndef site_data_path(relpath: str) -> Path:\n \"\"\"Return a path within the site data directory.\n\n CompilerGym uses a directory to store persistent site data files in, such as benchmark datasets.\n The default location is :code:`~/.local/share/compiler_gym`. Set the environment variable\n :code:`$COMPILER_GYM_SITE_DATA` to override this default location.\n\n No checks are to made to ensure that the path, or the containing directory,\n exist.\n\n :param relpath: The relative path within the site data tree.\n\n :return: An absolute path.\n \"\"\"\n # NOTE(cummins): This function has a matching implementation in the C++\n # sources, compiler_gym::service::getSiteDataPath(). 
Any change to behavior\n # here must be reflected in the C++ version.\n forced = os.environ.get(\"COMPILER_GYM_SITE_DATA\")\n if forced:\n return Path(forced) / relpath\n elif os.environ.get(\"HOME\"):\n return Path(\"~/.local/share/compiler_gym\").expanduser() / relpath\n else:\n return Path(f\"/tmp/compiler_gym_{getuser()}/site_data\") / relpath\n\n\ndef cache_path(relpath: str) -> Path:\n \"\"\"Return a path within the cache directory.\n\n CompilerGym uses a directory to cache files in, such as downloaded content.\n The default location for this cache is :code:`~/.cache/compiler_gym`. Set\n the environment variable :code:`$COMPILER_GYM_CACHE` to override this\n default location.\n\n No checks are to made to ensure that the path, or the containing directory,\n exist.\n\n :param relpath: The relative path within the cache tree.\n\n :return: An absolute path.\n \"\"\"\n forced = os.environ.get(\"COMPILER_GYM_CACHE\")\n if forced:\n return Path(forced) / relpath\n elif os.environ.get(\"HOME\"):\n return Path(\"~/.local/cache/compiler_gym\").expanduser() / relpath\n else:\n return Path(f\"/tmp/compiler_gym_{getuser()}/cache\") / relpath\n\n\ndef transient_cache_path(relpath: str) -> Path:\n \"\"\"Return a path within the transient cache directory.\n\n The transient cache is a directory used to store files that do not need to\n persist beyond the lifetime of the current process. When available, the\n temporary filesystem :code:`/dev/shm` will be used. Else,\n :meth:`cache_path() <compiler_gym.cache_path>` is used as a fallback. Set\n the environment variable :code:`$COMPILER_GYM_TRANSIENT_CACHE` to override\n the default location.\n\n No checks are to made to ensure that the path, or the containing directory,\n exist.\n\n :param relpath: The relative path within the cache tree.\n\n :return: An absolute path.\n \"\"\"\n forced = os.environ.get(\"COMPILER_GYM_TRANSIENT_CACHE\")\n if forced:\n return Path(forced) / relpath\n elif Path(\"/dev/shm\").is_dir():\n return Path(f\"/dev/shm/compiler_gym_{getuser()}\") / relpath\n else:\n # Fallback to using the regular cache.\n return cache_path(relpath)\n\n\ndef create_user_logs_dir(name: str, dir: Optional[Path] = None) -> Path:\n \"\"\"Create a directory for writing logs to.\n\n Defaults to ~/logs/compiler_gym base directory, set the\n :code:`COMPILER_GYM_LOGS` environment variable to override this.\n\n Example use:\n\n >>> create_user_logs_dir(\"my_experiment\")\n Path(\"~/logs/compiler_gym/my_experiment/2020-11-03T11:00:00\")\n\n :param name: The grouping name for the logs.\n\n :return: A unique timestamped directory for logging. This directory exists.\n \"\"\"\n base_dir = Path(\n os.environ.get(\"COMPILER_GYM_LOGS\", dir or \"~/logs/compiler_gym\")\n ).expanduser()\n group_dir = base_dir / name\n\n with _CREATE_LOGGING_DIR_LOCK:\n # Require that logging directory timestamps are unique by waiting until\n # a unique timestamp is generated.\n while True:\n now = datetime.now()\n subdirs = now.strftime(\"%Y-%m-%d/%H-%M-%S\")\n\n logs_dir = group_dir / subdirs\n if logs_dir.is_dir():\n sleep(0.3)\n continue\n\n logs_dir.mkdir(parents=True, exist_ok=False)\n\n # Create a symlink to the \"latest\" logs results.\n if (group_dir / \"latest\").exists():\n os.unlink(group_dir / \"latest\")\n os.symlink(subdirs, group_dir / \"latest\")\n\n return logs_dir\n", "path": "compiler_gym/util/runfiles_path.py"}]} | 2,349 | 140 |
gh_patches_debug_35035 | rasdani/github-patches | git_diff | python-pillow__Pillow-220 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for RGBA webp image encoding and decoding
Would it be possible to wrap the `WebPEncodeRGBA` and `WebPDecodeRGBA` functionality of the webp library inside Pillow?
--- END ISSUE ---
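In user-facing terms, the request is for a round trip like the following to preserve the alpha channel (an illustrative sketch; with the current plugin the save call raises `IOError("cannot write mode RGBA as WEBP")`):
```python
from io import BytesIO

from PIL import Image

im = Image.new("RGBA", (64, 64), (255, 0, 0, 128))  # half-transparent red square
buf = BytesIO()
im.save(buf, "WEBP", quality=80)                     # currently rejected: RGBA is not a supported WEBP mode

buf.seek(0)
round_tripped = Image.open(buf)
print(round_tripped.mode)                            # expected once supported: "RGBA"
```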
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `selftest.py`
Content:
```
1 # minimal sanity check
2 from __future__ import print_function
3 ROOT = "."
4
5 import os, sys
6 sys.path.insert(0, ROOT)
7
8 from PIL import Image
9 from PIL import ImageDraw
10 from PIL import ImageFilter
11 from PIL import ImageMath
12
13 try:
14 Image.core.ping
15 except ImportError as v:
16 print("***", v)
17 sys.exit()
18 except AttributeError:
19 pass
20
21 def _info(im):
22 im.load()
23 return im.format, im.mode, im.size
24
25 def testimage():
26 """
27 PIL lets you create in-memory images with various pixel types:
28
29 >>> im = Image.new("1", (128, 128)) # monochrome
30 >>> _info(im)
31 (None, '1', (128, 128))
32 >>> _info(Image.new("L", (128, 128))) # grayscale (luminance)
33 (None, 'L', (128, 128))
34 >>> _info(Image.new("P", (128, 128))) # palette
35 (None, 'P', (128, 128))
36 >>> _info(Image.new("RGB", (128, 128))) # truecolor
37 (None, 'RGB', (128, 128))
38 >>> _info(Image.new("I", (128, 128))) # 32-bit integer
39 (None, 'I', (128, 128))
40 >>> _info(Image.new("F", (128, 128))) # 32-bit floating point
41 (None, 'F', (128, 128))
42
43 Or open existing files:
44
45 >>> im = Image.open(os.path.join(ROOT, "Images/lena.gif"))
46 >>> _info(im)
47 ('GIF', 'P', (128, 128))
48 >>> _info(Image.open(os.path.join(ROOT, "Images/lena.ppm")))
49 ('PPM', 'RGB', (128, 128))
50 >>> try:
51 ... _info(Image.open(os.path.join(ROOT, "Images/lena.jpg")))
52 ... except IOError as v:
53 ... print(v)
54 ('JPEG', 'RGB', (128, 128))
55
56 PIL doesn't actually load the image data until it's needed,
57 or you call the "load" method:
58
59 >>> im = Image.open(os.path.join(ROOT, "Images/lena.ppm"))
60 >>> print(im.im) # internal image attribute
61 None
62 >>> a = im.load()
63 >>> type(im.im) # doctest: +ELLIPSIS
64 <... '...ImagingCore'>
65
66 You can apply many different operations on images. Most
67 operations return a new image:
68
69 >>> im = Image.open(os.path.join(ROOT, "Images/lena.ppm"))
70 >>> _info(im.convert("L"))
71 (None, 'L', (128, 128))
72 >>> _info(im.copy())
73 (None, 'RGB', (128, 128))
74 >>> _info(im.crop((32, 32, 96, 96)))
75 (None, 'RGB', (64, 64))
76 >>> _info(im.filter(ImageFilter.BLUR))
77 (None, 'RGB', (128, 128))
78 >>> im.getbands()
79 ('R', 'G', 'B')
80 >>> im.getbbox()
81 (0, 0, 128, 128)
82 >>> len(im.getdata())
83 16384
84 >>> im.getextrema()
85 ((61, 255), (26, 234), (44, 223))
86 >>> im.getpixel((0, 0))
87 (223, 162, 133)
88 >>> len(im.getprojection())
89 2
90 >>> len(im.histogram())
91 768
92 >>> _info(im.point(list(range(256))*3))
93 (None, 'RGB', (128, 128))
94 >>> _info(im.resize((64, 64)))
95 (None, 'RGB', (64, 64))
96 >>> _info(im.rotate(45))
97 (None, 'RGB', (128, 128))
98 >>> [_info(ch) for ch in im.split()]
99 [(None, 'L', (128, 128)), (None, 'L', (128, 128)), (None, 'L', (128, 128))]
100 >>> len(im.convert("1").tobitmap())
101 10456
102 >>> len(im.tobytes())
103 49152
104 >>> _info(im.transform((512, 512), Image.AFFINE, (1,0,0,0,1,0)))
105 (None, 'RGB', (512, 512))
106 >>> _info(im.transform((512, 512), Image.EXTENT, (32,32,96,96)))
107 (None, 'RGB', (512, 512))
108
109 The ImageDraw module lets you draw stuff in raster images:
110
111 >>> im = Image.new("L", (128, 128), 64)
112 >>> d = ImageDraw.ImageDraw(im)
113 >>> d.line((0, 0, 128, 128), fill=128)
114 >>> d.line((0, 128, 128, 0), fill=128)
115 >>> im.getextrema()
116 (64, 128)
117
118 In 1.1.4, you can specify colors in a number of ways:
119
120 >>> xy = 0, 0, 128, 128
121 >>> im = Image.new("RGB", (128, 128), 0)
122 >>> d = ImageDraw.ImageDraw(im)
123 >>> d.rectangle(xy, "#f00")
124 >>> im.getpixel((0, 0))
125 (255, 0, 0)
126 >>> d.rectangle(xy, "#ff0000")
127 >>> im.getpixel((0, 0))
128 (255, 0, 0)
129 >>> d.rectangle(xy, "rgb(255,0,0)")
130 >>> im.getpixel((0, 0))
131 (255, 0, 0)
132 >>> d.rectangle(xy, "rgb(100%,0%,0%)")
133 >>> im.getpixel((0, 0))
134 (255, 0, 0)
135 >>> d.rectangle(xy, "hsl(0, 100%, 50%)")
136 >>> im.getpixel((0, 0))
137 (255, 0, 0)
138 >>> d.rectangle(xy, "red")
139 >>> im.getpixel((0, 0))
140 (255, 0, 0)
141
142 In 1.1.6, you can use the ImageMath module to do image
143 calculations.
144
145 >>> im = ImageMath.eval("float(im + 20)", im=im.convert("L"))
146 >>> im.mode, im.size
147 ('F', (128, 128))
148
149 PIL can do many other things, but I'll leave that for another
150 day. If you're curious, check the handbook, available from:
151
152 http://www.pythonware.com
153
154 Cheers /F
155 """
156
157
158 def check_module(feature, module):
159 try:
160 __import__(module)
161 except ImportError:
162 print("***", feature, "support not installed")
163 else:
164 print("---", feature, "support ok")
165
166 def check_codec(feature, codec):
167 if codec + "_encoder" not in dir(Image.core):
168 print("***", feature, "support not installed")
169 else:
170 print("---", feature, "support ok")
171
172
173 if __name__ == "__main__":
174 # check build sanity
175
176 exit_status = 0
177
178 print("-"*68)
179 #print("PIL", Image.VERSION, "TEST SUMMARY ")
180 print("PIL (Pillow) TEST SUMMARY ")
181 print("-"*68)
182 print("Python modules loaded from", os.path.dirname(Image.__file__))
183 print("Binary modules loaded from", os.path.dirname(Image.core.__file__))
184 print("-"*68)
185 check_module("PIL CORE", "_imaging")
186 check_module("TKINTER", "_imagingtk")
187 check_codec("JPEG", "jpeg")
188 check_codec("ZLIB (PNG/ZIP)", "zip")
189 check_codec("G4 TIFF", "group4")
190 check_module("FREETYPE2", "_imagingft")
191 check_module("LITTLECMS", "_imagingcms")
192 check_module("WEBP", "_webp")
193 print("-"*68)
194
195 # use doctest to make sure the test program behaves as documented!
196 import doctest, selftest
197 print("Running selftest:")
198 status = doctest.testmod(selftest)
199 if status[0]:
200 print("*** %s tests of %d failed." % status)
201 exit_status = 1
202 else:
203 print("--- %s tests passed." % status[1])
204
205 sys.exit(exit_status)
206
207
```
Path: `PIL/WebPImagePlugin.py`
Content:
```
1 from PIL import Image
2 from PIL import ImageFile
3 from io import BytesIO
4 import _webp
5
6 def _accept(prefix):
7 return prefix[:4] == b"RIFF" and prefix[8:16] == b"WEBPVP8 "
8
9 class WebPImageFile(ImageFile.ImageFile):
10
11 format = "WEBP"
12 format_description = "WebP image"
13
14 def _open(self):
15 self.mode = "RGB"
16 data, width, height = _webp.WebPDecodeRGB(self.fp.read())
17 self.size = width, height
18 self.fp = BytesIO(data)
19 self.tile = [("raw", (0, 0) + self.size, 0, 'RGB')]
20
21 def _save(im, fp, filename):
22 if im.mode != "RGB":
23 raise IOError("cannot write mode %s as WEBP" % im.mode)
24 quality = im.encoderinfo.get("quality", 80)
25
26 data = _webp.WebPEncodeRGB(im.tobytes(), im.size[0], im.size[1], im.size[0] * 3, float(quality))
27 fp.write(data)
28
29 Image.register_open("WEBP", WebPImageFile, _accept)
30 Image.register_save("WEBP", _save)
31
32 Image.register_extension("WEBP", ".webp")
33 Image.register_mime("WEBP", "image/webp")
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PIL/WebPImagePlugin.py b/PIL/WebPImagePlugin.py
--- a/PIL/WebPImagePlugin.py
+++ b/PIL/WebPImagePlugin.py
@@ -3,29 +3,55 @@
from io import BytesIO
import _webp
+
+_VALID_WEBP_MODES = {
+ "RGB": True,
+ "RGBA": True,
+ }
+
+_VP8_MODES_BY_IDENTIFIER = {
+ b"VP8 ": "RGB",
+ b"VP8X": "RGBA",
+ }
+
+
def _accept(prefix):
- return prefix[:4] == b"RIFF" and prefix[8:16] == b"WEBPVP8 "
+ is_riff_file_format = prefix[:4] == b"RIFF"
+ is_webp_file = prefix[8:12] == b"WEBP"
+ is_valid_vp8_mode = prefix[12:16] in _VP8_MODES_BY_IDENTIFIER
+
+ return is_riff_file_format and is_webp_file and is_valid_vp8_mode
+
class WebPImageFile(ImageFile.ImageFile):
format = "WEBP"
format_description = "WebP image"
- def _open(self):
- self.mode = "RGB"
- data, width, height = _webp.WebPDecodeRGB(self.fp.read())
+ def _open(self):
+ data, width, height, self.mode = _webp.WebPDecode(self.fp.read())
self.size = width, height
self.fp = BytesIO(data)
- self.tile = [("raw", (0, 0) + self.size, 0, 'RGB')]
+ self.tile = [("raw", (0, 0) + self.size, 0, self.mode)]
+
def _save(im, fp, filename):
- if im.mode != "RGB":
- raise IOError("cannot write mode %s as WEBP" % im.mode)
+ image_mode = im.mode
+ if im.mode not in _VALID_WEBP_MODES:
+ raise IOError("cannot write mode %s as WEBP" % image_mode)
+
quality = im.encoderinfo.get("quality", 80)
- data = _webp.WebPEncodeRGB(im.tobytes(), im.size[0], im.size[1], im.size[0] * 3, float(quality))
+ data = _webp.WebPEncode(
+ im.tobytes(),
+ im.size[0],
+ im.size[1],
+ float(quality),
+ im.mode
+ )
fp.write(data)
+
Image.register_open("WEBP", WebPImageFile, _accept)
Image.register_save("WEBP", _save)
diff --git a/selftest.py b/selftest.py
--- a/selftest.py
+++ b/selftest.py
@@ -190,6 +190,14 @@
check_module("FREETYPE2", "_imagingft")
check_module("LITTLECMS", "_imagingcms")
check_module("WEBP", "_webp")
+ try:
+ import _webp
+ if _webp.WebPDecoderBuggyAlpha():
+ print("***", "Transparent WEBP", "support not installed")
+ else:
+ print("---", "Transparent WEBP", "support ok")
+ except Exception:
+ pass
print("-"*68)
# use doctest to make sure the test program behaves as documented!
| {"golden_diff": "diff --git a/PIL/WebPImagePlugin.py b/PIL/WebPImagePlugin.py\n--- a/PIL/WebPImagePlugin.py\n+++ b/PIL/WebPImagePlugin.py\n@@ -3,29 +3,55 @@\n from io import BytesIO\n import _webp\n \n+\n+_VALID_WEBP_MODES = {\n+ \"RGB\": True,\n+ \"RGBA\": True,\n+ }\n+\n+_VP8_MODES_BY_IDENTIFIER = {\n+ b\"VP8 \": \"RGB\",\n+ b\"VP8X\": \"RGBA\",\n+ } \n+\n+\n def _accept(prefix):\n- return prefix[:4] == b\"RIFF\" and prefix[8:16] == b\"WEBPVP8 \"\n+ is_riff_file_format = prefix[:4] == b\"RIFF\"\n+ is_webp_file = prefix[8:12] == b\"WEBP\"\n+ is_valid_vp8_mode = prefix[12:16] in _VP8_MODES_BY_IDENTIFIER\n+ \n+ return is_riff_file_format and is_webp_file and is_valid_vp8_mode\n+\n \n class WebPImageFile(ImageFile.ImageFile):\n \n format = \"WEBP\"\n format_description = \"WebP image\"\n \n- def _open(self):\n- self.mode = \"RGB\"\n- data, width, height = _webp.WebPDecodeRGB(self.fp.read())\n+ def _open(self): \n+ data, width, height, self.mode = _webp.WebPDecode(self.fp.read())\n self.size = width, height\n self.fp = BytesIO(data)\n- self.tile = [(\"raw\", (0, 0) + self.size, 0, 'RGB')]\n+ self.tile = [(\"raw\", (0, 0) + self.size, 0, self.mode)]\n+\n \n def _save(im, fp, filename):\n- if im.mode != \"RGB\":\n- raise IOError(\"cannot write mode %s as WEBP\" % im.mode)\n+ image_mode = im.mode\n+ if im.mode not in _VALID_WEBP_MODES:\n+ raise IOError(\"cannot write mode %s as WEBP\" % image_mode)\n+ \n quality = im.encoderinfo.get(\"quality\", 80)\n \n- data = _webp.WebPEncodeRGB(im.tobytes(), im.size[0], im.size[1], im.size[0] * 3, float(quality))\n+ data = _webp.WebPEncode(\n+ im.tobytes(),\n+ im.size[0],\n+ im.size[1],\n+ float(quality),\n+\t\tim.mode\n+ )\n fp.write(data)\n \n+\n Image.register_open(\"WEBP\", WebPImageFile, _accept)\n Image.register_save(\"WEBP\", _save)\n \ndiff --git a/selftest.py b/selftest.py\n--- a/selftest.py\n+++ b/selftest.py\n@@ -190,6 +190,14 @@\n check_module(\"FREETYPE2\", \"_imagingft\")\n check_module(\"LITTLECMS\", \"_imagingcms\")\n check_module(\"WEBP\", \"_webp\")\n+ try:\n+ import _webp\n+ if _webp.WebPDecoderBuggyAlpha():\n+ print(\"***\", \"Transparent WEBP\", \"support not installed\")\n+ else:\n+ print(\"---\", \"Transparent WEBP\", \"support ok\")\n+ except Exception:\n+ pass\n print(\"-\"*68)\n \n # use doctest to make sure the test program behaves as documented!\n", "issue": "Add support for RGBA webp image encoding and decoding\nWould it be possible to wrap the `WebPEncodeRGBA` and `WebPDecodeRGBA` functionality of the webp library inside Pillow?\n\n", "before_files": [{"content": "# minimal sanity check\nfrom __future__ import print_function\nROOT = \".\"\n\nimport os, sys\nsys.path.insert(0, ROOT)\n\nfrom PIL import Image\nfrom PIL import ImageDraw\nfrom PIL import ImageFilter\nfrom PIL import ImageMath\n\ntry:\n Image.core.ping\nexcept ImportError as v:\n print(\"***\", v)\n sys.exit()\nexcept AttributeError:\n pass\n\ndef _info(im):\n im.load()\n return im.format, im.mode, im.size\n\ndef testimage():\n \"\"\"\n PIL lets you create in-memory images with various pixel types:\n\n >>> im = Image.new(\"1\", (128, 128)) # monochrome\n >>> _info(im)\n (None, '1', (128, 128))\n >>> _info(Image.new(\"L\", (128, 128))) # grayscale (luminance)\n (None, 'L', (128, 128))\n >>> _info(Image.new(\"P\", (128, 128))) # palette\n (None, 'P', (128, 128))\n >>> _info(Image.new(\"RGB\", (128, 128))) # truecolor\n (None, 'RGB', (128, 128))\n >>> _info(Image.new(\"I\", (128, 128))) # 32-bit integer\n (None, 'I', (128, 128))\n >>> _info(Image.new(\"F\", (128, 
128))) # 32-bit floating point\n (None, 'F', (128, 128))\n\n Or open existing files:\n\n >>> im = Image.open(os.path.join(ROOT, \"Images/lena.gif\"))\n >>> _info(im)\n ('GIF', 'P', (128, 128))\n >>> _info(Image.open(os.path.join(ROOT, \"Images/lena.ppm\")))\n ('PPM', 'RGB', (128, 128))\n >>> try:\n ... _info(Image.open(os.path.join(ROOT, \"Images/lena.jpg\")))\n ... except IOError as v:\n ... print(v)\n ('JPEG', 'RGB', (128, 128))\n\n PIL doesn't actually load the image data until it's needed,\n or you call the \"load\" method:\n\n >>> im = Image.open(os.path.join(ROOT, \"Images/lena.ppm\"))\n >>> print(im.im) # internal image attribute\n None\n >>> a = im.load()\n >>> type(im.im) # doctest: +ELLIPSIS\n <... '...ImagingCore'>\n\n You can apply many different operations on images. Most\n operations return a new image:\n\n >>> im = Image.open(os.path.join(ROOT, \"Images/lena.ppm\"))\n >>> _info(im.convert(\"L\"))\n (None, 'L', (128, 128))\n >>> _info(im.copy())\n (None, 'RGB', (128, 128))\n >>> _info(im.crop((32, 32, 96, 96)))\n (None, 'RGB', (64, 64))\n >>> _info(im.filter(ImageFilter.BLUR))\n (None, 'RGB', (128, 128))\n >>> im.getbands()\n ('R', 'G', 'B')\n >>> im.getbbox()\n (0, 0, 128, 128)\n >>> len(im.getdata())\n 16384\n >>> im.getextrema()\n ((61, 255), (26, 234), (44, 223))\n >>> im.getpixel((0, 0))\n (223, 162, 133)\n >>> len(im.getprojection())\n 2\n >>> len(im.histogram())\n 768\n >>> _info(im.point(list(range(256))*3))\n (None, 'RGB', (128, 128))\n >>> _info(im.resize((64, 64)))\n (None, 'RGB', (64, 64))\n >>> _info(im.rotate(45))\n (None, 'RGB', (128, 128))\n >>> [_info(ch) for ch in im.split()]\n [(None, 'L', (128, 128)), (None, 'L', (128, 128)), (None, 'L', (128, 128))]\n >>> len(im.convert(\"1\").tobitmap())\n 10456\n >>> len(im.tobytes())\n 49152\n >>> _info(im.transform((512, 512), Image.AFFINE, (1,0,0,0,1,0)))\n (None, 'RGB', (512, 512))\n >>> _info(im.transform((512, 512), Image.EXTENT, (32,32,96,96)))\n (None, 'RGB', (512, 512))\n\n The ImageDraw module lets you draw stuff in raster images:\n\n >>> im = Image.new(\"L\", (128, 128), 64)\n >>> d = ImageDraw.ImageDraw(im)\n >>> d.line((0, 0, 128, 128), fill=128)\n >>> d.line((0, 128, 128, 0), fill=128)\n >>> im.getextrema()\n (64, 128)\n\n In 1.1.4, you can specify colors in a number of ways:\n\n >>> xy = 0, 0, 128, 128\n >>> im = Image.new(\"RGB\", (128, 128), 0)\n >>> d = ImageDraw.ImageDraw(im)\n >>> d.rectangle(xy, \"#f00\")\n >>> im.getpixel((0, 0))\n (255, 0, 0)\n >>> d.rectangle(xy, \"#ff0000\")\n >>> im.getpixel((0, 0))\n (255, 0, 0)\n >>> d.rectangle(xy, \"rgb(255,0,0)\")\n >>> im.getpixel((0, 0))\n (255, 0, 0)\n >>> d.rectangle(xy, \"rgb(100%,0%,0%)\")\n >>> im.getpixel((0, 0))\n (255, 0, 0)\n >>> d.rectangle(xy, \"hsl(0, 100%, 50%)\")\n >>> im.getpixel((0, 0))\n (255, 0, 0)\n >>> d.rectangle(xy, \"red\")\n >>> im.getpixel((0, 0))\n (255, 0, 0)\n\n In 1.1.6, you can use the ImageMath module to do image\n calculations.\n\n >>> im = ImageMath.eval(\"float(im + 20)\", im=im.convert(\"L\"))\n >>> im.mode, im.size\n ('F', (128, 128))\n\n PIL can do many other things, but I'll leave that for another\n day. 
If you're curious, check the handbook, available from:\n\n http://www.pythonware.com\n\n Cheers /F\n \"\"\"\n\n\ndef check_module(feature, module):\n try:\n __import__(module)\n except ImportError:\n print(\"***\", feature, \"support not installed\")\n else:\n print(\"---\", feature, \"support ok\")\n\ndef check_codec(feature, codec):\n if codec + \"_encoder\" not in dir(Image.core):\n print(\"***\", feature, \"support not installed\")\n else:\n print(\"---\", feature, \"support ok\")\n\n\nif __name__ == \"__main__\":\n # check build sanity\n\n exit_status = 0\n\n print(\"-\"*68)\n #print(\"PIL\", Image.VERSION, \"TEST SUMMARY \")\n print(\"PIL (Pillow) TEST SUMMARY \")\n print(\"-\"*68)\n print(\"Python modules loaded from\", os.path.dirname(Image.__file__))\n print(\"Binary modules loaded from\", os.path.dirname(Image.core.__file__))\n print(\"-\"*68)\n check_module(\"PIL CORE\", \"_imaging\")\n check_module(\"TKINTER\", \"_imagingtk\")\n check_codec(\"JPEG\", \"jpeg\")\n check_codec(\"ZLIB (PNG/ZIP)\", \"zip\")\n check_codec(\"G4 TIFF\", \"group4\")\n check_module(\"FREETYPE2\", \"_imagingft\")\n check_module(\"LITTLECMS\", \"_imagingcms\")\n check_module(\"WEBP\", \"_webp\")\n print(\"-\"*68)\n\n # use doctest to make sure the test program behaves as documented!\n import doctest, selftest\n print(\"Running selftest:\")\n status = doctest.testmod(selftest)\n if status[0]:\n print(\"*** %s tests of %d failed.\" % status)\n exit_status = 1\n else:\n print(\"--- %s tests passed.\" % status[1])\n\n sys.exit(exit_status)\n\n", "path": "selftest.py"}, {"content": "from PIL import Image\nfrom PIL import ImageFile\nfrom io import BytesIO\nimport _webp\n\ndef _accept(prefix):\n return prefix[:4] == b\"RIFF\" and prefix[8:16] == b\"WEBPVP8 \"\n\nclass WebPImageFile(ImageFile.ImageFile):\n\n format = \"WEBP\"\n format_description = \"WebP image\"\n\n def _open(self):\n self.mode = \"RGB\"\n data, width, height = _webp.WebPDecodeRGB(self.fp.read())\n self.size = width, height\n self.fp = BytesIO(data)\n self.tile = [(\"raw\", (0, 0) + self.size, 0, 'RGB')]\n\ndef _save(im, fp, filename):\n if im.mode != \"RGB\":\n raise IOError(\"cannot write mode %s as WEBP\" % im.mode)\n quality = im.encoderinfo.get(\"quality\", 80)\n \n data = _webp.WebPEncodeRGB(im.tobytes(), im.size[0], im.size[1], im.size[0] * 3, float(quality))\n fp.write(data)\n\nImage.register_open(\"WEBP\", WebPImageFile, _accept)\nImage.register_save(\"WEBP\", _save)\n\nImage.register_extension(\"WEBP\", \".webp\")\nImage.register_mime(\"WEBP\", \"image/webp\")\n", "path": "PIL/WebPImagePlugin.py"}], "after_files": [{"content": "# minimal sanity check\nfrom __future__ import print_function\nROOT = \".\"\n\nimport os, sys\nsys.path.insert(0, ROOT)\n\nfrom PIL import Image\nfrom PIL import ImageDraw\nfrom PIL import ImageFilter\nfrom PIL import ImageMath\n\ntry:\n Image.core.ping\nexcept ImportError as v:\n print(\"***\", v)\n sys.exit()\nexcept AttributeError:\n pass\n\ndef _info(im):\n im.load()\n return im.format, im.mode, im.size\n\ndef testimage():\n \"\"\"\n PIL lets you create in-memory images with various pixel types:\n\n >>> im = Image.new(\"1\", (128, 128)) # monochrome\n >>> _info(im)\n (None, '1', (128, 128))\n >>> _info(Image.new(\"L\", (128, 128))) # grayscale (luminance)\n (None, 'L', (128, 128))\n >>> _info(Image.new(\"P\", (128, 128))) # palette\n (None, 'P', (128, 128))\n >>> _info(Image.new(\"RGB\", (128, 128))) # truecolor\n (None, 'RGB', (128, 128))\n >>> _info(Image.new(\"I\", (128, 128))) # 32-bit integer\n (None, 
'I', (128, 128))\n >>> _info(Image.new(\"F\", (128, 128))) # 32-bit floating point\n (None, 'F', (128, 128))\n\n Or open existing files:\n\n >>> im = Image.open(os.path.join(ROOT, \"Images/lena.gif\"))\n >>> _info(im)\n ('GIF', 'P', (128, 128))\n >>> _info(Image.open(os.path.join(ROOT, \"Images/lena.ppm\")))\n ('PPM', 'RGB', (128, 128))\n >>> try:\n ... _info(Image.open(os.path.join(ROOT, \"Images/lena.jpg\")))\n ... except IOError as v:\n ... print(v)\n ('JPEG', 'RGB', (128, 128))\n\n PIL doesn't actually load the image data until it's needed,\n or you call the \"load\" method:\n\n >>> im = Image.open(os.path.join(ROOT, \"Images/lena.ppm\"))\n >>> print(im.im) # internal image attribute\n None\n >>> a = im.load()\n >>> type(im.im) # doctest: +ELLIPSIS\n <... '...ImagingCore'>\n\n You can apply many different operations on images. Most\n operations return a new image:\n\n >>> im = Image.open(os.path.join(ROOT, \"Images/lena.ppm\"))\n >>> _info(im.convert(\"L\"))\n (None, 'L', (128, 128))\n >>> _info(im.copy())\n (None, 'RGB', (128, 128))\n >>> _info(im.crop((32, 32, 96, 96)))\n (None, 'RGB', (64, 64))\n >>> _info(im.filter(ImageFilter.BLUR))\n (None, 'RGB', (128, 128))\n >>> im.getbands()\n ('R', 'G', 'B')\n >>> im.getbbox()\n (0, 0, 128, 128)\n >>> len(im.getdata())\n 16384\n >>> im.getextrema()\n ((61, 255), (26, 234), (44, 223))\n >>> im.getpixel((0, 0))\n (223, 162, 133)\n >>> len(im.getprojection())\n 2\n >>> len(im.histogram())\n 768\n >>> _info(im.point(list(range(256))*3))\n (None, 'RGB', (128, 128))\n >>> _info(im.resize((64, 64)))\n (None, 'RGB', (64, 64))\n >>> _info(im.rotate(45))\n (None, 'RGB', (128, 128))\n >>> [_info(ch) for ch in im.split()]\n [(None, 'L', (128, 128)), (None, 'L', (128, 128)), (None, 'L', (128, 128))]\n >>> len(im.convert(\"1\").tobitmap())\n 10456\n >>> len(im.tobytes())\n 49152\n >>> _info(im.transform((512, 512), Image.AFFINE, (1,0,0,0,1,0)))\n (None, 'RGB', (512, 512))\n >>> _info(im.transform((512, 512), Image.EXTENT, (32,32,96,96)))\n (None, 'RGB', (512, 512))\n\n The ImageDraw module lets you draw stuff in raster images:\n\n >>> im = Image.new(\"L\", (128, 128), 64)\n >>> d = ImageDraw.ImageDraw(im)\n >>> d.line((0, 0, 128, 128), fill=128)\n >>> d.line((0, 128, 128, 0), fill=128)\n >>> im.getextrema()\n (64, 128)\n\n In 1.1.4, you can specify colors in a number of ways:\n\n >>> xy = 0, 0, 128, 128\n >>> im = Image.new(\"RGB\", (128, 128), 0)\n >>> d = ImageDraw.ImageDraw(im)\n >>> d.rectangle(xy, \"#f00\")\n >>> im.getpixel((0, 0))\n (255, 0, 0)\n >>> d.rectangle(xy, \"#ff0000\")\n >>> im.getpixel((0, 0))\n (255, 0, 0)\n >>> d.rectangle(xy, \"rgb(255,0,0)\")\n >>> im.getpixel((0, 0))\n (255, 0, 0)\n >>> d.rectangle(xy, \"rgb(100%,0%,0%)\")\n >>> im.getpixel((0, 0))\n (255, 0, 0)\n >>> d.rectangle(xy, \"hsl(0, 100%, 50%)\")\n >>> im.getpixel((0, 0))\n (255, 0, 0)\n >>> d.rectangle(xy, \"red\")\n >>> im.getpixel((0, 0))\n (255, 0, 0)\n\n In 1.1.6, you can use the ImageMath module to do image\n calculations.\n\n >>> im = ImageMath.eval(\"float(im + 20)\", im=im.convert(\"L\"))\n >>> im.mode, im.size\n ('F', (128, 128))\n\n PIL can do many other things, but I'll leave that for another\n day. 
If you're curious, check the handbook, available from:\n\n http://www.pythonware.com\n\n Cheers /F\n \"\"\"\n\n\ndef check_module(feature, module):\n try:\n __import__(module)\n except ImportError:\n print(\"***\", feature, \"support not installed\")\n else:\n print(\"---\", feature, \"support ok\")\n\ndef check_codec(feature, codec):\n if codec + \"_encoder\" not in dir(Image.core):\n print(\"***\", feature, \"support not installed\")\n else:\n print(\"---\", feature, \"support ok\")\n\n\nif __name__ == \"__main__\":\n # check build sanity\n\n exit_status = 0\n\n print(\"-\"*68)\n #print(\"PIL\", Image.VERSION, \"TEST SUMMARY \")\n print(\"PIL (Pillow) TEST SUMMARY \")\n print(\"-\"*68)\n print(\"Python modules loaded from\", os.path.dirname(Image.__file__))\n print(\"Binary modules loaded from\", os.path.dirname(Image.core.__file__))\n print(\"-\"*68)\n check_module(\"PIL CORE\", \"_imaging\")\n check_module(\"TKINTER\", \"_imagingtk\")\n check_codec(\"JPEG\", \"jpeg\")\n check_codec(\"ZLIB (PNG/ZIP)\", \"zip\")\n check_codec(\"G4 TIFF\", \"group4\")\n check_module(\"FREETYPE2\", \"_imagingft\")\n check_module(\"LITTLECMS\", \"_imagingcms\")\n check_module(\"WEBP\", \"_webp\")\n try:\n import _webp\n if _webp.WebPDecoderBuggyAlpha():\n print(\"***\", \"Transparent WEBP\", \"support not installed\")\n else:\n print(\"---\", \"Transparent WEBP\", \"support ok\")\n except Exception:\n pass\n print(\"-\"*68)\n\n # use doctest to make sure the test program behaves as documented!\n import doctest, selftest\n print(\"Running selftest:\")\n status = doctest.testmod(selftest)\n if status[0]:\n print(\"*** %s tests of %d failed.\" % status)\n exit_status = 1\n else:\n print(\"--- %s tests passed.\" % status[1])\n\n sys.exit(exit_status)\n\n", "path": "selftest.py"}, {"content": "from PIL import Image\nfrom PIL import ImageFile\nfrom io import BytesIO\nimport _webp\n\n\n_VALID_WEBP_MODES = {\n \"RGB\": True,\n \"RGBA\": True,\n }\n\n_VP8_MODES_BY_IDENTIFIER = {\n b\"VP8 \": \"RGB\",\n b\"VP8X\": \"RGBA\",\n } \n\n\ndef _accept(prefix):\n is_riff_file_format = prefix[:4] == b\"RIFF\"\n is_webp_file = prefix[8:12] == b\"WEBP\"\n is_valid_vp8_mode = prefix[12:16] in _VP8_MODES_BY_IDENTIFIER\n \n return is_riff_file_format and is_webp_file and is_valid_vp8_mode\n\n\nclass WebPImageFile(ImageFile.ImageFile):\n\n format = \"WEBP\"\n format_description = \"WebP image\"\n\n def _open(self): \n data, width, height, self.mode = _webp.WebPDecode(self.fp.read())\n self.size = width, height\n self.fp = BytesIO(data)\n self.tile = [(\"raw\", (0, 0) + self.size, 0, self.mode)]\n\n\ndef _save(im, fp, filename):\n image_mode = im.mode\n if im.mode not in _VALID_WEBP_MODES:\n raise IOError(\"cannot write mode %s as WEBP\" % image_mode)\n \n quality = im.encoderinfo.get(\"quality\", 80)\n \n data = _webp.WebPEncode(\n im.tobytes(),\n im.size[0],\n im.size[1],\n float(quality),\n\t\tim.mode\n )\n fp.write(data)\n\n\nImage.register_open(\"WEBP\", WebPImageFile, _accept)\nImage.register_save(\"WEBP\", _save)\n\nImage.register_extension(\"WEBP\", \".webp\")\nImage.register_mime(\"WEBP\", \"image/webp\")\n", "path": "PIL/WebPImagePlugin.py"}]} | 3,302 | 783 |
gh_patches_debug_29548 | rasdani/github-patches | git_diff | translate__pootle-3719 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Running migrate twice gives an error about changed models
If you run `migrate` a second time directly after an initial migration, you will get the following error.
```
Running migrations:
No migrations to apply.
Your models have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
```
`makemigrations` produces this file:
``` py
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import pootle.core.markup.fields
class Migration(migrations.Migration):
dependencies = [
('virtualfolder', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='virtualfolder',
name='description',
field=pootle.core.markup.fields.MarkupField(help_text='Use this to provide more information or instructions. Allowed markup: HTML', verbose_name='Description', blank=True),
preserve_default=True,
),
]
```
@unho Why are virtualfolders doing this?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/virtualfolder/migrations/0001_initial.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3
4 from django.db import models, migrations
5 import pootle.core.markup.fields
6
7
8 class Migration(migrations.Migration):
9
10 dependencies = [
11 ('pootle_store', '0001_initial'),
12 ]
13
14 operations = [
15 migrations.CreateModel(
16 name='VirtualFolder',
17 fields=[
18 ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
19 ('name', models.CharField(max_length=70, verbose_name='Name')),
20 ('location', models.CharField(help_text='Root path where this virtual folder is applied.', max_length=255, verbose_name='Location')),
21 ('filter_rules', models.TextField(help_text='Filtering rules that tell which stores this virtual folder comprises.', verbose_name='Filter')),
22 ('priority', models.FloatField(default=1, help_text='Number specifying importance. Greater priority means it is more important.', verbose_name='Priority')),
23 ('is_browsable', models.BooleanField(default=True, help_text='Whether this virtual folder is active or not.', verbose_name='Is browsable?')),
24 ('description', pootle.core.markup.fields.MarkupField(help_text='Use this to provide more information or instructions. Allowed markup: HTML', verbose_name='Description', blank=True)),
25 ('units', models.ManyToManyField(related_name='vfolders', to='pootle_store.Unit', db_index=True)),
26 ],
27 options={
28 'ordering': ['-priority', 'name'],
29 },
30 bases=(models.Model,),
31 ),
32 migrations.AlterUniqueTogether(
33 name='virtualfolder',
34 unique_together=set([('name', 'location')]),
35 ),
36 ]
37
```
Path: `pootle/core/markup/fields.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 import logging
11
12 from django.conf import settings
13 from django.core.cache import cache
14 from django.db import models
15 from django.utils.safestring import mark_safe
16
17 from .filters import apply_markup_filter
18 from .widgets import MarkupTextarea
19
20
21 __all__ = ('Markup', 'MarkupField',)
22
23
24 logger = logging.getLogger('pootle.markup')
25
26
27 _rendered_cache_key = lambda obj, pk, field: '_%s_%s_%s_rendered' % \
28 (obj, pk, field)
29
30
31 class Markup(object):
32
33 def __init__(self, instance, field_name, rendered_cache_key):
34 self.instance = instance
35 self.field_name = field_name
36 self.cache_key = rendered_cache_key
37
38 @property
39 def raw(self):
40 return self.instance.__dict__[self.field_name]
41
42 @raw.setter
43 def raw(self, value):
44 setattr(self.instance, self.field_name, value)
45
46 @property
47 def rendered(self):
48 rendered = cache.get(self.cache_key)
49
50 if not rendered:
51 logger.debug(u'Caching rendered output of %r', self.cache_key)
52 rendered = apply_markup_filter(self.raw)
53 cache.set(self.cache_key, rendered,
54 settings.OBJECT_CACHE_TIMEOUT)
55
56 return rendered
57
58 def __unicode__(self):
59 return mark_safe(self.rendered)
60
61 def __nonzero__(self):
62 return self.raw.strip() != '' and self.raw is not None
63
64
65 class MarkupDescriptor(object):
66
67 def __init__(self, field):
68 self.field = field
69
70 def __get__(self, obj, owner):
71 if obj is None:
72 raise AttributeError('Can only be accessed via an instance.')
73
74 markup = obj.__dict__[self.field.name]
75 if markup is None:
76 return None
77
78 cache_key = _rendered_cache_key(obj.__class__.__name__,
79 obj.pk,
80 self.field.name)
81 return Markup(obj, self.field.name, cache_key)
82
83 def __set__(self, obj, value):
84 if isinstance(value, Markup):
85 obj.__dict__[self.field.name] = value.raw
86 else:
87 obj.__dict__[self.field.name] = value
88
89
90 class MarkupField(models.TextField):
91
92 description = 'Text field supporting different markup formats.'
93
94 def contribute_to_class(self, cls, name):
95 super(MarkupField, self).contribute_to_class(cls, name)
96 setattr(cls, self.name, MarkupDescriptor(self))
97
98 def pre_save(self, model_instance, add):
99 value = super(MarkupField, self).pre_save(model_instance, add)
100
101 if not add:
102 # Invalidate cache to force rendering upon next retrieval
103 cache_key = _rendered_cache_key(model_instance.__class__.__name__,
104 model_instance.pk,
105 self.name)
106 logger.debug('Invalidating cache for %r', cache_key)
107 cache.delete(cache_key)
108
109 return value.raw
110
111 def get_prep_value(self, value):
112 if isinstance(value, Markup):
113 return value.raw
114
115 return value
116
117 def value_to_string(self, obj):
118 value = self._get_val_from_obj(obj)
119 return self.get_prep_value(value)
120
121 def formfield(self, **kwargs):
122 defaults = {'widget': MarkupTextarea}
123 defaults.update(kwargs)
124 return super(MarkupField, self).formfield(**defaults)
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/virtualfolder/migrations/0001_initial.py b/pootle/apps/virtualfolder/migrations/0001_initial.py
--- a/pootle/apps/virtualfolder/migrations/0001_initial.py
+++ b/pootle/apps/virtualfolder/migrations/0001_initial.py
@@ -21,7 +21,7 @@
('filter_rules', models.TextField(help_text='Filtering rules that tell which stores this virtual folder comprises.', verbose_name='Filter')),
('priority', models.FloatField(default=1, help_text='Number specifying importance. Greater priority means it is more important.', verbose_name='Priority')),
('is_browsable', models.BooleanField(default=True, help_text='Whether this virtual folder is active or not.', verbose_name='Is browsable?')),
- ('description', pootle.core.markup.fields.MarkupField(help_text='Use this to provide more information or instructions. Allowed markup: HTML', verbose_name='Description', blank=True)),
+ ('description', pootle.core.markup.fields.MarkupField(verbose_name='Description', blank=True)),
('units', models.ManyToManyField(related_name='vfolders', to='pootle_store.Unit', db_index=True)),
],
options={
diff --git a/pootle/core/markup/fields.py b/pootle/core/markup/fields.py
--- a/pootle/core/markup/fields.py
+++ b/pootle/core/markup/fields.py
@@ -122,3 +122,8 @@
defaults = {'widget': MarkupTextarea}
defaults.update(kwargs)
return super(MarkupField, self).formfield(**defaults)
+
+ def deconstruct(self):
+ name, path, args, kwargs = super(MarkupField, self).deconstruct()
+ kwargs.pop('help_text', None)
+ return name, path, args, kwargs
| {"golden_diff": "diff --git a/pootle/apps/virtualfolder/migrations/0001_initial.py b/pootle/apps/virtualfolder/migrations/0001_initial.py\n--- a/pootle/apps/virtualfolder/migrations/0001_initial.py\n+++ b/pootle/apps/virtualfolder/migrations/0001_initial.py\n@@ -21,7 +21,7 @@\n ('filter_rules', models.TextField(help_text='Filtering rules that tell which stores this virtual folder comprises.', verbose_name='Filter')),\n ('priority', models.FloatField(default=1, help_text='Number specifying importance. Greater priority means it is more important.', verbose_name='Priority')),\n ('is_browsable', models.BooleanField(default=True, help_text='Whether this virtual folder is active or not.', verbose_name='Is browsable?')),\n- ('description', pootle.core.markup.fields.MarkupField(help_text='Use this to provide more information or instructions. Allowed markup: HTML', verbose_name='Description', blank=True)),\n+ ('description', pootle.core.markup.fields.MarkupField(verbose_name='Description', blank=True)),\n ('units', models.ManyToManyField(related_name='vfolders', to='pootle_store.Unit', db_index=True)),\n ],\n options={\ndiff --git a/pootle/core/markup/fields.py b/pootle/core/markup/fields.py\n--- a/pootle/core/markup/fields.py\n+++ b/pootle/core/markup/fields.py\n@@ -122,3 +122,8 @@\n defaults = {'widget': MarkupTextarea}\n defaults.update(kwargs)\n return super(MarkupField, self).formfield(**defaults)\n+\n+ def deconstruct(self):\n+ name, path, args, kwargs = super(MarkupField, self).deconstruct()\n+ kwargs.pop('help_text', None)\n+ return name, path, args, kwargs\n", "issue": "Running migrate twice gives an error about changed models\nIf you run `migrate` a second time directly after an initial migration you will get the following error.\n\n```\nRunning migrations:\n No migrations to apply.\n Your models have changes that are not yet reflected in a migration, and so won't be applied.\n Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.\n```\n\n`makemigrations` produces this file:\n\n``` py\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\nimport pootle.core.markup.fields\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('virtualfolder', '0001_initial'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='virtualfolder',\n name='description',\n field=pootle.core.markup.fields.MarkupField(help_text='Use this to provide more information or instructions. 
Allowed markup: HTML', verbose_name='Description', blank=True),\n preserve_default=True,\n ),\n ]\n```\n\n@unho Why are virtualfolders doing this?\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\nimport pootle.core.markup.fields\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('pootle_store', '0001_initial'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='VirtualFolder',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('name', models.CharField(max_length=70, verbose_name='Name')),\n ('location', models.CharField(help_text='Root path where this virtual folder is applied.', max_length=255, verbose_name='Location')),\n ('filter_rules', models.TextField(help_text='Filtering rules that tell which stores this virtual folder comprises.', verbose_name='Filter')),\n ('priority', models.FloatField(default=1, help_text='Number specifying importance. Greater priority means it is more important.', verbose_name='Priority')),\n ('is_browsable', models.BooleanField(default=True, help_text='Whether this virtual folder is active or not.', verbose_name='Is browsable?')),\n ('description', pootle.core.markup.fields.MarkupField(help_text='Use this to provide more information or instructions. Allowed markup: HTML', verbose_name='Description', blank=True)),\n ('units', models.ManyToManyField(related_name='vfolders', to='pootle_store.Unit', db_index=True)),\n ],\n options={\n 'ordering': ['-priority', 'name'],\n },\n bases=(models.Model,),\n ),\n migrations.AlterUniqueTogether(\n name='virtualfolder',\n unique_together=set([('name', 'location')]),\n ),\n ]\n", "path": "pootle/apps/virtualfolder/migrations/0001_initial.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport logging\n\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.db import models\nfrom django.utils.safestring import mark_safe\n\nfrom .filters import apply_markup_filter\nfrom .widgets import MarkupTextarea\n\n\n__all__ = ('Markup', 'MarkupField',)\n\n\nlogger = logging.getLogger('pootle.markup')\n\n\n_rendered_cache_key = lambda obj, pk, field: '_%s_%s_%s_rendered' % \\\n (obj, pk, field)\n\n\nclass Markup(object):\n\n def __init__(self, instance, field_name, rendered_cache_key):\n self.instance = instance\n self.field_name = field_name\n self.cache_key = rendered_cache_key\n\n @property\n def raw(self):\n return self.instance.__dict__[self.field_name]\n\n @raw.setter\n def raw(self, value):\n setattr(self.instance, self.field_name, value)\n\n @property\n def rendered(self):\n rendered = cache.get(self.cache_key)\n\n if not rendered:\n logger.debug(u'Caching rendered output of %r', self.cache_key)\n rendered = apply_markup_filter(self.raw)\n cache.set(self.cache_key, rendered,\n settings.OBJECT_CACHE_TIMEOUT)\n\n return rendered\n\n def __unicode__(self):\n return mark_safe(self.rendered)\n\n def __nonzero__(self):\n return self.raw.strip() != '' and self.raw is not None\n\n\nclass MarkupDescriptor(object):\n\n def __init__(self, field):\n self.field = field\n\n def __get__(self, obj, owner):\n if obj is None:\n raise AttributeError('Can only be accessed via an instance.')\n\n markup = obj.__dict__[self.field.name]\n if markup is None:\n return None\n\n cache_key = _rendered_cache_key(obj.__class__.__name__,\n obj.pk,\n self.field.name)\n return Markup(obj, self.field.name, cache_key)\n\n def __set__(self, obj, value):\n if isinstance(value, Markup):\n obj.__dict__[self.field.name] = value.raw\n else:\n obj.__dict__[self.field.name] = value\n\n\nclass MarkupField(models.TextField):\n\n description = 'Text field supporting different markup formats.'\n\n def contribute_to_class(self, cls, name):\n super(MarkupField, self).contribute_to_class(cls, name)\n setattr(cls, self.name, MarkupDescriptor(self))\n\n def pre_save(self, model_instance, add):\n value = super(MarkupField, self).pre_save(model_instance, add)\n\n if not add:\n # Invalidate cache to force rendering upon next retrieval\n cache_key = _rendered_cache_key(model_instance.__class__.__name__,\n model_instance.pk,\n self.name)\n logger.debug('Invalidating cache for %r', cache_key)\n cache.delete(cache_key)\n\n return value.raw\n\n def get_prep_value(self, value):\n if isinstance(value, Markup):\n return value.raw\n\n return value\n\n def value_to_string(self, obj):\n value = self._get_val_from_obj(obj)\n return self.get_prep_value(value)\n\n def formfield(self, **kwargs):\n defaults = {'widget': MarkupTextarea}\n defaults.update(kwargs)\n return super(MarkupField, self).formfield(**defaults)\n", "path": "pootle/core/markup/fields.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\nimport pootle.core.markup.fields\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('pootle_store', '0001_initial'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='VirtualFolder',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('name', models.CharField(max_length=70, verbose_name='Name')),\n ('location', 
models.CharField(help_text='Root path where this virtual folder is applied.', max_length=255, verbose_name='Location')),\n ('filter_rules', models.TextField(help_text='Filtering rules that tell which stores this virtual folder comprises.', verbose_name='Filter')),\n ('priority', models.FloatField(default=1, help_text='Number specifying importance. Greater priority means it is more important.', verbose_name='Priority')),\n ('is_browsable', models.BooleanField(default=True, help_text='Whether this virtual folder is active or not.', verbose_name='Is browsable?')),\n ('description', pootle.core.markup.fields.MarkupField(verbose_name='Description', blank=True)),\n ('units', models.ManyToManyField(related_name='vfolders', to='pootle_store.Unit', db_index=True)),\n ],\n options={\n 'ordering': ['-priority', 'name'],\n },\n bases=(models.Model,),\n ),\n migrations.AlterUniqueTogether(\n name='virtualfolder',\n unique_together=set([('name', 'location')]),\n ),\n ]\n", "path": "pootle/apps/virtualfolder/migrations/0001_initial.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport logging\n\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.db import models\nfrom django.utils.safestring import mark_safe\n\nfrom .filters import apply_markup_filter\nfrom .widgets import MarkupTextarea\n\n\n__all__ = ('Markup', 'MarkupField',)\n\n\nlogger = logging.getLogger('pootle.markup')\n\n\n_rendered_cache_key = lambda obj, pk, field: '_%s_%s_%s_rendered' % \\\n (obj, pk, field)\n\n\nclass Markup(object):\n\n def __init__(self, instance, field_name, rendered_cache_key):\n self.instance = instance\n self.field_name = field_name\n self.cache_key = rendered_cache_key\n\n @property\n def raw(self):\n return self.instance.__dict__[self.field_name]\n\n @raw.setter\n def raw(self, value):\n setattr(self.instance, self.field_name, value)\n\n @property\n def rendered(self):\n rendered = cache.get(self.cache_key)\n\n if not rendered:\n logger.debug(u'Caching rendered output of %r', self.cache_key)\n rendered = apply_markup_filter(self.raw)\n cache.set(self.cache_key, rendered,\n settings.OBJECT_CACHE_TIMEOUT)\n\n return rendered\n\n def __unicode__(self):\n return mark_safe(self.rendered)\n\n def __nonzero__(self):\n return self.raw.strip() != '' and self.raw is not None\n\n\nclass MarkupDescriptor(object):\n\n def __init__(self, field):\n self.field = field\n\n def __get__(self, obj, owner):\n if obj is None:\n raise AttributeError('Can only be accessed via an instance.')\n\n markup = obj.__dict__[self.field.name]\n if markup is None:\n return None\n\n cache_key = _rendered_cache_key(obj.__class__.__name__,\n obj.pk,\n self.field.name)\n return Markup(obj, self.field.name, cache_key)\n\n def __set__(self, obj, value):\n if isinstance(value, Markup):\n obj.__dict__[self.field.name] = value.raw\n else:\n obj.__dict__[self.field.name] = value\n\n\nclass MarkupField(models.TextField):\n\n description = 'Text field supporting different markup formats.'\n\n def contribute_to_class(self, cls, name):\n super(MarkupField, self).contribute_to_class(cls, name)\n setattr(cls, self.name, MarkupDescriptor(self))\n\n def pre_save(self, model_instance, add):\n value = super(MarkupField, self).pre_save(model_instance, add)\n\n if not 
add:\n # Invalidate cache to force rendering upon next retrieval\n cache_key = _rendered_cache_key(model_instance.__class__.__name__,\n model_instance.pk,\n self.name)\n logger.debug('Invalidating cache for %r', cache_key)\n cache.delete(cache_key)\n\n return value.raw\n\n def get_prep_value(self, value):\n if isinstance(value, Markup):\n return value.raw\n\n return value\n\n def value_to_string(self, obj):\n value = self._get_val_from_obj(obj)\n return self.get_prep_value(value)\n\n def formfield(self, **kwargs):\n defaults = {'widget': MarkupTextarea}\n defaults.update(kwargs)\n return super(MarkupField, self).formfield(**defaults)\n\n def deconstruct(self):\n name, path, args, kwargs = super(MarkupField, self).deconstruct()\n kwargs.pop('help_text', None)\n return name, path, args, kwargs\n", "path": "pootle/core/markup/fields.py"}]} | 2,022 | 410 |
gh_patches_debug_4294 | rasdani/github-patches | git_diff | open-mmlab__mmpretrain-286 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature Request] CPU Testing
Since CPU training is already supported in PR #219, what about also adding support for CPU testing?
Besides, it seems there are still some problems with the CPU training feature, @wangruohui:
When we set `--device CPU`, the expected behavior is to use the CPU for training regardless of whether GPUs exist on the machine. However, mmcls will use the GPU for training if one exists, even if we set `--device CPU`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmcls/apis/train.py`
Content:
```
1 import random
2 import warnings
3
4 import numpy as np
5 import torch
6 from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
7 from mmcv.runner import DistSamplerSeedHook, build_optimizer, build_runner
8
9 from mmcls.core import DistOptimizerHook
10 from mmcls.datasets import build_dataloader, build_dataset
11 from mmcls.utils import get_root_logger
12
13 # TODO import eval hooks from mmcv and delete them from mmcls
14 try:
15 from mmcv.runner.hooks import EvalHook, DistEvalHook
16 except ImportError:
17 warnings.warn('DeprecationWarning: EvalHook and DistEvalHook from mmcls '
18 'will be deprecated.'
19 'Please install mmcv through master branch.')
20 from mmcls.core import EvalHook, DistEvalHook
21
22 # TODO import optimizer hook from mmcv and delete them from mmcls
23 try:
24 from mmcv.runner import Fp16OptimizerHook
25 except ImportError:
26 warnings.warn('DeprecationWarning: FP16OptimizerHook from mmcls will be '
27 'deprecated. Please install mmcv>=1.1.4.')
28 from mmcls.core import Fp16OptimizerHook
29
30
31 def set_random_seed(seed, deterministic=False):
32 """Set random seed.
33
34 Args:
35 seed (int): Seed to be used.
36 deterministic (bool): Whether to set the deterministic option for
37 CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`
38 to True and `torch.backends.cudnn.benchmark` to False.
39 Default: False.
40 """
41 random.seed(seed)
42 np.random.seed(seed)
43 torch.manual_seed(seed)
44 torch.cuda.manual_seed_all(seed)
45 if deterministic:
46 torch.backends.cudnn.deterministic = True
47 torch.backends.cudnn.benchmark = False
48
49
50 def train_model(model,
51 dataset,
52 cfg,
53 distributed=False,
54 validate=False,
55 timestamp=None,
56 device='cuda',
57 meta=None):
58 logger = get_root_logger(cfg.log_level)
59
60 # prepare data loaders
61 dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]
62
63 data_loaders = [
64 build_dataloader(
65 ds,
66 cfg.data.samples_per_gpu,
67 cfg.data.workers_per_gpu,
68 # cfg.gpus will be ignored if distributed
69 num_gpus=len(cfg.gpu_ids),
70 dist=distributed,
71 round_up=True,
72 seed=cfg.seed) for ds in dataset
73 ]
74
75 # put model on gpus
76 if distributed:
77 find_unused_parameters = cfg.get('find_unused_parameters', False)
78 # Sets the `find_unused_parameters` parameter in
79 # torch.nn.parallel.DistributedDataParallel
80 model = MMDistributedDataParallel(
81 model.cuda(),
82 device_ids=[torch.cuda.current_device()],
83 broadcast_buffers=False,
84 find_unused_parameters=find_unused_parameters)
85 else:
86 if device == 'cuda':
87 model = MMDataParallel(
88 model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)
89 elif device == 'cpu':
90 model = MMDataParallel(model.cpu())
91 else:
92 raise ValueError(F'unsupported device name {device}.')
93
94 # build runner
95 optimizer = build_optimizer(model, cfg.optimizer)
96
97 if cfg.get('runner') is None:
98 cfg.runner = {
99 'type': 'EpochBasedRunner',
100 'max_epochs': cfg.total_epochs
101 }
102 warnings.warn(
103 'config is now expected to have a `runner` section, '
104 'please set `runner` in your config.', UserWarning)
105
106 runner = build_runner(
107 cfg.runner,
108 default_args=dict(
109 model=model,
110 batch_processor=None,
111 optimizer=optimizer,
112 work_dir=cfg.work_dir,
113 logger=logger,
114 meta=meta))
115
116 # an ugly walkaround to make the .log and .log.json filenames the same
117 runner.timestamp = timestamp
118
119 # fp16 setting
120 fp16_cfg = cfg.get('fp16', None)
121 if fp16_cfg is not None:
122 optimizer_config = Fp16OptimizerHook(
123 **cfg.optimizer_config, **fp16_cfg, distributed=distributed)
124 elif distributed and 'type' not in cfg.optimizer_config:
125 optimizer_config = DistOptimizerHook(**cfg.optimizer_config)
126 else:
127 optimizer_config = cfg.optimizer_config
128
129 # register hooks
130 runner.register_training_hooks(cfg.lr_config, optimizer_config,
131 cfg.checkpoint_config, cfg.log_config,
132 cfg.get('momentum_config', None))
133 if distributed:
134 runner.register_hook(DistSamplerSeedHook())
135
136 # register eval hooks
137 if validate:
138 val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))
139 val_dataloader = build_dataloader(
140 val_dataset,
141 samples_per_gpu=cfg.data.samples_per_gpu,
142 workers_per_gpu=cfg.data.workers_per_gpu,
143 dist=distributed,
144 shuffle=False,
145 round_up=True)
146 eval_cfg = cfg.get('evaluation', {})
147 eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner'
148 eval_hook = DistEvalHook if distributed else EvalHook
149 runner.register_hook(eval_hook(val_dataloader, **eval_cfg))
150
151 if cfg.resume_from:
152 runner.resume(cfg.resume_from)
153 elif cfg.load_from:
154 runner.load_checkpoint(cfg.load_from)
155 runner.run(data_loaders, cfg.workflow)
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mmcls/apis/train.py b/mmcls/apis/train.py
--- a/mmcls/apis/train.py
+++ b/mmcls/apis/train.py
@@ -87,7 +87,7 @@
model = MMDataParallel(
model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)
elif device == 'cpu':
- model = MMDataParallel(model.cpu())
+ model = model.cpu()
else:
raise ValueError(F'unsupported device name {device}.')
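
As a side note, here is a minimal sketch of the device dispatch this change points to (assuming `mmcv`'s `MMDataParallel` API; the helper name below is illustrative and not part of mmcls): wrap the model with `MMDataParallel` only when CUDA is requested, and keep a plain CPU module otherwise so that nothing downstream assumes GPU ids exist.

```python
def wrap_model(model, device='cuda', gpu_ids=(0,)):
    """Return ``model`` prepared for the requested device (sketch only)."""
    if device == 'cuda':
        from mmcv.parallel import MMDataParallel
        return MMDataParallel(model.cuda(gpu_ids[0]), device_ids=list(gpu_ids))
    if device == 'cpu':
        # Mirror the patch above: no DataParallel wrapper on the CPU path.
        return model.cpu()
    raise ValueError('unsupported device name {}.'.format(device))
```
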
| {"golden_diff": "diff --git a/mmcls/apis/train.py b/mmcls/apis/train.py\n--- a/mmcls/apis/train.py\n+++ b/mmcls/apis/train.py\n@@ -87,7 +87,7 @@\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n elif device == 'cpu':\n- model = MMDataParallel(model.cpu())\n+ model = model.cpu()\n else:\n raise ValueError(F'unsupported device name {device}.')\n", "issue": "[Feature Request] CPU Testing\nSince CPU training is already supported in PR #219, what about also adding the feature of CPU testing. \r\n\r\nBesides, it seems there are still some problems with the CPU training feature @wangruohui : \r\nWhen we set `--device CPU`, the expected behavior is using CPU for training, no matter if there exist GPUs on this machine. However, mmcls will use GPU for training if it exists, even if we set `--device CPU`. \n", "before_files": [{"content": "import random\nimport warnings\n\nimport numpy as np\nimport torch\nfrom mmcv.parallel import MMDataParallel, MMDistributedDataParallel\nfrom mmcv.runner import DistSamplerSeedHook, build_optimizer, build_runner\n\nfrom mmcls.core import DistOptimizerHook\nfrom mmcls.datasets import build_dataloader, build_dataset\nfrom mmcls.utils import get_root_logger\n\n# TODO import eval hooks from mmcv and delete them from mmcls\ntry:\n from mmcv.runner.hooks import EvalHook, DistEvalHook\nexcept ImportError:\n warnings.warn('DeprecationWarning: EvalHook and DistEvalHook from mmcls '\n 'will be deprecated.'\n 'Please install mmcv through master branch.')\n from mmcls.core import EvalHook, DistEvalHook\n\n# TODO import optimizer hook from mmcv and delete them from mmcls\ntry:\n from mmcv.runner import Fp16OptimizerHook\nexcept ImportError:\n warnings.warn('DeprecationWarning: FP16OptimizerHook from mmcls will be '\n 'deprecated. 
Please install mmcv>=1.1.4.')\n from mmcls.core import Fp16OptimizerHook\n\n\ndef set_random_seed(seed, deterministic=False):\n \"\"\"Set random seed.\n\n Args:\n seed (int): Seed to be used.\n deterministic (bool): Whether to set the deterministic option for\n CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`\n to True and `torch.backends.cudnn.benchmark` to False.\n Default: False.\n \"\"\"\n random.seed(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n if deterministic:\n torch.backends.cudnn.deterministic = True\n torch.backends.cudnn.benchmark = False\n\n\ndef train_model(model,\n dataset,\n cfg,\n distributed=False,\n validate=False,\n timestamp=None,\n device='cuda',\n meta=None):\n logger = get_root_logger(cfg.log_level)\n\n # prepare data loaders\n dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]\n\n data_loaders = [\n build_dataloader(\n ds,\n cfg.data.samples_per_gpu,\n cfg.data.workers_per_gpu,\n # cfg.gpus will be ignored if distributed\n num_gpus=len(cfg.gpu_ids),\n dist=distributed,\n round_up=True,\n seed=cfg.seed) for ds in dataset\n ]\n\n # put model on gpus\n if distributed:\n find_unused_parameters = cfg.get('find_unused_parameters', False)\n # Sets the `find_unused_parameters` parameter in\n # torch.nn.parallel.DistributedDataParallel\n model = MMDistributedDataParallel(\n model.cuda(),\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n if device == 'cuda':\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n elif device == 'cpu':\n model = MMDataParallel(model.cpu())\n else:\n raise ValueError(F'unsupported device name {device}.')\n\n # build runner\n optimizer = build_optimizer(model, cfg.optimizer)\n\n if cfg.get('runner') is None:\n cfg.runner = {\n 'type': 'EpochBasedRunner',\n 'max_epochs': cfg.total_epochs\n }\n warnings.warn(\n 'config is now expected to have a `runner` section, '\n 'please set `runner` in your config.', UserWarning)\n\n runner = build_runner(\n cfg.runner,\n default_args=dict(\n model=model,\n batch_processor=None,\n optimizer=optimizer,\n work_dir=cfg.work_dir,\n logger=logger,\n meta=meta))\n\n # an ugly walkaround to make the .log and .log.json filenames the same\n runner.timestamp = timestamp\n\n # fp16 setting\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n optimizer_config = Fp16OptimizerHook(\n **cfg.optimizer_config, **fp16_cfg, distributed=distributed)\n elif distributed and 'type' not in cfg.optimizer_config:\n optimizer_config = DistOptimizerHook(**cfg.optimizer_config)\n else:\n optimizer_config = cfg.optimizer_config\n\n # register hooks\n runner.register_training_hooks(cfg.lr_config, optimizer_config,\n cfg.checkpoint_config, cfg.log_config,\n cfg.get('momentum_config', None))\n if distributed:\n runner.register_hook(DistSamplerSeedHook())\n\n # register eval hooks\n if validate:\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n val_dataloader = build_dataloader(\n val_dataset,\n samples_per_gpu=cfg.data.samples_per_gpu,\n workers_per_gpu=cfg.data.workers_per_gpu,\n dist=distributed,\n shuffle=False,\n round_up=True)\n eval_cfg = cfg.get('evaluation', {})\n eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner'\n eval_hook = DistEvalHook if distributed else EvalHook\n runner.register_hook(eval_hook(val_dataloader, **eval_cfg))\n\n if cfg.resume_from:\n runner.resume(cfg.resume_from)\n elif cfg.load_from:\n 
runner.load_checkpoint(cfg.load_from)\n runner.run(data_loaders, cfg.workflow)\n", "path": "mmcls/apis/train.py"}], "after_files": [{"content": "import random\nimport warnings\n\nimport numpy as np\nimport torch\nfrom mmcv.parallel import MMDataParallel, MMDistributedDataParallel\nfrom mmcv.runner import DistSamplerSeedHook, build_optimizer, build_runner\n\nfrom mmcls.core import DistOptimizerHook\nfrom mmcls.datasets import build_dataloader, build_dataset\nfrom mmcls.utils import get_root_logger\n\n# TODO import eval hooks from mmcv and delete them from mmcls\ntry:\n from mmcv.runner.hooks import EvalHook, DistEvalHook\nexcept ImportError:\n warnings.warn('DeprecationWarning: EvalHook and DistEvalHook from mmcls '\n 'will be deprecated.'\n 'Please install mmcv through master branch.')\n from mmcls.core import EvalHook, DistEvalHook\n\n# TODO import optimizer hook from mmcv and delete them from mmcls\ntry:\n from mmcv.runner import Fp16OptimizerHook\nexcept ImportError:\n warnings.warn('DeprecationWarning: FP16OptimizerHook from mmcls will be '\n 'deprecated. Please install mmcv>=1.1.4.')\n from mmcls.core import Fp16OptimizerHook\n\n\ndef set_random_seed(seed, deterministic=False):\n \"\"\"Set random seed.\n\n Args:\n seed (int): Seed to be used.\n deterministic (bool): Whether to set the deterministic option for\n CUDNN backend, i.e., set `torch.backends.cudnn.deterministic`\n to True and `torch.backends.cudnn.benchmark` to False.\n Default: False.\n \"\"\"\n random.seed(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n if deterministic:\n torch.backends.cudnn.deterministic = True\n torch.backends.cudnn.benchmark = False\n\n\ndef train_model(model,\n dataset,\n cfg,\n distributed=False,\n validate=False,\n timestamp=None,\n device='cuda',\n meta=None):\n logger = get_root_logger(cfg.log_level)\n\n # prepare data loaders\n dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]\n\n data_loaders = [\n build_dataloader(\n ds,\n cfg.data.samples_per_gpu,\n cfg.data.workers_per_gpu,\n # cfg.gpus will be ignored if distributed\n num_gpus=len(cfg.gpu_ids),\n dist=distributed,\n round_up=True,\n seed=cfg.seed) for ds in dataset\n ]\n\n # put model on gpus\n if distributed:\n find_unused_parameters = cfg.get('find_unused_parameters', False)\n # Sets the `find_unused_parameters` parameter in\n # torch.nn.parallel.DistributedDataParallel\n model = MMDistributedDataParallel(\n model.cuda(),\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n if device == 'cuda':\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n elif device == 'cpu':\n model = model.cpu()\n else:\n raise ValueError(F'unsupported device name {device}.')\n\n # build runner\n optimizer = build_optimizer(model, cfg.optimizer)\n\n if cfg.get('runner') is None:\n cfg.runner = {\n 'type': 'EpochBasedRunner',\n 'max_epochs': cfg.total_epochs\n }\n warnings.warn(\n 'config is now expected to have a `runner` section, '\n 'please set `runner` in your config.', UserWarning)\n\n runner = build_runner(\n cfg.runner,\n default_args=dict(\n model=model,\n batch_processor=None,\n optimizer=optimizer,\n work_dir=cfg.work_dir,\n logger=logger,\n meta=meta))\n\n # an ugly walkaround to make the .log and .log.json filenames the same\n runner.timestamp = timestamp\n\n # fp16 setting\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n optimizer_config = Fp16OptimizerHook(\n 
**cfg.optimizer_config, **fp16_cfg, distributed=distributed)\n elif distributed and 'type' not in cfg.optimizer_config:\n optimizer_config = DistOptimizerHook(**cfg.optimizer_config)\n else:\n optimizer_config = cfg.optimizer_config\n\n # register hooks\n runner.register_training_hooks(cfg.lr_config, optimizer_config,\n cfg.checkpoint_config, cfg.log_config,\n cfg.get('momentum_config', None))\n if distributed:\n runner.register_hook(DistSamplerSeedHook())\n\n # register eval hooks\n if validate:\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n val_dataloader = build_dataloader(\n val_dataset,\n samples_per_gpu=cfg.data.samples_per_gpu,\n workers_per_gpu=cfg.data.workers_per_gpu,\n dist=distributed,\n shuffle=False,\n round_up=True)\n eval_cfg = cfg.get('evaluation', {})\n eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner'\n eval_hook = DistEvalHook if distributed else EvalHook\n runner.register_hook(eval_hook(val_dataloader, **eval_cfg))\n\n if cfg.resume_from:\n runner.resume(cfg.resume_from)\n elif cfg.load_from:\n runner.load_checkpoint(cfg.load_from)\n runner.run(data_loaders, cfg.workflow)\n", "path": "mmcls/apis/train.py"}]} | 1,869 | 106 |
gh_patches_debug_307 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-1724 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
botocore gets monkey patched before gevent when using pynamoDB
In 0.43.0, [SSL libs are patched on import](https://github.com/DataDog/dd-trace-py/pull/1629) to allow `ddtrace-run` and `gevent` to exist in harmony.
`pynamodb` imports `botocore`, and PynamoDB is patched by default. The result is that `ddtrace-run` ends up monkey-patching `botocore` before `gevent` does.
I believe PynamoDB should be listed in the SSL libs that only get patched on import.
### Which version of dd-trace-py are you using?
0.43.0
### Which version of the libraries are you using?
ddtrace==0.43.0
gevent==20.9.0
greenlet==0.4.17
gunicorn==20.0.4
pynamodb==4.3.3
### How can we reproduce your problem?
1. Create a new virtualenv
```
$ mkdir temp
$ cd temp
$ virtualenv .
$ . ./bin/activate
```
2. Install libs
```
pip install ddtrace gunicorn[gevent] pynamodb
```
3. Create a minimal `app.py`:
```
import time
while True:
time.sleep(1)
```
4. Run the failing command:
```
ddtrace-run gunicorn -k gevent app
```
The following warning is displayed, which will turn into an SSL recursion error if you try to use urllib3.
```
$ ddtrace-run gunicorn -k gevent app
- DATADOG TRACER DIAGNOSTIC - Agent not reachable. Exception raised: [Errno 61] Connection refused
- DATADOG TRACER DIAGNOSTIC - Agent not reachable. Exception raised: [Errno 61] Connection refused
[2020-10-12 16:46:09 +1100] [69996] [INFO] Starting gunicorn 20.0.4
[2020-10-12 16:46:09 +1100] [69996] [INFO] Listening at: http://127.0.0.1:8000 (69996)
[2020-10-12 16:46:09 +1100] [69996] [INFO] Using worker: gevent
[2020-10-12 16:46:09 +1100] [70004] [INFO] Booting worker with pid: 70004
/private/tmp/venv/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:53: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['botocore.httpsession (/private/tmp/venv/lib/python3.7/site-packages/botocore/httpsession.py)', 'urllib3.util.ssl_ (/private/tmp/venv/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/private/tmp/venv/lib/python3.7/site-packages/urllib3/util/__init__.py)'].
monkey.patch_all()
```
Disable PynamoDB tracing to work around the issue:
```
DD_TRACE_PYNAMODB_ENABLED=False ddtrace-run gunicorn -k gevent app
```
This gives the following output:
```
$ DD_TRACE_PYNAMODB_ENABLED=False ddtrace-run gunicorn -k gevent app
- DATADOG TRACER DIAGNOSTIC - Agent not reachable. Exception raised: [Errno 61] Connection refused
- DATADOG TRACER DIAGNOSTIC - Agent not reachable. Exception raised: [Errno 61] Connection refused
[2020-10-12 16:48:11 +1100] [70038] [INFO] Starting gunicorn 20.0.4
[2020-10-12 16:48:11 +1100] [70038] [INFO] Listening at: http://127.0.0.1:8000 (70038)
[2020-10-12 16:48:11 +1100] [70038] [INFO] Using worker: gevent
[2020-10-12 16:48:11 +1100] [70046] [INFO] Booting worker with pid: 70046
```
--- END ISSUE ---
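
For reference, here is a minimal sketch of how a `DD_TRACE_<MODULE>_ENABLED` override such as the workaround above is resolved, mirroring the environment-variable loop in `ddtrace/monkey.py` shown below; the helper function and its truthy-string handling are rough stand-ins, not ddtrace's actual `formats.asbool`:

```python
import os


def integration_enabled(module, default=True):
    env_var = 'DD_TRACE_%s_ENABLED' % module.upper()
    if env_var not in os.environ:
        return default
    # Rough stand-in for ddtrace's formats.asbool().
    return os.environ[env_var].strip().lower() in ('1', 'true', 'yes', 'on')


print(integration_enabled('pynamodb'))                # True by default
os.environ['DD_TRACE_PYNAMODB_ENABLED'] = 'False'
print(integration_enabled('pynamodb'))                # False -> left unpatched
```
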
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/monkey.py`
Content:
```
1 """Patch libraries to be automatically instrumented.
2
3 It can monkey patch supported standard libraries and third party modules.
4 A patched module will automatically report spans with its default configuration.
5
6 A library instrumentation can be configured (for instance, to report as another service)
7 using Pin. For that, check its documentation.
8 """
9 import importlib
10 import os
11 import sys
12 import threading
13
14 from ddtrace.vendor.wrapt.importer import when_imported
15
16 from .internal.logger import get_logger
17 from .settings import config
18 from .utils import formats
19
20
21 log = get_logger(__name__)
22
23 # Default set of modules to automatically patch or not
24 PATCH_MODULES = {
25 "asyncio": True,
26 "boto": True,
27 "botocore": True,
28 "bottle": False,
29 "cassandra": True,
30 "celery": True,
31 "consul": True,
32 "django": True,
33 "elasticsearch": True,
34 "algoliasearch": True,
35 "futures": False, # experimental propagation
36 "grpc": True,
37 "mongoengine": True,
38 "mysql": True,
39 "mysqldb": True,
40 "pymysql": True,
41 "psycopg": True,
42 "pylibmc": True,
43 "pymemcache": True,
44 "pymongo": True,
45 "redis": True,
46 "rediscluster": True,
47 "requests": True,
48 "sanic": True,
49 "sqlalchemy": False, # Prefer DB client instrumentation
50 "sqlite3": True,
51 "aiohttp": True, # requires asyncio (Python 3.4+)
52 "aiopg": True,
53 "aiobotocore": False,
54 "httplib": False,
55 "vertica": True,
56 "molten": True,
57 "jinja2": True,
58 "mako": True,
59 "flask": True,
60 "kombu": False,
61 "starlette": True,
62 # Ignore some web framework integrations that might be configured explicitly in code
63 "falcon": False,
64 "pylons": False,
65 "pyramid": False,
66 # Auto-enable logging if the environment variable DD_LOGS_INJECTION is true
67 "logging": config.logs_injection,
68 "pynamodb": True,
69 }
70
71 _LOCK = threading.Lock()
72 _PATCHED_MODULES = set()
73
74 # Modules which are patched on first use
75 # DEV: These modules are patched when the user first imports them, rather than
76 # explicitly importing and patching them on application startup `ddtrace.patch_all(module=True)`
77 # DEV: This ensures we do not patch a module until it is needed
78 # DEV: <contrib name> => <list of module names that trigger a patch>
79 _PATCH_ON_IMPORT = {
80 "aiohttp": ("aiohttp",),
81 "aiobotocore": ("aiobotocore",),
82 "celery": ("celery",),
83 "flask": ("flask, "),
84 "gevent": ("gevent",),
85 "requests": ("requests",),
86 "botocore": ("botocore",),
87 "elasticsearch": ("elasticsearch",),
88 }
89
90
91 class PatchException(Exception):
92 """Wraps regular `Exception` class when patching modules"""
93
94 pass
95
96
97 class ModuleNotFoundException(PatchException):
98 pass
99
100
101 def _on_import_factory(module, raise_errors=True):
102 """Factory to create an import hook for the provided module name"""
103
104 def on_import(hook):
105 # Import and patch module
106 path = "ddtrace.contrib.%s" % module
107 imported_module = importlib.import_module(path)
108 imported_module.patch()
109
110 return on_import
111
112
113 def patch_all(**patch_modules):
114 """Automatically patches all available modules.
115
116 In addition to ``patch_modules``, an override can be specified via an
117 environment variable, ``DD_TRACE_<module>_ENABLED`` for each module.
118
119 ``patch_modules`` have the highest precedence for overriding.
120
121 :param dict patch_modules: Override whether particular modules are patched or not.
122
123 >>> patch_all(redis=False, cassandra=False)
124 """
125 modules = PATCH_MODULES.copy()
126
127 # The enabled setting can be overridden by environment variables
128 for module, enabled in modules.items():
129 env_var = "DD_TRACE_%s_ENABLED" % module.upper()
130 if env_var not in os.environ:
131 continue
132
133 override_enabled = formats.asbool(os.environ[env_var])
134 modules[module] = override_enabled
135
136 # Arguments take precedence over the environment and the defaults.
137 modules.update(patch_modules)
138
139 patch(raise_errors=False, **modules)
140
141
142 def patch(raise_errors=True, **patch_modules):
143 """Patch only a set of given modules.
144
145 :param bool raise_errors: Raise error if one patch fail.
146 :param dict patch_modules: List of modules to patch.
147
148 >>> patch(psycopg=True, elasticsearch=True)
149 """
150 modules = [m for (m, should_patch) in patch_modules.items() if should_patch]
151 for module in modules:
152 if module in _PATCH_ON_IMPORT:
153 # If the module has already been imported then patch immediately
154 if module in sys.modules:
155 patch_module(module, raise_errors=raise_errors)
156
157 # Otherwise, add a hook to patch when it is imported for the first time
158 else:
159 # Use factory to create handler to close over `module` and `raise_errors` values from this loop
160 when_imported(module)(_on_import_factory(module, raise_errors))
161
162 # manually add module to patched modules
163 with _LOCK:
164 _PATCHED_MODULES.add(module)
165 else:
166 patch_module(module, raise_errors=raise_errors)
167
168 patched_modules = get_patched_modules()
169 log.info(
170 "patched %s/%s modules (%s)",
171 len(patched_modules),
172 len(modules),
173 ",".join(patched_modules),
174 )
175
176
177 def patch_module(module, raise_errors=True):
178 """Patch a single module
179
180 Returns if the module got properly patched.
181 """
182 try:
183 return _patch_module(module)
184 except ModuleNotFoundException:
185 if raise_errors:
186 raise
187 return False
188 except Exception:
189 if raise_errors:
190 raise
191 log.debug("failed to patch %s", module, exc_info=True)
192 return False
193
194
195 def get_patched_modules():
196 """Get the list of patched modules"""
197 with _LOCK:
198 return sorted(_PATCHED_MODULES)
199
200
201 def _patch_module(module):
202 """_patch_module will attempt to monkey patch the module.
203
204 Returns if the module got patched.
205 Can also raise errors if it fails.
206 """
207 path = "ddtrace.contrib.%s" % module
208 with _LOCK:
209 if module in _PATCHED_MODULES and module not in _PATCH_ON_IMPORT:
210 log.debug("already patched: %s", path)
211 return False
212
213 try:
214 imported_module = importlib.import_module(path)
215 except ImportError:
216 # if the import fails, the integration is not available
217 raise PatchException("integration '%s' not available" % path)
218 else:
219 # if patch() is not available in the module, it means
220 # that the library is not installed in the environment
221 if not hasattr(imported_module, "patch"):
222 raise ModuleNotFoundException("module '%s' not installed" % module)
223
224 imported_module.patch()
225 _PATCHED_MODULES.add(module)
226 return True
227
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/monkey.py b/ddtrace/monkey.py
--- a/ddtrace/monkey.py
+++ b/ddtrace/monkey.py
@@ -85,6 +85,7 @@
"requests": ("requests",),
"botocore": ("botocore",),
"elasticsearch": ("elasticsearch",),
+ "pynamodb": ("pynamodb",),
}
| {"golden_diff": "diff --git a/ddtrace/monkey.py b/ddtrace/monkey.py\n--- a/ddtrace/monkey.py\n+++ b/ddtrace/monkey.py\n@@ -85,6 +85,7 @@\n \"requests\": (\"requests\",),\n \"botocore\": (\"botocore\",),\n \"elasticsearch\": (\"elasticsearch\",),\n+ \"pynamodb\": (\"pynamodb\",),\n }\n", "issue": "botocore gets monkey patched before gevent when using pynamoDB\nIn [0.43.0 ssl libs are patched on import](https://github.com/DataDog/dd-trace-py/pull/1629) to allow `ddtrace-run` and `gevent` to exist in harmony.\r\n\r\n`pynamodb` imports `botocore` and PynamoDB is patched by default. The result of this is that `ddtrace-run` ends up monkey patching `botocore` before `gevent` does.\r\n\r\nI believe PynamoDB should be listed in the SSL libs that only get patched on import.\r\n\r\n### Which version of dd-trace-py are you using?\r\n0.43.0\r\n\r\n### Which version of the libraries are you using?\r\n\r\nddtrace==0.43.0\r\ngevent==20.9.0\r\ngreenlet==0.4.17\r\ngunicorn==20.0.4\r\npynamodb==4.3.3\r\n\r\n### How can we reproduce your problem?\r\n\r\n1. Create new virtualenv\r\n```\r\n$ mkdir temp\r\n$ cd temp\r\n$ virtualenv .\r\n$ . ./bin/active\r\n```\r\n\r\n2. Install libs\r\n```\r\npip install ddtrace gunicorn[gevent] pynamodb\r\n```\r\n\r\n3. Create empty `app.py`\r\n```\r\nimport time\r\nwhile True:\r\n time.sleep(1)\r\n```\r\n\r\nRun the failing command\r\n```\r\n ddtrace-run gunicorn -k gevent app\r\n```\r\n\r\nThe following warning is displayed, which will turn into a SSL recursion error if you try and use urllib3.\r\n\r\n```\r\n$ ddtrace-run gunicorn -k gevent app\r\n- DATADOG TRACER DIAGNOSTIC - Agent not reachable. Exception raised: [Errno 61] Connection refused\r\n- DATADOG TRACER DIAGNOSTIC - Agent not reachable. Exception raised: [Errno 61] Connection refused\r\n[2020-10-12 16:46:09 +1100] [69996] [INFO] Starting gunicorn 20.0.4\r\n[2020-10-12 16:46:09 +1100] [69996] [INFO] Listening at: http://127.0.0.1:8000 (69996)\r\n[2020-10-12 16:46:09 +1100] [69996] [INFO] Using worker: gevent\r\n[2020-10-12 16:46:09 +1100] [70004] [INFO] Booting worker with pid: 70004\r\n/private/tmp/venv/lib/python3.7/site-packages/gunicorn/workers/ggevent.py:53: MonkeyPatchWarning: Monkey-patching ssl after ssl has already been imported may lead to errors, including RecursionError on Python 3.6. It may also silently lead to incorrect behaviour on Python 3.7. Please monkey-patch earlier. See https://github.com/gevent/gevent/issues/1016. Modules that had direct imports (NOT patched): ['botocore.httpsession (/private/tmp/venv/lib/python3.7/site-packages/botocore/httpsession.py)', 'urllib3.util.ssl_ (/private/tmp/venv/lib/python3.7/site-packages/urllib3/util/ssl_.py)', 'urllib3.util (/private/tmp/venv/lib/python3.7/site-packages/urllib3/util/__init__.py)'].\r\n monkey.patch_all()\r\n```\r\n\r\nDisable pynamodb tracing to fix\r\n```\r\nDD_TRACE_PYNAMODB_ENABLED=False ddtrace-run gunicorn -k gevent app\r\n```\r\n\r\nWhich gives the following output\r\n```\r\n$ DD_TRACE_PYNAMODB_ENABLED=False ddtrace-run gunicorn -k gevent app\r\n- DATADOG TRACER DIAGNOSTIC - Agent not reachable. Exception raised: [Errno 61] Connection refused\r\n- DATADOG TRACER DIAGNOSTIC - Agent not reachable. 
Exception raised: [Errno 61] Connection refused\r\n[2020-10-12 16:48:11 +1100] [70038] [INFO] Starting gunicorn 20.0.4\r\n[2020-10-12 16:48:11 +1100] [70038] [INFO] Listening at: http://127.0.0.1:8000 (70038)\r\n[2020-10-12 16:48:11 +1100] [70038] [INFO] Using worker: gevent\r\n[2020-10-12 16:48:11 +1100] [70046] [INFO] Booting worker with pid: 70046\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Patch libraries to be automatically instrumented.\n\nIt can monkey patch supported standard libraries and third party modules.\nA patched module will automatically report spans with its default configuration.\n\nA library instrumentation can be configured (for instance, to report as another service)\nusing Pin. For that, check its documentation.\n\"\"\"\nimport importlib\nimport os\nimport sys\nimport threading\n\nfrom ddtrace.vendor.wrapt.importer import when_imported\n\nfrom .internal.logger import get_logger\nfrom .settings import config\nfrom .utils import formats\n\n\nlog = get_logger(__name__)\n\n# Default set of modules to automatically patch or not\nPATCH_MODULES = {\n \"asyncio\": True,\n \"boto\": True,\n \"botocore\": True,\n \"bottle\": False,\n \"cassandra\": True,\n \"celery\": True,\n \"consul\": True,\n \"django\": True,\n \"elasticsearch\": True,\n \"algoliasearch\": True,\n \"futures\": False, # experimental propagation\n \"grpc\": True,\n \"mongoengine\": True,\n \"mysql\": True,\n \"mysqldb\": True,\n \"pymysql\": True,\n \"psycopg\": True,\n \"pylibmc\": True,\n \"pymemcache\": True,\n \"pymongo\": True,\n \"redis\": True,\n \"rediscluster\": True,\n \"requests\": True,\n \"sanic\": True,\n \"sqlalchemy\": False, # Prefer DB client instrumentation\n \"sqlite3\": True,\n \"aiohttp\": True, # requires asyncio (Python 3.4+)\n \"aiopg\": True,\n \"aiobotocore\": False,\n \"httplib\": False,\n \"vertica\": True,\n \"molten\": True,\n \"jinja2\": True,\n \"mako\": True,\n \"flask\": True,\n \"kombu\": False,\n \"starlette\": True,\n # Ignore some web framework integrations that might be configured explicitly in code\n \"falcon\": False,\n \"pylons\": False,\n \"pyramid\": False,\n # Auto-enable logging if the environment variable DD_LOGS_INJECTION is true\n \"logging\": config.logs_injection,\n \"pynamodb\": True,\n}\n\n_LOCK = threading.Lock()\n_PATCHED_MODULES = set()\n\n# Modules which are patched on first use\n# DEV: These modules are patched when the user first imports them, rather than\n# explicitly importing and patching them on application startup `ddtrace.patch_all(module=True)`\n# DEV: This ensures we do not patch a module until it is needed\n# DEV: <contrib name> => <list of module names that trigger a patch>\n_PATCH_ON_IMPORT = {\n \"aiohttp\": (\"aiohttp\",),\n \"aiobotocore\": (\"aiobotocore\",),\n \"celery\": (\"celery\",),\n \"flask\": (\"flask, \"),\n \"gevent\": (\"gevent\",),\n \"requests\": (\"requests\",),\n \"botocore\": (\"botocore\",),\n \"elasticsearch\": (\"elasticsearch\",),\n}\n\n\nclass PatchException(Exception):\n \"\"\"Wraps regular `Exception` class when patching modules\"\"\"\n\n pass\n\n\nclass ModuleNotFoundException(PatchException):\n pass\n\n\ndef _on_import_factory(module, raise_errors=True):\n \"\"\"Factory to create an import hook for the provided module name\"\"\"\n\n def on_import(hook):\n # Import and patch module\n path = \"ddtrace.contrib.%s\" % module\n imported_module = importlib.import_module(path)\n imported_module.patch()\n\n return on_import\n\n\ndef patch_all(**patch_modules):\n \"\"\"Automatically patches all available 
modules.\n\n In addition to ``patch_modules``, an override can be specified via an\n environment variable, ``DD_TRACE_<module>_ENABLED`` for each module.\n\n ``patch_modules`` have the highest precedence for overriding.\n\n :param dict patch_modules: Override whether particular modules are patched or not.\n\n >>> patch_all(redis=False, cassandra=False)\n \"\"\"\n modules = PATCH_MODULES.copy()\n\n # The enabled setting can be overridden by environment variables\n for module, enabled in modules.items():\n env_var = \"DD_TRACE_%s_ENABLED\" % module.upper()\n if env_var not in os.environ:\n continue\n\n override_enabled = formats.asbool(os.environ[env_var])\n modules[module] = override_enabled\n\n # Arguments take precedence over the environment and the defaults.\n modules.update(patch_modules)\n\n patch(raise_errors=False, **modules)\n\n\ndef patch(raise_errors=True, **patch_modules):\n \"\"\"Patch only a set of given modules.\n\n :param bool raise_errors: Raise error if one patch fail.\n :param dict patch_modules: List of modules to patch.\n\n >>> patch(psycopg=True, elasticsearch=True)\n \"\"\"\n modules = [m for (m, should_patch) in patch_modules.items() if should_patch]\n for module in modules:\n if module in _PATCH_ON_IMPORT:\n # If the module has already been imported then patch immediately\n if module in sys.modules:\n patch_module(module, raise_errors=raise_errors)\n\n # Otherwise, add a hook to patch when it is imported for the first time\n else:\n # Use factory to create handler to close over `module` and `raise_errors` values from this loop\n when_imported(module)(_on_import_factory(module, raise_errors))\n\n # manually add module to patched modules\n with _LOCK:\n _PATCHED_MODULES.add(module)\n else:\n patch_module(module, raise_errors=raise_errors)\n\n patched_modules = get_patched_modules()\n log.info(\n \"patched %s/%s modules (%s)\",\n len(patched_modules),\n len(modules),\n \",\".join(patched_modules),\n )\n\n\ndef patch_module(module, raise_errors=True):\n \"\"\"Patch a single module\n\n Returns if the module got properly patched.\n \"\"\"\n try:\n return _patch_module(module)\n except ModuleNotFoundException:\n if raise_errors:\n raise\n return False\n except Exception:\n if raise_errors:\n raise\n log.debug(\"failed to patch %s\", module, exc_info=True)\n return False\n\n\ndef get_patched_modules():\n \"\"\"Get the list of patched modules\"\"\"\n with _LOCK:\n return sorted(_PATCHED_MODULES)\n\n\ndef _patch_module(module):\n \"\"\"_patch_module will attempt to monkey patch the module.\n\n Returns if the module got patched.\n Can also raise errors if it fails.\n \"\"\"\n path = \"ddtrace.contrib.%s\" % module\n with _LOCK:\n if module in _PATCHED_MODULES and module not in _PATCH_ON_IMPORT:\n log.debug(\"already patched: %s\", path)\n return False\n\n try:\n imported_module = importlib.import_module(path)\n except ImportError:\n # if the import fails, the integration is not available\n raise PatchException(\"integration '%s' not available\" % path)\n else:\n # if patch() is not available in the module, it means\n # that the library is not installed in the environment\n if not hasattr(imported_module, \"patch\"):\n raise ModuleNotFoundException(\"module '%s' not installed\" % module)\n\n imported_module.patch()\n _PATCHED_MODULES.add(module)\n return True\n", "path": "ddtrace/monkey.py"}], "after_files": [{"content": "\"\"\"Patch libraries to be automatically instrumented.\n\nIt can monkey patch supported standard libraries and third party modules.\nA patched module will 
automatically report spans with its default configuration.\n\nA library instrumentation can be configured (for instance, to report as another service)\nusing Pin. For that, check its documentation.\n\"\"\"\nimport importlib\nimport os\nimport sys\nimport threading\n\nfrom ddtrace.vendor.wrapt.importer import when_imported\n\nfrom .internal.logger import get_logger\nfrom .settings import config\nfrom .utils import formats\n\n\nlog = get_logger(__name__)\n\n# Default set of modules to automatically patch or not\nPATCH_MODULES = {\n \"asyncio\": True,\n \"boto\": True,\n \"botocore\": True,\n \"bottle\": False,\n \"cassandra\": True,\n \"celery\": True,\n \"consul\": True,\n \"django\": True,\n \"elasticsearch\": True,\n \"algoliasearch\": True,\n \"futures\": False, # experimental propagation\n \"grpc\": True,\n \"mongoengine\": True,\n \"mysql\": True,\n \"mysqldb\": True,\n \"pymysql\": True,\n \"psycopg\": True,\n \"pylibmc\": True,\n \"pymemcache\": True,\n \"pymongo\": True,\n \"redis\": True,\n \"rediscluster\": True,\n \"requests\": True,\n \"sanic\": True,\n \"sqlalchemy\": False, # Prefer DB client instrumentation\n \"sqlite3\": True,\n \"aiohttp\": True, # requires asyncio (Python 3.4+)\n \"aiopg\": True,\n \"aiobotocore\": False,\n \"httplib\": False,\n \"vertica\": True,\n \"molten\": True,\n \"jinja2\": True,\n \"mako\": True,\n \"flask\": True,\n \"kombu\": False,\n \"starlette\": True,\n # Ignore some web framework integrations that might be configured explicitly in code\n \"falcon\": False,\n \"pylons\": False,\n \"pyramid\": False,\n # Auto-enable logging if the environment variable DD_LOGS_INJECTION is true\n \"logging\": config.logs_injection,\n \"pynamodb\": True,\n}\n\n_LOCK = threading.Lock()\n_PATCHED_MODULES = set()\n\n# Modules which are patched on first use\n# DEV: These modules are patched when the user first imports them, rather than\n# explicitly importing and patching them on application startup `ddtrace.patch_all(module=True)`\n# DEV: This ensures we do not patch a module until it is needed\n# DEV: <contrib name> => <list of module names that trigger a patch>\n_PATCH_ON_IMPORT = {\n \"aiohttp\": (\"aiohttp\",),\n \"aiobotocore\": (\"aiobotocore\",),\n \"celery\": (\"celery\",),\n \"flask\": (\"flask, \"),\n \"gevent\": (\"gevent\",),\n \"requests\": (\"requests\",),\n \"botocore\": (\"botocore\",),\n \"elasticsearch\": (\"elasticsearch\",),\n \"pynamodb\": (\"pynamodb\",),\n}\n\n\nclass PatchException(Exception):\n \"\"\"Wraps regular `Exception` class when patching modules\"\"\"\n\n pass\n\n\nclass ModuleNotFoundException(PatchException):\n pass\n\n\ndef _on_import_factory(module, raise_errors=True):\n \"\"\"Factory to create an import hook for the provided module name\"\"\"\n\n def on_import(hook):\n # Import and patch module\n path = \"ddtrace.contrib.%s\" % module\n imported_module = importlib.import_module(path)\n imported_module.patch()\n\n return on_import\n\n\ndef patch_all(**patch_modules):\n \"\"\"Automatically patches all available modules.\n\n In addition to ``patch_modules``, an override can be specified via an\n environment variable, ``DD_TRACE_<module>_ENABLED`` for each module.\n\n ``patch_modules`` have the highest precedence for overriding.\n\n :param dict patch_modules: Override whether particular modules are patched or not.\n\n >>> patch_all(redis=False, cassandra=False)\n \"\"\"\n modules = PATCH_MODULES.copy()\n\n # The enabled setting can be overridden by environment variables\n for module, enabled in modules.items():\n env_var = 
\"DD_TRACE_%s_ENABLED\" % module.upper()\n if env_var not in os.environ:\n continue\n\n override_enabled = formats.asbool(os.environ[env_var])\n modules[module] = override_enabled\n\n # Arguments take precedence over the environment and the defaults.\n modules.update(patch_modules)\n\n patch(raise_errors=False, **modules)\n\n\ndef patch(raise_errors=True, **patch_modules):\n \"\"\"Patch only a set of given modules.\n\n :param bool raise_errors: Raise error if one patch fail.\n :param dict patch_modules: List of modules to patch.\n\n >>> patch(psycopg=True, elasticsearch=True)\n \"\"\"\n modules = [m for (m, should_patch) in patch_modules.items() if should_patch]\n for module in modules:\n if module in _PATCH_ON_IMPORT:\n # If the module has already been imported then patch immediately\n if module in sys.modules:\n patch_module(module, raise_errors=raise_errors)\n\n # Otherwise, add a hook to patch when it is imported for the first time\n else:\n # Use factory to create handler to close over `module` and `raise_errors` values from this loop\n when_imported(module)(_on_import_factory(module, raise_errors))\n\n # manually add module to patched modules\n with _LOCK:\n _PATCHED_MODULES.add(module)\n else:\n patch_module(module, raise_errors=raise_errors)\n\n patched_modules = get_patched_modules()\n log.info(\n \"patched %s/%s modules (%s)\",\n len(patched_modules),\n len(modules),\n \",\".join(patched_modules),\n )\n\n\ndef patch_module(module, raise_errors=True):\n \"\"\"Patch a single module\n\n Returns if the module got properly patched.\n \"\"\"\n try:\n return _patch_module(module)\n except ModuleNotFoundException:\n if raise_errors:\n raise\n return False\n except Exception:\n if raise_errors:\n raise\n log.debug(\"failed to patch %s\", module, exc_info=True)\n return False\n\n\ndef get_patched_modules():\n \"\"\"Get the list of patched modules\"\"\"\n with _LOCK:\n return sorted(_PATCHED_MODULES)\n\n\ndef _patch_module(module):\n \"\"\"_patch_module will attempt to monkey patch the module.\n\n Returns if the module got patched.\n Can also raise errors if it fails.\n \"\"\"\n path = \"ddtrace.contrib.%s\" % module\n with _LOCK:\n if module in _PATCHED_MODULES and module not in _PATCH_ON_IMPORT:\n log.debug(\"already patched: %s\", path)\n return False\n\n try:\n imported_module = importlib.import_module(path)\n except ImportError:\n # if the import fails, the integration is not available\n raise PatchException(\"integration '%s' not available\" % path)\n else:\n # if patch() is not available in the module, it means\n # that the library is not installed in the environment\n if not hasattr(imported_module, \"patch\"):\n raise ModuleNotFoundException(\"module '%s' not installed\" % module)\n\n imported_module.patch()\n _PATCHED_MODULES.add(module)\n return True\n", "path": "ddtrace/monkey.py"}]} | 3,567 | 86 |
gh_patches_debug_5537 | rasdani/github-patches | git_diff | nextcloud__appstore-619 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Verify email addresses after E-Mail change
When a user changes their email address, it should be verified. allauth provides some views for that which may or may not be useful. Unsure whether email addresses currently are verified at signup, but it would be appropriate for it to use the same mechanism.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nextcloudappstore/user/views.py`
Content:
```
1 from allauth.account.models import EmailAddress
2 from allauth.account.views import PasswordChangeView
3 from django.contrib import messages
4 from django.contrib.auth.mixins import LoginRequiredMixin
5 from django.urls import reverse_lazy
6 from django.shortcuts import redirect, render, get_object_or_404
7 from django.urls import reverse
8 from django.views.generic import TemplateView
9 from django.views.generic import UpdateView
10
11 from nextcloudappstore.core.models import App
12 from nextcloudappstore.user.forms import DeleteAccountForm, AccountForm
13
14
15 class TransferAppsView(LoginRequiredMixin, TemplateView):
16 template_name = 'user/transfer-apps.html'
17
18 def post(self, request, pk):
19 app = get_object_or_404(App, pk=pk, owner=self.request.user)
20 app.ownership_transfer_enabled = not app.ownership_transfer_enabled
21 app.save()
22 return redirect(reverse('user:account-transfer-apps'))
23
24 def get_context_data(self, **kwargs):
25 context = super().get_context_data(**kwargs)
26 context['apps'] = App.objects.filter(owner=self.request.user)
27 context['acc_page'] = 'account-transfer-apps'
28 return context
29
30
31 class ChangeLanguageView(LoginRequiredMixin, TemplateView):
32 template_name = 'user/set-language.html'
33
34 def get_context_data(self, **kwargs):
35 context = super().get_context_data(**kwargs)
36 context['acc_page'] = 'account-change-language'
37 return context
38
39
40 class DeleteAccountView(LoginRequiredMixin, TemplateView):
41 template_name = 'user/delete-account.html'
42
43 def get_context_data(self, **kwargs):
44 context = super().get_context_data(**kwargs)
45 context['form'] = DeleteAccountForm()
46 context['acc_page'] = 'delete-account'
47 return context
48
49 def post(self, request, *args, **kwargs):
50 form = DeleteAccountForm(request.POST, user=request.user)
51 if form.is_valid():
52 request.user.delete()
53 return redirect(reverse_lazy('home'))
54 else:
55 return render(request, self.template_name, {'form': form})
56
57
58 class AccountView(LoginRequiredMixin, UpdateView):
59 """Display and allow changing of the user's name."""
60
61 template_name = 'user/account.html'
62 template_name_suffix = ''
63 form_class = AccountForm
64 success_url = reverse_lazy('user:account')
65
66 def get_context_data(self, **kwargs):
67 context = super().get_context_data(**kwargs)
68 context['acc_page'] = 'account'
69 return context
70
71 def form_valid(self, form):
72 email = EmailAddress.objects.get_primary(user=self.request.user)
73 email.email = form.cleaned_data['email']
74 email.save()
75 messages.success(self.request, 'Account details saved.')
76 return super().form_valid(form)
77
78 def get_object(self, queryset=None):
79 return self.request.user
80
81
82 class PasswordView(LoginRequiredMixin, PasswordChangeView):
83 """Allow the user to change their password."""
84
85 template_name = 'user/password.html'
86 success_url = reverse_lazy('user:account-password')
87
88 def get_context_data(self, **kwargs):
89 context = super().get_context_data(**kwargs)
90 context['acc_page'] = 'password'
91 return context
92
93
94 class APITokenView(LoginRequiredMixin, TemplateView):
95 """Display the user's API token, and allow it to be regenerated."""
96
97 template_name = 'user/api-token.html'
98
99 def get_context_data(self, **kwargs):
100 context = super().get_context_data(**kwargs)
101 context['acc_page'] = 'api-token'
102 return context
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nextcloudappstore/user/views.py b/nextcloudappstore/user/views.py
--- a/nextcloudappstore/user/views.py
+++ b/nextcloudappstore/user/views.py
@@ -70,8 +70,7 @@
def form_valid(self, form):
email = EmailAddress.objects.get_primary(user=self.request.user)
- email.email = form.cleaned_data['email']
- email.save()
+ email.change(None, form.cleaned_data['email'])
messages.success(self.request, 'Account details saved.')
return super().form_valid(form)
| {"golden_diff": "diff --git a/nextcloudappstore/user/views.py b/nextcloudappstore/user/views.py\n--- a/nextcloudappstore/user/views.py\n+++ b/nextcloudappstore/user/views.py\n@@ -70,8 +70,7 @@\n \n def form_valid(self, form):\n email = EmailAddress.objects.get_primary(user=self.request.user)\n- email.email = form.cleaned_data['email']\n- email.save()\n+ email.change(None, form.cleaned_data['email'])\n messages.success(self.request, 'Account details saved.')\n return super().form_valid(form)\n", "issue": "Verify email addresses after E-Mail change\nWhen a user changes their email address, it should be verified. allauth provides some views for that which may or may not be useful. Unsure whether email addresses currently are verified at signup, but it would be appropriate for it to use the same mechanism.\n\n", "before_files": [{"content": "from allauth.account.models import EmailAddress\nfrom allauth.account.views import PasswordChangeView\nfrom django.contrib import messages\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.urls import reverse_lazy\nfrom django.shortcuts import redirect, render, get_object_or_404\nfrom django.urls import reverse\nfrom django.views.generic import TemplateView\nfrom django.views.generic import UpdateView\n\nfrom nextcloudappstore.core.models import App\nfrom nextcloudappstore.user.forms import DeleteAccountForm, AccountForm\n\n\nclass TransferAppsView(LoginRequiredMixin, TemplateView):\n template_name = 'user/transfer-apps.html'\n\n def post(self, request, pk):\n app = get_object_or_404(App, pk=pk, owner=self.request.user)\n app.ownership_transfer_enabled = not app.ownership_transfer_enabled\n app.save()\n return redirect(reverse('user:account-transfer-apps'))\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['apps'] = App.objects.filter(owner=self.request.user)\n context['acc_page'] = 'account-transfer-apps'\n return context\n\n\nclass ChangeLanguageView(LoginRequiredMixin, TemplateView):\n template_name = 'user/set-language.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'account-change-language'\n return context\n\n\nclass DeleteAccountView(LoginRequiredMixin, TemplateView):\n template_name = 'user/delete-account.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['form'] = DeleteAccountForm()\n context['acc_page'] = 'delete-account'\n return context\n\n def post(self, request, *args, **kwargs):\n form = DeleteAccountForm(request.POST, user=request.user)\n if form.is_valid():\n request.user.delete()\n return redirect(reverse_lazy('home'))\n else:\n return render(request, self.template_name, {'form': form})\n\n\nclass AccountView(LoginRequiredMixin, UpdateView):\n \"\"\"Display and allow changing of the user's name.\"\"\"\n\n template_name = 'user/account.html'\n template_name_suffix = ''\n form_class = AccountForm\n success_url = reverse_lazy('user:account')\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'account'\n return context\n\n def form_valid(self, form):\n email = EmailAddress.objects.get_primary(user=self.request.user)\n email.email = form.cleaned_data['email']\n email.save()\n messages.success(self.request, 'Account details saved.')\n return super().form_valid(form)\n\n def get_object(self, queryset=None):\n return self.request.user\n\n\nclass PasswordView(LoginRequiredMixin, 
PasswordChangeView):\n \"\"\"Allow the user to change their password.\"\"\"\n\n template_name = 'user/password.html'\n success_url = reverse_lazy('user:account-password')\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'password'\n return context\n\n\nclass APITokenView(LoginRequiredMixin, TemplateView):\n \"\"\"Display the user's API token, and allow it to be regenerated.\"\"\"\n\n template_name = 'user/api-token.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'api-token'\n return context\n", "path": "nextcloudappstore/user/views.py"}], "after_files": [{"content": "from allauth.account.models import EmailAddress\nfrom allauth.account.views import PasswordChangeView\nfrom django.contrib import messages\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.urls import reverse_lazy\nfrom django.shortcuts import redirect, render, get_object_or_404\nfrom django.urls import reverse\nfrom django.views.generic import TemplateView\nfrom django.views.generic import UpdateView\n\nfrom nextcloudappstore.core.models import App\nfrom nextcloudappstore.user.forms import DeleteAccountForm, AccountForm\n\n\nclass TransferAppsView(LoginRequiredMixin, TemplateView):\n template_name = 'user/transfer-apps.html'\n\n def post(self, request, pk):\n app = get_object_or_404(App, pk=pk, owner=self.request.user)\n app.ownership_transfer_enabled = not app.ownership_transfer_enabled\n app.save()\n return redirect(reverse('user:account-transfer-apps'))\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['apps'] = App.objects.filter(owner=self.request.user)\n context['acc_page'] = 'account-transfer-apps'\n return context\n\n\nclass ChangeLanguageView(LoginRequiredMixin, TemplateView):\n template_name = 'user/set-language.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'account-change-language'\n return context\n\n\nclass DeleteAccountView(LoginRequiredMixin, TemplateView):\n template_name = 'user/delete-account.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['form'] = DeleteAccountForm()\n context['acc_page'] = 'delete-account'\n return context\n\n def post(self, request, *args, **kwargs):\n form = DeleteAccountForm(request.POST, user=request.user)\n if form.is_valid():\n request.user.delete()\n return redirect(reverse_lazy('home'))\n else:\n return render(request, self.template_name, {'form': form})\n\n\nclass AccountView(LoginRequiredMixin, UpdateView):\n \"\"\"Display and allow changing of the user's name.\"\"\"\n\n template_name = 'user/account.html'\n template_name_suffix = ''\n form_class = AccountForm\n success_url = reverse_lazy('user:account')\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'account'\n return context\n\n def form_valid(self, form):\n email = EmailAddress.objects.get_primary(user=self.request.user)\n email.change(None, form.cleaned_data['email'])\n messages.success(self.request, 'Account details saved.')\n return super().form_valid(form)\n\n def get_object(self, queryset=None):\n return self.request.user\n\n\nclass PasswordView(LoginRequiredMixin, PasswordChangeView):\n \"\"\"Allow the user to change their password.\"\"\"\n\n template_name = 'user/password.html'\n success_url = 
reverse_lazy('user:account-password')\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'password'\n return context\n\n\nclass APITokenView(LoginRequiredMixin, TemplateView):\n \"\"\"Display the user's API token, and allow it to be regenerated.\"\"\"\n\n template_name = 'user/api-token.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['acc_page'] = 'api-token'\n return context\n", "path": "nextcloudappstore/user/views.py"}]} | 1,277 | 125 |
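Illustrative sketch, not part of the dataset record: the golden diff above swaps the manual `email.email = ...; email.save()` for allauth's `EmailAddress.change()`, which is what triggers verification of the new address. The helper below shows that call in isolation; it assumes a configured Django project with django-allauth installed, and the function name is invented for the example (the diff itself passes `None` instead of `request`).

```python
# Hedged sketch of the verified email change, assuming django-allauth is set up.
from allauth.account.models import EmailAddress
from django.contrib import messages


def update_primary_email(request, new_email):
    """Change the user's primary address and let allauth send a confirmation mail."""
    email = EmailAddress.objects.get_primary(user=request.user)
    # change(request, new_email, confirm=True) updates the EmailAddress record and,
    # with confirm=True, sends a verification mail -- unlike assigning email.email
    # and calling save(), which skipped verification entirely.
    email.change(request, new_email, confirm=True)
    messages.success(request, "Account details saved.")
```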
gh_patches_debug_31147 | rasdani/github-patches | git_diff | onnx__onnx-5757 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
check_function requires contexts as arguments which breaks backward compatibility
https://github.com/onnx/onnx/pull/5693 added required parameters to the `check_function` function in checker which breaks backward compatibility. Should we provide default contexts to `check_function` as well?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `onnx/checker.py`
Content:
```
1 # Copyright (c) ONNX Project Contributors
2 #
3 # SPDX-License-Identifier: Apache-2.0
4 """Graph utilities for checking whether an ONNX proto message is legal."""
5
6 from __future__ import annotations
7
8 __all__ = [
9 "check_attribute",
10 "check_function",
11 "check_graph",
12 "check_model",
13 "check_node",
14 "check_sparse_tensor",
15 "check_tensor",
16 "check_value_info",
17 "DEFAULT_CONTEXT",
18 "LEXICAL_SCOPE_CONTEXT",
19 "ValidationError",
20 "C",
21 "MAXIMUM_PROTOBUF",
22 ]
23
24 import os
25 import sys
26 from typing import Any, Callable, TypeVar
27
28 from google.protobuf.message import Message
29
30 import onnx.defs
31 import onnx.onnx_cpp2py_export.checker as C # noqa: N812
32 import onnx.shape_inference
33 from onnx import (
34 IR_VERSION,
35 AttributeProto,
36 FunctionProto,
37 GraphProto,
38 ModelProto,
39 NodeProto,
40 SparseTensorProto,
41 TensorProto,
42 ValueInfoProto,
43 )
44
45 # Limitation of single protobuf file is 2GB
46 MAXIMUM_PROTOBUF = 2000000000
47
48 # TODO: This thing where we reserialize the protobuf back into the
49 # string, only to deserialize it at the call site, is really goofy.
50 # Stop doing that.
51
52
53 # NB: Please don't edit this context!
54 DEFAULT_CONTEXT = C.CheckerContext()
55 DEFAULT_CONTEXT.ir_version = IR_VERSION
56 # TODO: Maybe ONNX-ML should also be defaulted?
57 DEFAULT_CONTEXT.opset_imports = {"": onnx.defs.onnx_opset_version()}
58
59 LEXICAL_SCOPE_CONTEXT = C.LexicalScopeContext()
60
61
62 FuncType = TypeVar("FuncType", bound=Callable[..., Any])
63
64
65 def _ensure_proto_type(proto: Message, proto_type: type[Message]) -> None:
66 if not isinstance(proto, proto_type):
67 raise TypeError(
68 f"The proto message needs to be of type '{proto_type.__name__}'"
69 )
70
71
72 def check_value_info(
73 value_info: ValueInfoProto, ctx: C.CheckerContext = DEFAULT_CONTEXT
74 ) -> None:
75 _ensure_proto_type(value_info, ValueInfoProto)
76 return C.check_value_info(value_info.SerializeToString(), ctx)
77
78
79 def check_tensor(tensor: TensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:
80 _ensure_proto_type(tensor, TensorProto)
81 return C.check_tensor(tensor.SerializeToString(), ctx)
82
83
84 def check_attribute(
85 attr: AttributeProto,
86 ctx: C.CheckerContext = DEFAULT_CONTEXT,
87 lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
88 ) -> None:
89 _ensure_proto_type(attr, AttributeProto)
90 return C.check_attribute(attr.SerializeToString(), ctx, lex_ctx)
91
92
93 def check_node(
94 node: NodeProto,
95 ctx: C.CheckerContext = DEFAULT_CONTEXT,
96 lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
97 ) -> None:
98 _ensure_proto_type(node, NodeProto)
99 return C.check_node(node.SerializeToString(), ctx, lex_ctx)
100
101
102 def check_function(
103 function: FunctionProto,
104 ctx: C.CheckerContext,
105 lex_ctx: C.LexicalScopeContext,
106 ) -> None:
107 _ensure_proto_type(function, FunctionProto)
108 C.check_function(function.SerializeToString(), ctx, lex_ctx)
109
110
111 def check_graph(
112 graph: GraphProto,
113 ctx: C.CheckerContext = DEFAULT_CONTEXT,
114 lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
115 ) -> None:
116 _ensure_proto_type(graph, GraphProto)
117 return C.check_graph(graph.SerializeToString(), ctx, lex_ctx)
118
119
120 def check_sparse_tensor(
121 sparse: SparseTensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT
122 ) -> None:
123 _ensure_proto_type(sparse, SparseTensorProto)
124 C.check_sparse_tensor(sparse.SerializeToString(), ctx)
125
126
127 def check_model(
128 model: ModelProto | str | bytes | os.PathLike,
129 full_check: bool = False,
130 skip_opset_compatibility_check: bool = False,
131 ) -> None:
132 """Check the consistency of a model.
133
134 An exception will be raised if the model's ir_version is not set
135 properly or is higher than checker's ir_version, or if the model
136 has duplicate keys in metadata_props.
137
138 If IR version >= 3, the model must specify opset_import.
139 If IR version < 3, the model cannot have any opset_import specified.
140
141 Args:
142 model: Model to check. If model is a path, the function checks model
143 path first. If the model bytes size is larger than 2GB, function
144 should be called using model path.
145 full_check: If True, the function also runs shape inference check.
146 skip_opset_compatibility_check: If True, the function skips the check for
147 opset compatibility.
148 """
149 # If model is a path instead of ModelProto
150 if isinstance(model, (str, os.PathLike)):
151 C.check_model_path(os.fspath(model), full_check, skip_opset_compatibility_check)
152 else:
153 protobuf_string = (
154 model if isinstance(model, bytes) else model.SerializeToString()
155 )
156 # If the protobuf is larger than 2GB,
157 # remind users should use the model path to check
158 if sys.getsizeof(protobuf_string) > MAXIMUM_PROTOBUF:
159 raise ValueError(
160 "This protobuf of onnx model is too large (>2GB). Call check_model with model path instead."
161 )
162 C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)
163
164
165 ValidationError = C.ValidationError
166
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/onnx/checker.py b/onnx/checker.py
--- a/onnx/checker.py
+++ b/onnx/checker.py
@@ -84,37 +84,37 @@
def check_attribute(
attr: AttributeProto,
ctx: C.CheckerContext = DEFAULT_CONTEXT,
- lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
) -> None:
_ensure_proto_type(attr, AttributeProto)
- return C.check_attribute(attr.SerializeToString(), ctx, lex_ctx)
+ return C.check_attribute(attr.SerializeToString(), ctx, lexical_scope_ctx)
def check_node(
node: NodeProto,
ctx: C.CheckerContext = DEFAULT_CONTEXT,
- lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
) -> None:
_ensure_proto_type(node, NodeProto)
- return C.check_node(node.SerializeToString(), ctx, lex_ctx)
+ return C.check_node(node.SerializeToString(), ctx, lexical_scope_ctx)
def check_function(
function: FunctionProto,
- ctx: C.CheckerContext,
- lex_ctx: C.LexicalScopeContext,
+ ctx: C.CheckerContext = DEFAULT_CONTEXT,
+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
) -> None:
_ensure_proto_type(function, FunctionProto)
- C.check_function(function.SerializeToString(), ctx, lex_ctx)
+ C.check_function(function.SerializeToString(), ctx, lexical_scope_ctx)
def check_graph(
graph: GraphProto,
ctx: C.CheckerContext = DEFAULT_CONTEXT,
- lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,
) -> None:
_ensure_proto_type(graph, GraphProto)
- return C.check_graph(graph.SerializeToString(), ctx, lex_ctx)
+ return C.check_graph(graph.SerializeToString(), ctx, lexical_scope_ctx)
def check_sparse_tensor(
| {"golden_diff": "diff --git a/onnx/checker.py b/onnx/checker.py\n--- a/onnx/checker.py\n+++ b/onnx/checker.py\n@@ -84,37 +84,37 @@\n def check_attribute(\n attr: AttributeProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n- lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n ) -> None:\n _ensure_proto_type(attr, AttributeProto)\n- return C.check_attribute(attr.SerializeToString(), ctx, lex_ctx)\n+ return C.check_attribute(attr.SerializeToString(), ctx, lexical_scope_ctx)\n \n \n def check_node(\n node: NodeProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n- lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n ) -> None:\n _ensure_proto_type(node, NodeProto)\n- return C.check_node(node.SerializeToString(), ctx, lex_ctx)\n+ return C.check_node(node.SerializeToString(), ctx, lexical_scope_ctx)\n \n \n def check_function(\n function: FunctionProto,\n- ctx: C.CheckerContext,\n- lex_ctx: C.LexicalScopeContext,\n+ ctx: C.CheckerContext = DEFAULT_CONTEXT,\n+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n ) -> None:\n _ensure_proto_type(function, FunctionProto)\n- C.check_function(function.SerializeToString(), ctx, lex_ctx)\n+ C.check_function(function.SerializeToString(), ctx, lexical_scope_ctx)\n \n \n def check_graph(\n graph: GraphProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n- lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n+ lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n ) -> None:\n _ensure_proto_type(graph, GraphProto)\n- return C.check_graph(graph.SerializeToString(), ctx, lex_ctx)\n+ return C.check_graph(graph.SerializeToString(), ctx, lexical_scope_ctx)\n \n \n def check_sparse_tensor(\n", "issue": "check_function requires contexts as arguments which breaks backward compatibility\nhttps://github.com/onnx/onnx/pull/5693 added required parameters to the `check_function` function in checker which breaks backward compatibility. 
Should we provide default contexts to `check_function` as well?\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) ONNX Project Contributors\n#\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"Graph utilities for checking whether an ONNX proto message is legal.\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"check_attribute\",\n \"check_function\",\n \"check_graph\",\n \"check_model\",\n \"check_node\",\n \"check_sparse_tensor\",\n \"check_tensor\",\n \"check_value_info\",\n \"DEFAULT_CONTEXT\",\n \"LEXICAL_SCOPE_CONTEXT\",\n \"ValidationError\",\n \"C\",\n \"MAXIMUM_PROTOBUF\",\n]\n\nimport os\nimport sys\nfrom typing import Any, Callable, TypeVar\n\nfrom google.protobuf.message import Message\n\nimport onnx.defs\nimport onnx.onnx_cpp2py_export.checker as C # noqa: N812\nimport onnx.shape_inference\nfrom onnx import (\n IR_VERSION,\n AttributeProto,\n FunctionProto,\n GraphProto,\n ModelProto,\n NodeProto,\n SparseTensorProto,\n TensorProto,\n ValueInfoProto,\n)\n\n# Limitation of single protobuf file is 2GB\nMAXIMUM_PROTOBUF = 2000000000\n\n# TODO: This thing where we reserialize the protobuf back into the\n# string, only to deserialize it at the call site, is really goofy.\n# Stop doing that.\n\n\n# NB: Please don't edit this context!\nDEFAULT_CONTEXT = C.CheckerContext()\nDEFAULT_CONTEXT.ir_version = IR_VERSION\n# TODO: Maybe ONNX-ML should also be defaulted?\nDEFAULT_CONTEXT.opset_imports = {\"\": onnx.defs.onnx_opset_version()}\n\nLEXICAL_SCOPE_CONTEXT = C.LexicalScopeContext()\n\n\nFuncType = TypeVar(\"FuncType\", bound=Callable[..., Any])\n\n\ndef _ensure_proto_type(proto: Message, proto_type: type[Message]) -> None:\n if not isinstance(proto, proto_type):\n raise TypeError(\n f\"The proto message needs to be of type '{proto_type.__name__}'\"\n )\n\n\ndef check_value_info(\n value_info: ValueInfoProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(value_info, ValueInfoProto)\n return C.check_value_info(value_info.SerializeToString(), ctx)\n\n\ndef check_tensor(tensor: TensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:\n _ensure_proto_type(tensor, TensorProto)\n return C.check_tensor(tensor.SerializeToString(), ctx)\n\n\ndef check_attribute(\n attr: AttributeProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(attr, AttributeProto)\n return C.check_attribute(attr.SerializeToString(), ctx, lex_ctx)\n\n\ndef check_node(\n node: NodeProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(node, NodeProto)\n return C.check_node(node.SerializeToString(), ctx, lex_ctx)\n\n\ndef check_function(\n function: FunctionProto,\n ctx: C.CheckerContext,\n lex_ctx: C.LexicalScopeContext,\n) -> None:\n _ensure_proto_type(function, FunctionProto)\n C.check_function(function.SerializeToString(), ctx, lex_ctx)\n\n\ndef check_graph(\n graph: GraphProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lex_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(graph, GraphProto)\n return C.check_graph(graph.SerializeToString(), ctx, lex_ctx)\n\n\ndef check_sparse_tensor(\n sparse: SparseTensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(sparse, SparseTensorProto)\n C.check_sparse_tensor(sparse.SerializeToString(), ctx)\n\n\ndef check_model(\n model: ModelProto | str | bytes | os.PathLike,\n full_check: bool = False,\n 
skip_opset_compatibility_check: bool = False,\n) -> None:\n \"\"\"Check the consistency of a model.\n\n An exception will be raised if the model's ir_version is not set\n properly or is higher than checker's ir_version, or if the model\n has duplicate keys in metadata_props.\n\n If IR version >= 3, the model must specify opset_import.\n If IR version < 3, the model cannot have any opset_import specified.\n\n Args:\n model: Model to check. If model is a path, the function checks model\n path first. If the model bytes size is larger than 2GB, function\n should be called using model path.\n full_check: If True, the function also runs shape inference check.\n skip_opset_compatibility_check: If True, the function skips the check for\n opset compatibility.\n \"\"\"\n # If model is a path instead of ModelProto\n if isinstance(model, (str, os.PathLike)):\n C.check_model_path(os.fspath(model), full_check, skip_opset_compatibility_check)\n else:\n protobuf_string = (\n model if isinstance(model, bytes) else model.SerializeToString()\n )\n # If the protobuf is larger than 2GB,\n # remind users should use the model path to check\n if sys.getsizeof(protobuf_string) > MAXIMUM_PROTOBUF:\n raise ValueError(\n \"This protobuf of onnx model is too large (>2GB). Call check_model with model path instead.\"\n )\n C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)\n\n\nValidationError = C.ValidationError\n", "path": "onnx/checker.py"}], "after_files": [{"content": "# Copyright (c) ONNX Project Contributors\n#\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"Graph utilities for checking whether an ONNX proto message is legal.\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"check_attribute\",\n \"check_function\",\n \"check_graph\",\n \"check_model\",\n \"check_node\",\n \"check_sparse_tensor\",\n \"check_tensor\",\n \"check_value_info\",\n \"DEFAULT_CONTEXT\",\n \"LEXICAL_SCOPE_CONTEXT\",\n \"ValidationError\",\n \"C\",\n \"MAXIMUM_PROTOBUF\",\n]\n\nimport os\nimport sys\nfrom typing import Any, Callable, TypeVar\n\nfrom google.protobuf.message import Message\n\nimport onnx.defs\nimport onnx.onnx_cpp2py_export.checker as C # noqa: N812\nimport onnx.shape_inference\nfrom onnx import (\n IR_VERSION,\n AttributeProto,\n FunctionProto,\n GraphProto,\n ModelProto,\n NodeProto,\n SparseTensorProto,\n TensorProto,\n ValueInfoProto,\n)\n\n# Limitation of single protobuf file is 2GB\nMAXIMUM_PROTOBUF = 2000000000\n\n# TODO: This thing where we reserialize the protobuf back into the\n# string, only to deserialize it at the call site, is really goofy.\n# Stop doing that.\n\n\n# NB: Please don't edit this context!\nDEFAULT_CONTEXT = C.CheckerContext()\nDEFAULT_CONTEXT.ir_version = IR_VERSION\n# TODO: Maybe ONNX-ML should also be defaulted?\nDEFAULT_CONTEXT.opset_imports = {\"\": onnx.defs.onnx_opset_version()}\n\nLEXICAL_SCOPE_CONTEXT = C.LexicalScopeContext()\n\n\nFuncType = TypeVar(\"FuncType\", bound=Callable[..., Any])\n\n\ndef _ensure_proto_type(proto: Message, proto_type: type[Message]) -> None:\n if not isinstance(proto, proto_type):\n raise TypeError(\n f\"The proto message needs to be of type '{proto_type.__name__}'\"\n )\n\n\ndef check_value_info(\n value_info: ValueInfoProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(value_info, ValueInfoProto)\n return C.check_value_info(value_info.SerializeToString(), ctx)\n\n\ndef check_tensor(tensor: TensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT) -> None:\n _ensure_proto_type(tensor, TensorProto)\n 
return C.check_tensor(tensor.SerializeToString(), ctx)\n\n\ndef check_attribute(\n attr: AttributeProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(attr, AttributeProto)\n return C.check_attribute(attr.SerializeToString(), ctx, lexical_scope_ctx)\n\n\ndef check_node(\n node: NodeProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(node, NodeProto)\n return C.check_node(node.SerializeToString(), ctx, lexical_scope_ctx)\n\n\ndef check_function(\n function: FunctionProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(function, FunctionProto)\n C.check_function(function.SerializeToString(), ctx, lexical_scope_ctx)\n\n\ndef check_graph(\n graph: GraphProto,\n ctx: C.CheckerContext = DEFAULT_CONTEXT,\n lexical_scope_ctx: C.LexicalScopeContext = LEXICAL_SCOPE_CONTEXT,\n) -> None:\n _ensure_proto_type(graph, GraphProto)\n return C.check_graph(graph.SerializeToString(), ctx, lexical_scope_ctx)\n\n\ndef check_sparse_tensor(\n sparse: SparseTensorProto, ctx: C.CheckerContext = DEFAULT_CONTEXT\n) -> None:\n _ensure_proto_type(sparse, SparseTensorProto)\n C.check_sparse_tensor(sparse.SerializeToString(), ctx)\n\n\ndef check_model(\n model: ModelProto | str | bytes | os.PathLike,\n full_check: bool = False,\n skip_opset_compatibility_check: bool = False,\n) -> None:\n \"\"\"Check the consistency of a model.\n\n An exception will be raised if the model's ir_version is not set\n properly or is higher than checker's ir_version, or if the model\n has duplicate keys in metadata_props.\n\n If IR version >= 3, the model must specify opset_import.\n If IR version < 3, the model cannot have any opset_import specified.\n\n Args:\n model: Model to check. If model is a path, the function checks model\n path first. If the model bytes size is larger than 2GB, function\n should be called using model path.\n full_check: If True, the function also runs shape inference check.\n skip_opset_compatibility_check: If True, the function skips the check for\n opset compatibility.\n \"\"\"\n # If model is a path instead of ModelProto\n if isinstance(model, (str, os.PathLike)):\n C.check_model_path(os.fspath(model), full_check, skip_opset_compatibility_check)\n else:\n protobuf_string = (\n model if isinstance(model, bytes) else model.SerializeToString()\n )\n # If the protobuf is larger than 2GB,\n # remind users should use the model path to check\n if sys.getsizeof(protobuf_string) > MAXIMUM_PROTOBUF:\n raise ValueError(\n \"This protobuf of onnx model is too large (>2GB). Call check_model with model path instead.\"\n )\n C.check_model(protobuf_string, full_check, skip_opset_compatibility_check)\n\n\nValidationError = C.ValidationError\n", "path": "onnx/checker.py"}]} | 1,933 | 469 |
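Illustrative usage sketch, not part of the dataset record: the point of the golden diff above is that `check_function` regains default `ctx`/lexical-scope arguments, so the pre-#5693 one-argument call works again while explicit contexts remain possible. The snippet assumes the patched checker, that `make_function`'s positional argument order is (domain, fname, inputs, outputs, nodes, opset_imports), and that the function only uses standard-domain ops so the default context's opset applies; the function contents are made up.

```python
# Hedged sketch of calling check_function with and without explicit contexts.
import onnx
from onnx import checker, helper

identity_node = helper.make_node("Identity", inputs=["x"], outputs=["y"])
fn = helper.make_function(
    "custom.domain",            # function domain (illustrative)
    "IdentityFn",               # function name (illustrative)
    ["x"],
    ["y"],
    [identity_node],
    [helper.make_opsetid("", onnx.defs.onnx_opset_version())],
)

# Backward-compatible call: ctx and the lexical scope now default.
checker.check_function(fn)

# Explicit contexts are still accepted for callers that need custom settings.
ctx = checker.C.CheckerContext()
ctx.ir_version = onnx.IR_VERSION
ctx.opset_imports = {"": onnx.defs.onnx_opset_version()}
checker.check_function(fn, ctx, checker.C.LexicalScopeContext())
```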
gh_patches_debug_33978 | rasdani/github-patches | git_diff | matrix-org__synapse-3136 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add six as dependency
Just a quick tracking issue to remember to add six as dependency. It is currently used, but it's just an indirect dependency of many other packages. For clarity, it would be good to add it to the dependencies. I'm not sure how to do it myself, the file is non-standard.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `synapse/python_dependencies.py`
Content:
```
1 # Copyright 2015, 2016 OpenMarket Ltd
2 # Copyright 2017 Vector Creations Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import logging
17 from distutils.version import LooseVersion
18
19 logger = logging.getLogger(__name__)
20
21 REQUIREMENTS = {
22 "jsonschema>=2.5.1": ["jsonschema>=2.5.1"],
23 "frozendict>=0.4": ["frozendict"],
24 "unpaddedbase64>=1.1.0": ["unpaddedbase64>=1.1.0"],
25 "canonicaljson>=1.1.3": ["canonicaljson>=1.1.3"],
26 "signedjson>=1.0.0": ["signedjson>=1.0.0"],
27 "pynacl>=1.2.1": ["nacl>=1.2.1", "nacl.bindings"],
28 "service_identity>=1.0.0": ["service_identity>=1.0.0"],
29 "Twisted>=16.0.0": ["twisted>=16.0.0"],
30 "pyopenssl>=0.14": ["OpenSSL>=0.14"],
31 "pyyaml": ["yaml"],
32 "pyasn1": ["pyasn1"],
33 "daemonize": ["daemonize"],
34 "bcrypt": ["bcrypt>=3.1.0"],
35 "pillow": ["PIL"],
36 "pydenticon": ["pydenticon"],
37 "blist": ["blist"],
38 "pysaml2>=3.0.0": ["saml2>=3.0.0"],
39 "pymacaroons-pynacl": ["pymacaroons"],
40 "msgpack-python>=0.3.0": ["msgpack"],
41 "phonenumbers>=8.2.0": ["phonenumbers"],
42 }
43 CONDITIONAL_REQUIREMENTS = {
44 "web_client": {
45 "matrix_angular_sdk>=0.6.8": ["syweb>=0.6.8"],
46 },
47 "preview_url": {
48 "netaddr>=0.7.18": ["netaddr"],
49 },
50 "email.enable_notifs": {
51 "Jinja2>=2.8": ["Jinja2>=2.8"],
52 "bleach>=1.4.2": ["bleach>=1.4.2"],
53 },
54 "matrix-synapse-ldap3": {
55 "matrix-synapse-ldap3>=0.1": ["ldap_auth_provider"],
56 },
57 "psutil": {
58 "psutil>=2.0.0": ["psutil>=2.0.0"],
59 },
60 "affinity": {
61 "affinity": ["affinity"],
62 },
63 }
64
65
66 def requirements(config=None, include_conditional=False):
67 reqs = REQUIREMENTS.copy()
68 if include_conditional:
69 for _, req in CONDITIONAL_REQUIREMENTS.items():
70 reqs.update(req)
71 return reqs
72
73
74 def github_link(project, version, egg):
75 return "https://github.com/%s/tarball/%s/#egg=%s" % (project, version, egg)
76
77
78 DEPENDENCY_LINKS = {
79 }
80
81
82 class MissingRequirementError(Exception):
83 def __init__(self, message, module_name, dependency):
84 super(MissingRequirementError, self).__init__(message)
85 self.module_name = module_name
86 self.dependency = dependency
87
88
89 def check_requirements(config=None):
90 """Checks that all the modules needed by synapse have been correctly
91 installed and are at the correct version"""
92 for dependency, module_requirements in (
93 requirements(config, include_conditional=False).items()):
94 for module_requirement in module_requirements:
95 if ">=" in module_requirement:
96 module_name, required_version = module_requirement.split(">=")
97 version_test = ">="
98 elif "==" in module_requirement:
99 module_name, required_version = module_requirement.split("==")
100 version_test = "=="
101 else:
102 module_name = module_requirement
103 version_test = None
104
105 try:
106 module = __import__(module_name)
107 except ImportError:
108 logging.exception(
109 "Can't import %r which is part of %r",
110 module_name, dependency
111 )
112 raise MissingRequirementError(
113 "Can't import %r which is part of %r"
114 % (module_name, dependency), module_name, dependency
115 )
116 version = getattr(module, "__version__", None)
117 file_path = getattr(module, "__file__", None)
118 logger.info(
119 "Using %r version %r from %r to satisfy %r",
120 module_name, version, file_path, dependency
121 )
122
123 if version_test == ">=":
124 if version is None:
125 raise MissingRequirementError(
126 "Version of %r isn't set as __version__ of module %r"
127 % (dependency, module_name), module_name, dependency
128 )
129 if LooseVersion(version) < LooseVersion(required_version):
130 raise MissingRequirementError(
131 "Version of %r in %r is too old. %r < %r"
132 % (dependency, file_path, version, required_version),
133 module_name, dependency
134 )
135 elif version_test == "==":
136 if version is None:
137 raise MissingRequirementError(
138 "Version of %r isn't set as __version__ of module %r"
139 % (dependency, module_name), module_name, dependency
140 )
141 if LooseVersion(version) != LooseVersion(required_version):
142 raise MissingRequirementError(
143 "Unexpected version of %r in %r. %r != %r"
144 % (dependency, file_path, version, required_version),
145 module_name, dependency
146 )
147
148
149 def list_requirements():
150 result = []
151 linked = []
152 for link in DEPENDENCY_LINKS.values():
153 egg = link.split("#egg=")[1]
154 linked.append(egg.split('-')[0])
155 result.append(link)
156 for requirement in requirements(include_conditional=True):
157 is_linked = False
158 for link in linked:
159 if requirement.replace('-', '_').startswith(link):
160 is_linked = True
161 if not is_linked:
162 result.append(requirement)
163 return result
164
165
166 if __name__ == "__main__":
167 import sys
168 sys.stdout.writelines(req + "\n" for req in list_requirements())
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/synapse/python_dependencies.py b/synapse/python_dependencies.py
--- a/synapse/python_dependencies.py
+++ b/synapse/python_dependencies.py
@@ -1,5 +1,6 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
+# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -18,6 +19,18 @@
logger = logging.getLogger(__name__)
+# this dict maps from python package name to a list of modules we expect it to
+# provide.
+#
+# the key is a "requirement specifier", as used as a parameter to `pip
+# install`[1], or an `install_requires` argument to `setuptools.setup` [2].
+#
+# the value is a sequence of strings; each entry should be the name of the
+# python module, optionally followed by a version assertion which can be either
+# ">=<ver>" or "==<ver>".
+#
+# [1] https://pip.pypa.io/en/stable/reference/pip_install/#requirement-specifiers.
+# [2] https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-dependencies
REQUIREMENTS = {
"jsonschema>=2.5.1": ["jsonschema>=2.5.1"],
"frozendict>=0.4": ["frozendict"],
@@ -26,7 +39,11 @@
"signedjson>=1.0.0": ["signedjson>=1.0.0"],
"pynacl>=1.2.1": ["nacl>=1.2.1", "nacl.bindings"],
"service_identity>=1.0.0": ["service_identity>=1.0.0"],
- "Twisted>=16.0.0": ["twisted>=16.0.0"],
+
+ # we break under Twisted 18.4
+ # (https://github.com/matrix-org/synapse/issues/3135)
+ "Twisted>=16.0.0,<18.4": ["twisted>=16.0.0"],
+
"pyopenssl>=0.14": ["OpenSSL>=0.14"],
"pyyaml": ["yaml"],
"pyasn1": ["pyasn1"],
@@ -39,6 +56,7 @@
"pymacaroons-pynacl": ["pymacaroons"],
"msgpack-python>=0.3.0": ["msgpack"],
"phonenumbers>=8.2.0": ["phonenumbers"],
+ "six": ["six"],
}
CONDITIONAL_REQUIREMENTS = {
"web_client": {
| {"golden_diff": "diff --git a/synapse/python_dependencies.py b/synapse/python_dependencies.py\n--- a/synapse/python_dependencies.py\n+++ b/synapse/python_dependencies.py\n@@ -1,5 +1,6 @@\n # Copyright 2015, 2016 OpenMarket Ltd\n # Copyright 2017 Vector Creations Ltd\n+# Copyright 2018 New Vector Ltd\n #\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n@@ -18,6 +19,18 @@\n \n logger = logging.getLogger(__name__)\n \n+# this dict maps from python package name to a list of modules we expect it to\n+# provide.\n+#\n+# the key is a \"requirement specifier\", as used as a parameter to `pip\n+# install`[1], or an `install_requires` argument to `setuptools.setup` [2].\n+#\n+# the value is a sequence of strings; each entry should be the name of the\n+# python module, optionally followed by a version assertion which can be either\n+# \">=<ver>\" or \"==<ver>\".\n+#\n+# [1] https://pip.pypa.io/en/stable/reference/pip_install/#requirement-specifiers.\n+# [2] https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-dependencies\n REQUIREMENTS = {\n \"jsonschema>=2.5.1\": [\"jsonschema>=2.5.1\"],\n \"frozendict>=0.4\": [\"frozendict\"],\n@@ -26,7 +39,11 @@\n \"signedjson>=1.0.0\": [\"signedjson>=1.0.0\"],\n \"pynacl>=1.2.1\": [\"nacl>=1.2.1\", \"nacl.bindings\"],\n \"service_identity>=1.0.0\": [\"service_identity>=1.0.0\"],\n- \"Twisted>=16.0.0\": [\"twisted>=16.0.0\"],\n+\n+ # we break under Twisted 18.4\n+ # (https://github.com/matrix-org/synapse/issues/3135)\n+ \"Twisted>=16.0.0,<18.4\": [\"twisted>=16.0.0\"],\n+\n \"pyopenssl>=0.14\": [\"OpenSSL>=0.14\"],\n \"pyyaml\": [\"yaml\"],\n \"pyasn1\": [\"pyasn1\"],\n@@ -39,6 +56,7 @@\n \"pymacaroons-pynacl\": [\"pymacaroons\"],\n \"msgpack-python>=0.3.0\": [\"msgpack\"],\n \"phonenumbers>=8.2.0\": [\"phonenumbers\"],\n+ \"six\": [\"six\"],\n }\n CONDITIONAL_REQUIREMENTS = {\n \"web_client\": {\n", "issue": "Add six as dependency\nJust a quick tracking issue to remember to add six as dependency. It is currently used, but it's just an indirect dependency of many other packages. For clarity, it would be good to add it to the dependencies. 
I'm not sure how to do it myself, the file is non-standard.\n", "before_files": [{"content": "# Copyright 2015, 2016 OpenMarket Ltd\n# Copyright 2017 Vector Creations Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom distutils.version import LooseVersion\n\nlogger = logging.getLogger(__name__)\n\nREQUIREMENTS = {\n \"jsonschema>=2.5.1\": [\"jsonschema>=2.5.1\"],\n \"frozendict>=0.4\": [\"frozendict\"],\n \"unpaddedbase64>=1.1.0\": [\"unpaddedbase64>=1.1.0\"],\n \"canonicaljson>=1.1.3\": [\"canonicaljson>=1.1.3\"],\n \"signedjson>=1.0.0\": [\"signedjson>=1.0.0\"],\n \"pynacl>=1.2.1\": [\"nacl>=1.2.1\", \"nacl.bindings\"],\n \"service_identity>=1.0.0\": [\"service_identity>=1.0.0\"],\n \"Twisted>=16.0.0\": [\"twisted>=16.0.0\"],\n \"pyopenssl>=0.14\": [\"OpenSSL>=0.14\"],\n \"pyyaml\": [\"yaml\"],\n \"pyasn1\": [\"pyasn1\"],\n \"daemonize\": [\"daemonize\"],\n \"bcrypt\": [\"bcrypt>=3.1.0\"],\n \"pillow\": [\"PIL\"],\n \"pydenticon\": [\"pydenticon\"],\n \"blist\": [\"blist\"],\n \"pysaml2>=3.0.0\": [\"saml2>=3.0.0\"],\n \"pymacaroons-pynacl\": [\"pymacaroons\"],\n \"msgpack-python>=0.3.0\": [\"msgpack\"],\n \"phonenumbers>=8.2.0\": [\"phonenumbers\"],\n}\nCONDITIONAL_REQUIREMENTS = {\n \"web_client\": {\n \"matrix_angular_sdk>=0.6.8\": [\"syweb>=0.6.8\"],\n },\n \"preview_url\": {\n \"netaddr>=0.7.18\": [\"netaddr\"],\n },\n \"email.enable_notifs\": {\n \"Jinja2>=2.8\": [\"Jinja2>=2.8\"],\n \"bleach>=1.4.2\": [\"bleach>=1.4.2\"],\n },\n \"matrix-synapse-ldap3\": {\n \"matrix-synapse-ldap3>=0.1\": [\"ldap_auth_provider\"],\n },\n \"psutil\": {\n \"psutil>=2.0.0\": [\"psutil>=2.0.0\"],\n },\n \"affinity\": {\n \"affinity\": [\"affinity\"],\n },\n}\n\n\ndef requirements(config=None, include_conditional=False):\n reqs = REQUIREMENTS.copy()\n if include_conditional:\n for _, req in CONDITIONAL_REQUIREMENTS.items():\n reqs.update(req)\n return reqs\n\n\ndef github_link(project, version, egg):\n return \"https://github.com/%s/tarball/%s/#egg=%s\" % (project, version, egg)\n\n\nDEPENDENCY_LINKS = {\n}\n\n\nclass MissingRequirementError(Exception):\n def __init__(self, message, module_name, dependency):\n super(MissingRequirementError, self).__init__(message)\n self.module_name = module_name\n self.dependency = dependency\n\n\ndef check_requirements(config=None):\n \"\"\"Checks that all the modules needed by synapse have been correctly\n installed and are at the correct version\"\"\"\n for dependency, module_requirements in (\n requirements(config, include_conditional=False).items()):\n for module_requirement in module_requirements:\n if \">=\" in module_requirement:\n module_name, required_version = module_requirement.split(\">=\")\n version_test = \">=\"\n elif \"==\" in module_requirement:\n module_name, required_version = module_requirement.split(\"==\")\n version_test = \"==\"\n else:\n module_name = module_requirement\n version_test = None\n\n try:\n module = __import__(module_name)\n except ImportError:\n logging.exception(\n \"Can't import %r which is part of 
%r\",\n module_name, dependency\n )\n raise MissingRequirementError(\n \"Can't import %r which is part of %r\"\n % (module_name, dependency), module_name, dependency\n )\n version = getattr(module, \"__version__\", None)\n file_path = getattr(module, \"__file__\", None)\n logger.info(\n \"Using %r version %r from %r to satisfy %r\",\n module_name, version, file_path, dependency\n )\n\n if version_test == \">=\":\n if version is None:\n raise MissingRequirementError(\n \"Version of %r isn't set as __version__ of module %r\"\n % (dependency, module_name), module_name, dependency\n )\n if LooseVersion(version) < LooseVersion(required_version):\n raise MissingRequirementError(\n \"Version of %r in %r is too old. %r < %r\"\n % (dependency, file_path, version, required_version),\n module_name, dependency\n )\n elif version_test == \"==\":\n if version is None:\n raise MissingRequirementError(\n \"Version of %r isn't set as __version__ of module %r\"\n % (dependency, module_name), module_name, dependency\n )\n if LooseVersion(version) != LooseVersion(required_version):\n raise MissingRequirementError(\n \"Unexpected version of %r in %r. %r != %r\"\n % (dependency, file_path, version, required_version),\n module_name, dependency\n )\n\n\ndef list_requirements():\n result = []\n linked = []\n for link in DEPENDENCY_LINKS.values():\n egg = link.split(\"#egg=\")[1]\n linked.append(egg.split('-')[0])\n result.append(link)\n for requirement in requirements(include_conditional=True):\n is_linked = False\n for link in linked:\n if requirement.replace('-', '_').startswith(link):\n is_linked = True\n if not is_linked:\n result.append(requirement)\n return result\n\n\nif __name__ == \"__main__\":\n import sys\n sys.stdout.writelines(req + \"\\n\" for req in list_requirements())\n", "path": "synapse/python_dependencies.py"}], "after_files": [{"content": "# Copyright 2015, 2016 OpenMarket Ltd\n# Copyright 2017 Vector Creations Ltd\n# Copyright 2018 New Vector Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom distutils.version import LooseVersion\n\nlogger = logging.getLogger(__name__)\n\n# this dict maps from python package name to a list of modules we expect it to\n# provide.\n#\n# the key is a \"requirement specifier\", as used as a parameter to `pip\n# install`[1], or an `install_requires` argument to `setuptools.setup` [2].\n#\n# the value is a sequence of strings; each entry should be the name of the\n# python module, optionally followed by a version assertion which can be either\n# \">=<ver>\" or \"==<ver>\".\n#\n# [1] https://pip.pypa.io/en/stable/reference/pip_install/#requirement-specifiers.\n# [2] https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-dependencies\nREQUIREMENTS = {\n \"jsonschema>=2.5.1\": [\"jsonschema>=2.5.1\"],\n \"frozendict>=0.4\": [\"frozendict\"],\n \"unpaddedbase64>=1.1.0\": [\"unpaddedbase64>=1.1.0\"],\n \"canonicaljson>=1.1.3\": [\"canonicaljson>=1.1.3\"],\n \"signedjson>=1.0.0\": [\"signedjson>=1.0.0\"],\n \"pynacl>=1.2.1\": 
[\"nacl>=1.2.1\", \"nacl.bindings\"],\n \"service_identity>=1.0.0\": [\"service_identity>=1.0.0\"],\n\n # we break under Twisted 18.4\n # (https://github.com/matrix-org/synapse/issues/3135)\n \"Twisted>=16.0.0,<18.4\": [\"twisted>=16.0.0\"],\n\n \"pyopenssl>=0.14\": [\"OpenSSL>=0.14\"],\n \"pyyaml\": [\"yaml\"],\n \"pyasn1\": [\"pyasn1\"],\n \"daemonize\": [\"daemonize\"],\n \"bcrypt\": [\"bcrypt>=3.1.0\"],\n \"pillow\": [\"PIL\"],\n \"pydenticon\": [\"pydenticon\"],\n \"blist\": [\"blist\"],\n \"pysaml2>=3.0.0\": [\"saml2>=3.0.0\"],\n \"pymacaroons-pynacl\": [\"pymacaroons\"],\n \"msgpack-python>=0.3.0\": [\"msgpack\"],\n \"phonenumbers>=8.2.0\": [\"phonenumbers\"],\n \"six\": [\"six\"],\n}\nCONDITIONAL_REQUIREMENTS = {\n \"web_client\": {\n \"matrix_angular_sdk>=0.6.8\": [\"syweb>=0.6.8\"],\n },\n \"preview_url\": {\n \"netaddr>=0.7.18\": [\"netaddr\"],\n },\n \"email.enable_notifs\": {\n \"Jinja2>=2.8\": [\"Jinja2>=2.8\"],\n \"bleach>=1.4.2\": [\"bleach>=1.4.2\"],\n },\n \"matrix-synapse-ldap3\": {\n \"matrix-synapse-ldap3>=0.1\": [\"ldap_auth_provider\"],\n },\n \"psutil\": {\n \"psutil>=2.0.0\": [\"psutil>=2.0.0\"],\n },\n \"affinity\": {\n \"affinity\": [\"affinity\"],\n },\n}\n\n\ndef requirements(config=None, include_conditional=False):\n reqs = REQUIREMENTS.copy()\n if include_conditional:\n for _, req in CONDITIONAL_REQUIREMENTS.items():\n reqs.update(req)\n return reqs\n\n\ndef github_link(project, version, egg):\n return \"https://github.com/%s/tarball/%s/#egg=%s\" % (project, version, egg)\n\n\nDEPENDENCY_LINKS = {\n}\n\n\nclass MissingRequirementError(Exception):\n def __init__(self, message, module_name, dependency):\n super(MissingRequirementError, self).__init__(message)\n self.module_name = module_name\n self.dependency = dependency\n\n\ndef check_requirements(config=None):\n \"\"\"Checks that all the modules needed by synapse have been correctly\n installed and are at the correct version\"\"\"\n for dependency, module_requirements in (\n requirements(config, include_conditional=False).items()):\n for module_requirement in module_requirements:\n if \">=\" in module_requirement:\n module_name, required_version = module_requirement.split(\">=\")\n version_test = \">=\"\n elif \"==\" in module_requirement:\n module_name, required_version = module_requirement.split(\"==\")\n version_test = \"==\"\n else:\n module_name = module_requirement\n version_test = None\n\n try:\n module = __import__(module_name)\n except ImportError:\n logging.exception(\n \"Can't import %r which is part of %r\",\n module_name, dependency\n )\n raise MissingRequirementError(\n \"Can't import %r which is part of %r\"\n % (module_name, dependency), module_name, dependency\n )\n version = getattr(module, \"__version__\", None)\n file_path = getattr(module, \"__file__\", None)\n logger.info(\n \"Using %r version %r from %r to satisfy %r\",\n module_name, version, file_path, dependency\n )\n\n if version_test == \">=\":\n if version is None:\n raise MissingRequirementError(\n \"Version of %r isn't set as __version__ of module %r\"\n % (dependency, module_name), module_name, dependency\n )\n if LooseVersion(version) < LooseVersion(required_version):\n raise MissingRequirementError(\n \"Version of %r in %r is too old. 
%r < %r\"\n % (dependency, file_path, version, required_version),\n module_name, dependency\n )\n elif version_test == \"==\":\n if version is None:\n raise MissingRequirementError(\n \"Version of %r isn't set as __version__ of module %r\"\n % (dependency, module_name), module_name, dependency\n )\n if LooseVersion(version) != LooseVersion(required_version):\n raise MissingRequirementError(\n \"Unexpected version of %r in %r. %r != %r\"\n % (dependency, file_path, version, required_version),\n module_name, dependency\n )\n\n\ndef list_requirements():\n result = []\n linked = []\n for link in DEPENDENCY_LINKS.values():\n egg = link.split(\"#egg=\")[1]\n linked.append(egg.split('-')[0])\n result.append(link)\n for requirement in requirements(include_conditional=True):\n is_linked = False\n for link in linked:\n if requirement.replace('-', '_').startswith(link):\n is_linked = True\n if not is_linked:\n result.append(requirement)\n return result\n\n\nif __name__ == \"__main__\":\n import sys\n sys.stdout.writelines(req + \"\\n\" for req in list_requirements())\n", "path": "synapse/python_dependencies.py"}]} | 2,205 | 636 |
gh_patches_debug_800 | rasdani/github-patches | git_diff | spyder-ide__spyder-4602 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move to support only Rope 0.10.5+
That's because 0.10.5 is the first version to support Python 2 and 3 in the same package.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright © Spyder Project Contributors
4 # Licensed under the terms of the MIT License
5 # (see spyder/__init__.py for details)
6
7 """
8 Spyder
9 ======
10
11 The Scientific PYthon Development EnviRonment
12 """
13
14 from __future__ import print_function
15
16 import os
17 import os.path as osp
18 import subprocess
19 import sys
20 import shutil
21
22 from distutils.core import setup
23 from distutils.command.build import build
24 from distutils.command.install import install
25 from distutils.command.install_data import install_data
26
27
28 #==============================================================================
29 # Check for Python 3
30 #==============================================================================
31 PY3 = sys.version_info[0] == 3
32
33
34 #==============================================================================
35 # Minimal Python version sanity check
36 # Taken from the notebook setup.py -- Modified BSD License
37 #==============================================================================
38 v = sys.version_info
39 if v[:2] < (2,7) or (v[0] >= 3 and v[:2] < (3,3)):
40 error = "ERROR: Spyder requires Python version 2.7 or 3.3 or above."
41 print(error, file=sys.stderr)
42 sys.exit(1)
43
44
45 #==============================================================================
46 # Constants
47 #==============================================================================
48 NAME = 'spyder'
49 LIBNAME = 'spyder'
50 from spyder import __version__, __project_url__
51
52
53 #==============================================================================
54 # Auxiliary functions
55 #==============================================================================
56 def get_package_data(name, extlist):
57 """Return data files for package *name* with extensions in *extlist*"""
58 flist = []
59 # Workaround to replace os.path.relpath (not available until Python 2.6):
60 offset = len(name)+len(os.pathsep)
61 for dirpath, _dirnames, filenames in os.walk(name):
62 for fname in filenames:
63 if not fname.startswith('.') and osp.splitext(fname)[1] in extlist:
64 flist.append(osp.join(dirpath, fname)[offset:])
65 return flist
66
67
68 def get_subpackages(name):
69 """Return subpackages of package *name*"""
70 splist = []
71 for dirpath, _dirnames, _filenames in os.walk(name):
72 if osp.isfile(osp.join(dirpath, '__init__.py')):
73 splist.append(".".join(dirpath.split(os.sep)))
74 return splist
75
76
77 def get_data_files():
78 """Return data_files in a platform dependent manner"""
79 if sys.platform.startswith('linux'):
80 if PY3:
81 data_files = [('share/applications', ['scripts/spyder3.desktop']),
82 ('share/pixmaps', ['img_src/spyder3.png']),
83 ('share/metainfo', ['scripts/spyder3.appdata.xml'])]
84 else:
85 data_files = [('share/applications', ['scripts/spyder.desktop']),
86 ('share/pixmaps', ['img_src/spyder.png'])]
87 elif os.name == 'nt':
88 data_files = [('scripts', ['img_src/spyder.ico',
89 'img_src/spyder_reset.ico'])]
90 else:
91 data_files = []
92 return data_files
93
94
95 def get_packages():
96 """Return package list"""
97 packages = (
98 get_subpackages(LIBNAME)
99 + get_subpackages('spyder_breakpoints')
100 + get_subpackages('spyder_profiler')
101 + get_subpackages('spyder_pylint')
102 + get_subpackages('spyder_io_dcm')
103 + get_subpackages('spyder_io_hdf5')
104 )
105 return packages
106
107
108 #==============================================================================
109 # Make Linux detect Spyder desktop file
110 #==============================================================================
111 class MyInstallData(install_data):
112 def run(self):
113 install_data.run(self)
114 if sys.platform.startswith('linux'):
115 try:
116 subprocess.call(['update-desktop-database'])
117 except:
118 print("ERROR: unable to update desktop database",
119 file=sys.stderr)
120 CMDCLASS = {'install_data': MyInstallData}
121
122
123 #==============================================================================
124 # Sphinx build (documentation)
125 #==============================================================================
126 def get_html_help_exe():
127 """Return HTML Help Workshop executable path (Windows only)"""
128 if os.name == 'nt':
129 hhc_base = r'C:\Program Files%s\HTML Help Workshop\hhc.exe'
130 for hhc_exe in (hhc_base % '', hhc_base % ' (x86)'):
131 if osp.isfile(hhc_exe):
132 return hhc_exe
133 else:
134 return
135
136 try:
137 from sphinx import setup_command
138
139 class MyBuild(build):
140 user_options = [('no-doc', None, "Don't build Spyder documentation")] \
141 + build.user_options
142 def __init__(self, *args, **kwargs):
143 build.__init__(self, *args, **kwargs)
144 self.no_doc = False
145 def with_doc(self):
146 setup_dir = os.path.dirname(os.path.abspath(__file__))
147 is_doc_dir = os.path.isdir(os.path.join(setup_dir, 'doc'))
148 install_obj = self.distribution.get_command_obj('install')
149 return (is_doc_dir and not self.no_doc and not install_obj.no_doc)
150 sub_commands = build.sub_commands + [('build_doc', with_doc)]
151 CMDCLASS['build'] = MyBuild
152
153
154 class MyInstall(install):
155 user_options = [('no-doc', None, "Don't build Spyder documentation")] \
156 + install.user_options
157 def __init__(self, *args, **kwargs):
158 install.__init__(self, *args, **kwargs)
159 self.no_doc = False
160 CMDCLASS['install'] = MyInstall
161
162
163 class MyBuildDoc(setup_command.BuildDoc):
164 def run(self):
165 build = self.get_finalized_command('build')
166 sys.path.insert(0, os.path.abspath(build.build_lib))
167 dirname = self.distribution.get_command_obj('build').build_purelib
168 self.builder_target_dir = osp.join(dirname, 'spyder', 'doc')
169
170 if not osp.exists(self.builder_target_dir):
171 os.mkdir(self.builder_target_dir)
172
173 hhc_exe = get_html_help_exe()
174 self.builder = "html" if hhc_exe is None else "htmlhelp"
175
176 try:
177 setup_command.BuildDoc.run(self)
178 except UnicodeDecodeError:
179 print("ERROR: unable to build documentation because Sphinx "\
180 "do not handle source path with non-ASCII characters. "\
181 "Please try to move the source package to another "\
182 "location (path with *only* ASCII characters).",
183 file=sys.stderr)
184 sys.path.pop(0)
185
186 # Building chm doc, if HTML Help Workshop is installed
187 if hhc_exe is not None:
188 fname = osp.join(self.builder_target_dir, 'Spyderdoc.chm')
189 subprocess.call('"%s" %s' % (hhc_exe, fname), shell=True)
190 if osp.isfile(fname):
191 dest = osp.join(dirname, 'spyder')
192 try:
193 shutil.move(fname, dest)
194 except shutil.Error:
195 print("Unable to replace %s" % dest)
196 shutil.rmtree(self.builder_target_dir)
197
198 CMDCLASS['build_doc'] = MyBuildDoc
199 except ImportError:
200 print('WARNING: unable to build documentation because Sphinx '\
201 'is not installed', file=sys.stderr)
202
203
204 #==============================================================================
205 # Main scripts
206 #==============================================================================
207 # NOTE: the '[...]_win_post_install.py' script is installed even on non-Windows
208 # platforms due to a bug in pip installation process (see Issue 1158)
209 SCRIPTS = ['%s_win_post_install.py' % NAME]
210 if PY3 and sys.platform.startswith('linux'):
211 SCRIPTS.append('spyder3')
212 else:
213 SCRIPTS.append('spyder')
214
215
216 #==============================================================================
217 # Files added to the package
218 #==============================================================================
219 EXTLIST = ['.mo', '.svg', '.png', '.css', '.html', '.js', '.chm', '.ini',
220 '.txt', '.rst', '.qss', '.ttf', '.json', '.c', '.cpp', '.java',
221 '.md', '.R', '.csv', '.pyx', '.ipynb']
222 if os.name == 'nt':
223 SCRIPTS += ['spyder.bat']
224 EXTLIST += ['.ico']
225
226
227 #==============================================================================
228 # Setup arguments
229 #==============================================================================
230 setup_args = dict(name=NAME,
231 version=__version__,
232 description='Scientific PYthon Development EnviRonment',
233 long_description=
234 """Spyder is an interactive Python development environment providing
235 MATLAB-like features in a simple and light-weighted software.
236 It also provides ready-to-use pure-Python widgets to your PyQt5 or
237 PyQt4 application: source code editor with syntax highlighting and
238 code introspection/analysis features, NumPy array editor, dictionary
239 editor, Python console, etc.""",
240 download_url='%s/files/%s-%s.zip' % (__project_url__, NAME, __version__),
241 author="The Spyder Project Contributors",
242 url=__project_url__,
243 license='MIT',
244 keywords='PyQt5 PyQt4 editor shell console widgets IDE',
245 platforms=['any'],
246 packages=get_packages(),
247 package_data={LIBNAME: get_package_data(LIBNAME, EXTLIST),
248 'spyder_breakpoints': get_package_data('spyder_breakpoints', EXTLIST),
249 'spyder_profiler': get_package_data('spyder_profiler', EXTLIST),
250 'spyder_pylint': get_package_data('spyder_pylint', EXTLIST),
251 'spyder_io_dcm': get_package_data('spyder_io_dcm', EXTLIST),
252 'spyder_io_hdf5': get_package_data('spyder_io_hdf5', EXTLIST),
253 },
254 scripts=[osp.join('scripts', fname) for fname in SCRIPTS],
255 data_files=get_data_files(),
256 classifiers=['License :: OSI Approved :: MIT License',
257 'Operating System :: MacOS',
258 'Operating System :: Microsoft :: Windows',
259 'Operating System :: POSIX :: Linux',
260 'Programming Language :: Python :: 2.7',
261 'Programming Language :: Python :: 3',
262 'Development Status :: 5 - Production/Stable',
263 'Topic :: Scientific/Engineering',
264 'Topic :: Software Development :: Widget Sets'],
265 cmdclass=CMDCLASS)
266
267
268 #==============================================================================
269 # Setuptools deps
270 #==============================================================================
271 if any(arg == 'bdist_wheel' for arg in sys.argv):
272 import setuptools # analysis:ignore
273
274 install_requires = [
275 'rope_py3k' if PY3 else 'rope>=0.9.4',
276 'jedi>=0.9.0',
277 'pyflakes',
278 'pygments>=2.0',
279 'qtconsole>=4.2.0',
280 'nbconvert',
281 'sphinx',
282 'pycodestyle',
283 'pylint',
284 'psutil',
285 'qtawesome>=0.4.1',
286 'qtpy>=1.1.0',
287 'pickleshare',
288 'pyzmq',
289 'chardet>=2.0.0',
290 'numpydoc',
291 ]
292
293 extras_require = {
294 'test:python_version == "2.7"': ['mock'],
295 'test': ['pytest',
296 'pytest-qt',
297 'pytest-cov',
298 'pytest-xvfb',
299 'mock',
300 'flaky',
301 'pandas',
302 'scipy',
303 'sympy',
304 'pillow',
305 'matplotlib',
306 'cython'],
307 }
308
309 if 'setuptools' in sys.modules:
310 setup_args['install_requires'] = install_requires
311 setup_args['extras_require'] = extras_require
312
313 setup_args['entry_points'] = {
314 'gui_scripts': [
315 '{} = spyder.app.start:main'.format(
316 'spyder3' if PY3 else 'spyder')
317 ]
318 }
319
320 setup_args.pop('scripts', None)
321
322
323 #==============================================================================
324 # Main setup
325 #==============================================================================
326 setup(**setup_args)
327
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -272,7 +272,7 @@
import setuptools # analysis:ignore
install_requires = [
- 'rope_py3k' if PY3 else 'rope>=0.9.4',
+ 'rope>=0.10.5',
'jedi>=0.9.0',
'pyflakes',
'pygments>=2.0',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -272,7 +272,7 @@\n import setuptools # analysis:ignore\n \n install_requires = [\n- 'rope_py3k' if PY3 else 'rope>=0.9.4',\n+ 'rope>=0.10.5',\n 'jedi>=0.9.0',\n 'pyflakes',\n 'pygments>=2.0',\n", "issue": "Move to support only Rope 0.10.5+\nThat's because 0.10.5 is the first version to support Python 2 and 3 in the same package.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 Spyder Project Contributors\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n\n\"\"\"\nSpyder\n======\n\nThe Scientific PYthon Development EnviRonment\n\"\"\"\n\nfrom __future__ import print_function\n\nimport os\nimport os.path as osp\nimport subprocess\nimport sys\nimport shutil\n\nfrom distutils.core import setup\nfrom distutils.command.build import build\nfrom distutils.command.install import install\nfrom distutils.command.install_data import install_data\n\n\n#==============================================================================\n# Check for Python 3\n#==============================================================================\nPY3 = sys.version_info[0] == 3\n\n\n#==============================================================================\n# Minimal Python version sanity check\n# Taken from the notebook setup.py -- Modified BSD License\n#==============================================================================\nv = sys.version_info\nif v[:2] < (2,7) or (v[0] >= 3 and v[:2] < (3,3)):\n error = \"ERROR: Spyder requires Python version 2.7 or 3.3 or above.\"\n print(error, file=sys.stderr)\n sys.exit(1)\n\n\n#==============================================================================\n# Constants\n#==============================================================================\nNAME = 'spyder'\nLIBNAME = 'spyder'\nfrom spyder import __version__, __project_url__\n\n\n#==============================================================================\n# Auxiliary functions\n#==============================================================================\ndef get_package_data(name, extlist):\n \"\"\"Return data files for package *name* with extensions in *extlist*\"\"\"\n flist = []\n # Workaround to replace os.path.relpath (not available until Python 2.6):\n offset = len(name)+len(os.pathsep)\n for dirpath, _dirnames, filenames in os.walk(name):\n for fname in filenames:\n if not fname.startswith('.') and osp.splitext(fname)[1] in extlist:\n flist.append(osp.join(dirpath, fname)[offset:])\n return flist\n\n\ndef get_subpackages(name):\n \"\"\"Return subpackages of package *name*\"\"\"\n splist = []\n for dirpath, _dirnames, _filenames in os.walk(name):\n if osp.isfile(osp.join(dirpath, '__init__.py')):\n splist.append(\".\".join(dirpath.split(os.sep)))\n return splist\n\n\ndef get_data_files():\n \"\"\"Return data_files in a platform dependent manner\"\"\"\n if sys.platform.startswith('linux'):\n if PY3:\n data_files = [('share/applications', ['scripts/spyder3.desktop']),\n ('share/pixmaps', ['img_src/spyder3.png']),\n ('share/metainfo', ['scripts/spyder3.appdata.xml'])]\n else:\n data_files = [('share/applications', ['scripts/spyder.desktop']),\n ('share/pixmaps', ['img_src/spyder.png'])]\n elif os.name == 'nt':\n data_files = [('scripts', ['img_src/spyder.ico',\n 'img_src/spyder_reset.ico'])]\n else:\n data_files = []\n return data_files\n\n\ndef get_packages():\n \"\"\"Return package list\"\"\"\n packages = (\n get_subpackages(LIBNAME)\n + 
get_subpackages('spyder_breakpoints')\n + get_subpackages('spyder_profiler')\n + get_subpackages('spyder_pylint')\n + get_subpackages('spyder_io_dcm')\n + get_subpackages('spyder_io_hdf5')\n )\n return packages\n\n\n#==============================================================================\n# Make Linux detect Spyder desktop file\n#==============================================================================\nclass MyInstallData(install_data):\n def run(self):\n install_data.run(self)\n if sys.platform.startswith('linux'):\n try:\n subprocess.call(['update-desktop-database'])\n except:\n print(\"ERROR: unable to update desktop database\",\n file=sys.stderr)\nCMDCLASS = {'install_data': MyInstallData}\n\n\n#==============================================================================\n# Sphinx build (documentation)\n#==============================================================================\ndef get_html_help_exe():\n \"\"\"Return HTML Help Workshop executable path (Windows only)\"\"\"\n if os.name == 'nt':\n hhc_base = r'C:\\Program Files%s\\HTML Help Workshop\\hhc.exe'\n for hhc_exe in (hhc_base % '', hhc_base % ' (x86)'):\n if osp.isfile(hhc_exe):\n return hhc_exe\n else:\n return\n\ntry:\n from sphinx import setup_command\n\n class MyBuild(build):\n user_options = [('no-doc', None, \"Don't build Spyder documentation\")] \\\n + build.user_options\n def __init__(self, *args, **kwargs):\n build.__init__(self, *args, **kwargs)\n self.no_doc = False\n def with_doc(self):\n setup_dir = os.path.dirname(os.path.abspath(__file__))\n is_doc_dir = os.path.isdir(os.path.join(setup_dir, 'doc'))\n install_obj = self.distribution.get_command_obj('install')\n return (is_doc_dir and not self.no_doc and not install_obj.no_doc)\n sub_commands = build.sub_commands + [('build_doc', with_doc)]\n CMDCLASS['build'] = MyBuild\n\n\n class MyInstall(install):\n user_options = [('no-doc', None, \"Don't build Spyder documentation\")] \\\n + install.user_options\n def __init__(self, *args, **kwargs):\n install.__init__(self, *args, **kwargs)\n self.no_doc = False\n CMDCLASS['install'] = MyInstall\n\n\n class MyBuildDoc(setup_command.BuildDoc):\n def run(self):\n build = self.get_finalized_command('build')\n sys.path.insert(0, os.path.abspath(build.build_lib))\n dirname = self.distribution.get_command_obj('build').build_purelib\n self.builder_target_dir = osp.join(dirname, 'spyder', 'doc')\n\n if not osp.exists(self.builder_target_dir):\n os.mkdir(self.builder_target_dir)\n\n hhc_exe = get_html_help_exe()\n self.builder = \"html\" if hhc_exe is None else \"htmlhelp\"\n\n try:\n setup_command.BuildDoc.run(self)\n except UnicodeDecodeError:\n print(\"ERROR: unable to build documentation because Sphinx \"\\\n \"do not handle source path with non-ASCII characters. 
\"\\\n \"Please try to move the source package to another \"\\\n \"location (path with *only* ASCII characters).\",\n file=sys.stderr)\n sys.path.pop(0)\n\n # Building chm doc, if HTML Help Workshop is installed\n if hhc_exe is not None:\n fname = osp.join(self.builder_target_dir, 'Spyderdoc.chm')\n subprocess.call('\"%s\" %s' % (hhc_exe, fname), shell=True)\n if osp.isfile(fname):\n dest = osp.join(dirname, 'spyder')\n try:\n shutil.move(fname, dest)\n except shutil.Error:\n print(\"Unable to replace %s\" % dest)\n shutil.rmtree(self.builder_target_dir)\n\n CMDCLASS['build_doc'] = MyBuildDoc\nexcept ImportError:\n print('WARNING: unable to build documentation because Sphinx '\\\n 'is not installed', file=sys.stderr)\n\n\n#==============================================================================\n# Main scripts\n#==============================================================================\n# NOTE: the '[...]_win_post_install.py' script is installed even on non-Windows\n# platforms due to a bug in pip installation process (see Issue 1158)\nSCRIPTS = ['%s_win_post_install.py' % NAME]\nif PY3 and sys.platform.startswith('linux'):\n SCRIPTS.append('spyder3')\nelse:\n SCRIPTS.append('spyder')\n\n\n#==============================================================================\n# Files added to the package\n#==============================================================================\nEXTLIST = ['.mo', '.svg', '.png', '.css', '.html', '.js', '.chm', '.ini',\n '.txt', '.rst', '.qss', '.ttf', '.json', '.c', '.cpp', '.java',\n '.md', '.R', '.csv', '.pyx', '.ipynb']\nif os.name == 'nt':\n SCRIPTS += ['spyder.bat']\n EXTLIST += ['.ico']\n\n\n#==============================================================================\n# Setup arguments\n#==============================================================================\nsetup_args = dict(name=NAME,\n version=__version__,\n description='Scientific PYthon Development EnviRonment',\n long_description=\n\"\"\"Spyder is an interactive Python development environment providing\nMATLAB-like features in a simple and light-weighted software.\nIt also provides ready-to-use pure-Python widgets to your PyQt5 or\nPyQt4 application: source code editor with syntax highlighting and\ncode introspection/analysis features, NumPy array editor, dictionary\neditor, Python console, etc.\"\"\",\n download_url='%s/files/%s-%s.zip' % (__project_url__, NAME, __version__),\n author=\"The Spyder Project Contributors\",\n url=__project_url__,\n license='MIT',\n keywords='PyQt5 PyQt4 editor shell console widgets IDE',\n platforms=['any'],\n packages=get_packages(),\n package_data={LIBNAME: get_package_data(LIBNAME, EXTLIST),\n 'spyder_breakpoints': get_package_data('spyder_breakpoints', EXTLIST),\n 'spyder_profiler': get_package_data('spyder_profiler', EXTLIST),\n 'spyder_pylint': get_package_data('spyder_pylint', EXTLIST),\n 'spyder_io_dcm': get_package_data('spyder_io_dcm', EXTLIST),\n 'spyder_io_hdf5': get_package_data('spyder_io_hdf5', EXTLIST),\n },\n scripts=[osp.join('scripts', fname) for fname in SCRIPTS],\n data_files=get_data_files(),\n classifiers=['License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Development Status :: 5 - Production/Stable',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Software Development :: Widget Sets'],\n 
cmdclass=CMDCLASS)\n\n\n#==============================================================================\n# Setuptools deps\n#==============================================================================\nif any(arg == 'bdist_wheel' for arg in sys.argv):\n import setuptools # analysis:ignore\n\ninstall_requires = [\n 'rope_py3k' if PY3 else 'rope>=0.9.4',\n 'jedi>=0.9.0',\n 'pyflakes',\n 'pygments>=2.0',\n 'qtconsole>=4.2.0',\n 'nbconvert',\n 'sphinx',\n 'pycodestyle',\n 'pylint',\n 'psutil',\n 'qtawesome>=0.4.1',\n 'qtpy>=1.1.0',\n 'pickleshare',\n 'pyzmq',\n 'chardet>=2.0.0',\n 'numpydoc',\n]\n\nextras_require = {\n 'test:python_version == \"2.7\"': ['mock'],\n 'test': ['pytest',\n 'pytest-qt',\n 'pytest-cov',\n 'pytest-xvfb',\n 'mock',\n 'flaky',\n 'pandas',\n 'scipy',\n 'sympy',\n 'pillow',\n 'matplotlib',\n 'cython'],\n}\n\nif 'setuptools' in sys.modules:\n setup_args['install_requires'] = install_requires\n setup_args['extras_require'] = extras_require\n\n setup_args['entry_points'] = {\n 'gui_scripts': [\n '{} = spyder.app.start:main'.format(\n 'spyder3' if PY3 else 'spyder')\n ]\n }\n\n setup_args.pop('scripts', None)\n\n\n#==============================================================================\n# Main setup\n#==============================================================================\nsetup(**setup_args)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 Spyder Project Contributors\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n\n\"\"\"\nSpyder\n======\n\nThe Scientific PYthon Development EnviRonment\n\"\"\"\n\nfrom __future__ import print_function\n\nimport os\nimport os.path as osp\nimport subprocess\nimport sys\nimport shutil\n\nfrom distutils.core import setup\nfrom distutils.command.build import build\nfrom distutils.command.install import install\nfrom distutils.command.install_data import install_data\n\n\n#==============================================================================\n# Check for Python 3\n#==============================================================================\nPY3 = sys.version_info[0] == 3\n\n\n#==============================================================================\n# Minimal Python version sanity check\n# Taken from the notebook setup.py -- Modified BSD License\n#==============================================================================\nv = sys.version_info\nif v[:2] < (2,7) or (v[0] >= 3 and v[:2] < (3,3)):\n error = \"ERROR: Spyder requires Python version 2.7 or 3.3 or above.\"\n print(error, file=sys.stderr)\n sys.exit(1)\n\n\n#==============================================================================\n# Constants\n#==============================================================================\nNAME = 'spyder'\nLIBNAME = 'spyder'\nfrom spyder import __version__, __project_url__\n\n\n#==============================================================================\n# Auxiliary functions\n#==============================================================================\ndef get_package_data(name, extlist):\n \"\"\"Return data files for package *name* with extensions in *extlist*\"\"\"\n flist = []\n # Workaround to replace os.path.relpath (not available until Python 2.6):\n offset = len(name)+len(os.pathsep)\n for dirpath, _dirnames, filenames in os.walk(name):\n for fname in filenames:\n if not fname.startswith('.') and osp.splitext(fname)[1] in extlist:\n flist.append(osp.join(dirpath, fname)[offset:])\n return flist\n\n\ndef 
get_subpackages(name):\n \"\"\"Return subpackages of package *name*\"\"\"\n splist = []\n for dirpath, _dirnames, _filenames in os.walk(name):\n if osp.isfile(osp.join(dirpath, '__init__.py')):\n splist.append(\".\".join(dirpath.split(os.sep)))\n return splist\n\n\ndef get_data_files():\n \"\"\"Return data_files in a platform dependent manner\"\"\"\n if sys.platform.startswith('linux'):\n if PY3:\n data_files = [('share/applications', ['scripts/spyder3.desktop']),\n ('share/pixmaps', ['img_src/spyder3.png']),\n ('share/metainfo', ['scripts/spyder3.appdata.xml'])]\n else:\n data_files = [('share/applications', ['scripts/spyder.desktop']),\n ('share/pixmaps', ['img_src/spyder.png'])]\n elif os.name == 'nt':\n data_files = [('scripts', ['img_src/spyder.ico',\n 'img_src/spyder_reset.ico'])]\n else:\n data_files = []\n return data_files\n\n\ndef get_packages():\n \"\"\"Return package list\"\"\"\n packages = (\n get_subpackages(LIBNAME)\n + get_subpackages('spyder_breakpoints')\n + get_subpackages('spyder_profiler')\n + get_subpackages('spyder_pylint')\n + get_subpackages('spyder_io_dcm')\n + get_subpackages('spyder_io_hdf5')\n )\n return packages\n\n\n#==============================================================================\n# Make Linux detect Spyder desktop file\n#==============================================================================\nclass MyInstallData(install_data):\n def run(self):\n install_data.run(self)\n if sys.platform.startswith('linux'):\n try:\n subprocess.call(['update-desktop-database'])\n except:\n print(\"ERROR: unable to update desktop database\",\n file=sys.stderr)\nCMDCLASS = {'install_data': MyInstallData}\n\n\n#==============================================================================\n# Sphinx build (documentation)\n#==============================================================================\ndef get_html_help_exe():\n \"\"\"Return HTML Help Workshop executable path (Windows only)\"\"\"\n if os.name == 'nt':\n hhc_base = r'C:\\Program Files%s\\HTML Help Workshop\\hhc.exe'\n for hhc_exe in (hhc_base % '', hhc_base % ' (x86)'):\n if osp.isfile(hhc_exe):\n return hhc_exe\n else:\n return\n\ntry:\n from sphinx import setup_command\n\n class MyBuild(build):\n user_options = [('no-doc', None, \"Don't build Spyder documentation\")] \\\n + build.user_options\n def __init__(self, *args, **kwargs):\n build.__init__(self, *args, **kwargs)\n self.no_doc = False\n def with_doc(self):\n setup_dir = os.path.dirname(os.path.abspath(__file__))\n is_doc_dir = os.path.isdir(os.path.join(setup_dir, 'doc'))\n install_obj = self.distribution.get_command_obj('install')\n return (is_doc_dir and not self.no_doc and not install_obj.no_doc)\n sub_commands = build.sub_commands + [('build_doc', with_doc)]\n CMDCLASS['build'] = MyBuild\n\n\n class MyInstall(install):\n user_options = [('no-doc', None, \"Don't build Spyder documentation\")] \\\n + install.user_options\n def __init__(self, *args, **kwargs):\n install.__init__(self, *args, **kwargs)\n self.no_doc = False\n CMDCLASS['install'] = MyInstall\n\n\n class MyBuildDoc(setup_command.BuildDoc):\n def run(self):\n build = self.get_finalized_command('build')\n sys.path.insert(0, os.path.abspath(build.build_lib))\n dirname = self.distribution.get_command_obj('build').build_purelib\n self.builder_target_dir = osp.join(dirname, 'spyder', 'doc')\n\n if not osp.exists(self.builder_target_dir):\n os.mkdir(self.builder_target_dir)\n\n hhc_exe = get_html_help_exe()\n self.builder = \"html\" if hhc_exe is None else \"htmlhelp\"\n\n 
try:\n setup_command.BuildDoc.run(self)\n except UnicodeDecodeError:\n print(\"ERROR: unable to build documentation because Sphinx \"\\\n \"do not handle source path with non-ASCII characters. \"\\\n \"Please try to move the source package to another \"\\\n \"location (path with *only* ASCII characters).\",\n file=sys.stderr)\n sys.path.pop(0)\n\n # Building chm doc, if HTML Help Workshop is installed\n if hhc_exe is not None:\n fname = osp.join(self.builder_target_dir, 'Spyderdoc.chm')\n subprocess.call('\"%s\" %s' % (hhc_exe, fname), shell=True)\n if osp.isfile(fname):\n dest = osp.join(dirname, 'spyder')\n try:\n shutil.move(fname, dest)\n except shutil.Error:\n print(\"Unable to replace %s\" % dest)\n shutil.rmtree(self.builder_target_dir)\n\n CMDCLASS['build_doc'] = MyBuildDoc\nexcept ImportError:\n print('WARNING: unable to build documentation because Sphinx '\\\n 'is not installed', file=sys.stderr)\n\n\n#==============================================================================\n# Main scripts\n#==============================================================================\n# NOTE: the '[...]_win_post_install.py' script is installed even on non-Windows\n# platforms due to a bug in pip installation process (see Issue 1158)\nSCRIPTS = ['%s_win_post_install.py' % NAME]\nif PY3 and sys.platform.startswith('linux'):\n SCRIPTS.append('spyder3')\nelse:\n SCRIPTS.append('spyder')\n\n\n#==============================================================================\n# Files added to the package\n#==============================================================================\nEXTLIST = ['.mo', '.svg', '.png', '.css', '.html', '.js', '.chm', '.ini',\n '.txt', '.rst', '.qss', '.ttf', '.json', '.c', '.cpp', '.java',\n '.md', '.R', '.csv', '.pyx', '.ipynb']\nif os.name == 'nt':\n SCRIPTS += ['spyder.bat']\n EXTLIST += ['.ico']\n\n\n#==============================================================================\n# Setup arguments\n#==============================================================================\nsetup_args = dict(name=NAME,\n version=__version__,\n description='Scientific PYthon Development EnviRonment',\n long_description=\n\"\"\"Spyder is an interactive Python development environment providing\nMATLAB-like features in a simple and light-weighted software.\nIt also provides ready-to-use pure-Python widgets to your PyQt5 or\nPyQt4 application: source code editor with syntax highlighting and\ncode introspection/analysis features, NumPy array editor, dictionary\neditor, Python console, etc.\"\"\",\n download_url='%s/files/%s-%s.zip' % (__project_url__, NAME, __version__),\n author=\"The Spyder Project Contributors\",\n url=__project_url__,\n license='MIT',\n keywords='PyQt5 PyQt4 editor shell console widgets IDE',\n platforms=['any'],\n packages=get_packages(),\n package_data={LIBNAME: get_package_data(LIBNAME, EXTLIST),\n 'spyder_breakpoints': get_package_data('spyder_breakpoints', EXTLIST),\n 'spyder_profiler': get_package_data('spyder_profiler', EXTLIST),\n 'spyder_pylint': get_package_data('spyder_pylint', EXTLIST),\n 'spyder_io_dcm': get_package_data('spyder_io_dcm', EXTLIST),\n 'spyder_io_hdf5': get_package_data('spyder_io_hdf5', EXTLIST),\n },\n scripts=[osp.join('scripts', fname) for fname in SCRIPTS],\n data_files=get_data_files(),\n classifiers=['License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: 
Python :: 3',\n 'Development Status :: 5 - Production/Stable',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Software Development :: Widget Sets'],\n cmdclass=CMDCLASS)\n\n\n#==============================================================================\n# Setuptools deps\n#==============================================================================\nif any(arg == 'bdist_wheel' for arg in sys.argv):\n import setuptools # analysis:ignore\n\ninstall_requires = [\n 'rope>=0.10.5',\n 'jedi>=0.9.0',\n 'pyflakes',\n 'pygments>=2.0',\n 'qtconsole>=4.2.0',\n 'nbconvert',\n 'sphinx',\n 'pycodestyle',\n 'pylint',\n 'psutil',\n 'qtawesome>=0.4.1',\n 'qtpy>=1.1.0',\n 'pickleshare',\n 'pyzmq',\n 'chardet>=2.0.0',\n 'numpydoc',\n]\n\nextras_require = {\n 'test:python_version == \"2.7\"': ['mock'],\n 'test': ['pytest',\n 'pytest-qt',\n 'pytest-cov',\n 'pytest-xvfb',\n 'mock',\n 'flaky',\n 'pandas',\n 'scipy',\n 'sympy',\n 'pillow',\n 'matplotlib',\n 'cython'],\n}\n\nif 'setuptools' in sys.modules:\n setup_args['install_requires'] = install_requires\n setup_args['extras_require'] = extras_require\n\n setup_args['entry_points'] = {\n 'gui_scripts': [\n '{} = spyder.app.start:main'.format(\n 'spyder3' if PY3 else 'spyder')\n ]\n }\n\n setup_args.pop('scripts', None)\n\n\n#==============================================================================\n# Main setup\n#==============================================================================\nsetup(**setup_args)\n", "path": "setup.py"}]} | 3,715 | 109 |
gh_patches_debug_17505 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-439 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect total number of batches
**Describe the bug**
Sometimes the total number of batches is computed incorrectly.
**To Reproduce**
Run the following code with the current master branch:
```
from time import sleep
import torch
from torch.utils.data import DataLoader, Dataset
import pytorch_lightning as pl
class DummyDataset(Dataset):
def __init__(self, n):
super().__init__()
self.n = n
def __len__(self):
return self.n
def __getitem__(self, idx):
return torch.rand(10)
class CoolSystem(pl.LightningModule):
def __init__(self):
super(CoolSystem, self).__init__()
self.layer = torch.nn.Linear(10, 10)
def forward(self, x):
return self.layer(x)
def training_step(self, batch, batch_nb):
sleep(1)
return {'loss': torch.mean(self.forward(batch) ** 2)}
def validation_step(self, batch, batch_nb):
sleep(1)
return {}
def validation_end(self, outputs):
return {}
def configure_optimizers(self):
return [torch.optim.Adam(self.layer.parameters())]
@pl.data_loader
def train_dataloader(self):
return DataLoader(DummyDataset(10), batch_size=1)
@pl.data_loader
def val_dataloader(self):
return DataLoader(DummyDataset(5), batch_size=1)
model = CoolSystem()
trainer = pl.Trainer(weights_summary=None, nb_sanity_val_steps=0, early_stop_callback=False,
val_percent_check=1.0, val_check_interval=0.5)
trainer.fit(model)
```
At first, the output will look like:
`67%|█████▋ | 10/15 [00:10<00:05, 1.05s/it, batch_nb=4, epoch=0, loss=0.194, v_nb=0]`
But at the end of the epoch it will look like:
`20it [00:20, 1.07s/it, batch_nb=9, epoch=0, loss=0.212, v_nb=0]`
Moreover, if you run
```
trainer = pl.Trainer(weights_summary=None, nb_sanity_val_steps=0, early_stop_callback=False,
val_percent_check=1.0, val_check_interval=0.5, check_val_every_n_epoch=10)
```
The first epoch will end at the point
`67%|█████▋ | 10/15 [00:09<00:04, 1.01it/s, batch_nb=8, epoch=1, loss=0.069, v_nb=0]`
**Expected behavior**
Correct total number of batches.
**Possible solution**
Currently `total_batches = nb_training_batches + nb_val_batches`, where `nb_val_batches` is the number of batches in a single validation loop. The problem is that there can be several validation loops during one training epoch, and, because of the `check_val_every_n_epoch` parameter, there may also be no validation loop at all.
With this in mind, it looks like the correct formula is:
```
is_val_epoch = (current_epoch + 1) % check_val_every_n_epoch == 0
val_checks_per_epoch = nb_training_batches // val_check_batch if is_val_epoch else 0
total_batches = nb_training_batches + nb_val_batches * val_checks_per_epoch
```
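As a rough sanity check against the reproduction above (a sketch only; it assumes the trainer derives `val_check_batch` as `int(nb_training_batches * val_check_interval)`):
```
# numbers from the reproduction script: 10 train batches, 5 val batches (batch_size=1)
nb_training_batches = 10
nb_val_batches = 5
val_check_interval = 0.5
check_val_every_n_epoch = 1
current_epoch = 0

# assumed derivation of val_check_batch from val_check_interval
val_check_batch = int(nb_training_batches * val_check_interval)             # 5

# current estimate assumes exactly one validation loop per epoch
buggy_total = nb_training_batches + nb_val_batches                          # 15

# proposed estimate counts every validation loop actually run this epoch
is_val_epoch = (current_epoch + 1) % check_val_every_n_epoch == 0
val_checks_per_epoch = nb_training_batches // val_check_batch if is_val_epoch else 0
fixed_total = nb_training_batches + nb_val_batches * val_checks_per_epoch   # 20
```
This matches the observed behaviour: the progress bar is created with 15 iterations while 20 batches are actually processed, and with `check_val_every_n_epoch=10` the first epoch runs no validation, so the true total is 10 rather than 15.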
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_lightning/trainer/train_loop_mixin.py`
Content:
```
1 import numpy as np
2
3 try:
4 from apex import amp
5
6 APEX_AVAILABLE = True
7 except ImportError:
8 APEX_AVAILABLE = False
9
10
11 class TrainerTrainLoopMixin(object):
12
13 def train(self):
14 # run all epochs
15 for epoch_nb in range(self.current_epoch, self.max_nb_epochs):
16 # set seed for distributed sampler (enables shuffling for each epoch)
17 if self.use_ddp and hasattr(self.get_train_dataloader().sampler, 'set_epoch'):
18 self.get_train_dataloader().sampler.set_epoch(epoch_nb)
19
20 # get model
21 model = self.get_model()
22
23 # update training progress in trainer and model
24 model.current_epoch = epoch_nb
25 self.current_epoch = epoch_nb
26 self.total_batches = self.nb_training_batches + self.nb_val_batches
27 self.batch_loss_value = 0 # accumulated grads
28
29 # limit the number of batches to 1 in fast_dev_run
30 if self.fast_dev_run:
31 self.total_batches = 1
32
33 # init progress_bar when requested
34 if self.show_progress_bar:
35 nb_iterations = self.total_batches
36
37 # for iterable train loader, the progress bar never ends
38 if self.is_iterable_train_dataloader:
39 nb_iterations = float('inf')
40 self.progress_bar.reset(nb_iterations)
41
42 # changing gradient according accumulation_scheduler
43 self.accumulation_scheduler.on_epoch_begin(epoch_nb, self)
44
45 # -----------------
46 # RUN TNG EPOCH
47 # -----------------
48 self.run_training_epoch()
49
50 # update LR schedulers
51 if self.lr_schedulers is not None:
52 for lr_scheduler in self.lr_schedulers:
53 lr_scheduler.step(self.current_epoch)
54
55 # early stopping
56 met_min_epochs = epoch_nb > self.min_nb_epochs
57 if self.enable_early_stop and (met_min_epochs or self.fast_dev_run):
58 should_stop = self.early_stop_callback.on_epoch_end(epoch=epoch_nb,
59 logs=self.callback_metrics)
60 # stop training
61 stop = should_stop and met_min_epochs
62 if stop:
63 return
64
65 if self.logger is not None:
66 self.logger.finalize("success")
67
68 def run_training_epoch(self):
69 # before epoch hook
70 if self.is_function_implemented('on_epoch_start'):
71 model = self.get_model()
72 model.on_epoch_start()
73
74 # run epoch
75 for batch_nb, batch in enumerate(self.get_train_dataloader()):
76 self.batch_nb = batch_nb
77
78 model = self.get_model()
79 model.global_step = self.global_step
80
81 # ---------------
82 # RUN TRAIN STEP
83 # ---------------
84 output = self.run_training_batch(batch, batch_nb)
85 batch_result, grad_norm_dic, batch_step_metrics = output
86
87 # when returning -1 from train_step, we end epoch early
88 early_stop_epoch = batch_result == -1
89
90 # ---------------
91 # RUN VAL STEP
92 # ---------------
93 is_val_check_batch = (batch_nb + 1) % self.val_check_batch == 0
94 can_check_epoch = (self.current_epoch + 1) % self.check_val_every_n_epoch == 0
95 should_check_val = ((is_val_check_batch or early_stop_epoch) and can_check_epoch)
96
97 # fast_dev_run always forces val checking after train batch
98 if self.fast_dev_run or should_check_val:
99 self.run_evaluation(test=self.testing)
100
101 # when logs should be saved
102 should_save_log = (batch_nb + 1) % self.log_save_interval == 0 or early_stop_epoch
103 if should_save_log or self.fast_dev_run:
104 if self.proc_rank == 0 and self.logger is not None:
105 self.logger.save()
106
107 # when metrics should be logged
108 should_log_metrics = batch_nb % self.row_log_interval == 0 or early_stop_epoch
109 if should_log_metrics or self.fast_dev_run:
110 # logs user requested information to logger
111 self.log_metrics(batch_step_metrics, grad_norm_dic)
112
113 self.global_step += 1
114 self.total_batch_nb += 1
115
116 # end epoch early
117 # stop when the flag is changed or we've gone past the amount
118 # requested in the batches
119 if early_stop_epoch or self.fast_dev_run:
120 break
121
122 # stop epoch if we limited nb batches
123 met_batch_limit = batch_nb >= self.nb_training_batches
124 if met_batch_limit:
125 break
126
127 # epoch end hook
128 if self.is_function_implemented('on_epoch_end'):
129 model = self.get_model()
130 model.on_epoch_end()
131
132 def run_training_batch(self, batch, batch_nb):
133 # track grad norms
134 grad_norm_dic = {}
135
136 # track all metrics for callbacks
137 all_callback_metrics = []
138
139 # track metrics to log
140 all_log_metrics = []
141
142 if batch is None:
143 return 0, grad_norm_dic
144
145 # hook
146 if self.is_function_implemented('on_batch_start'):
147 model_ref = self.get_model()
148 response = model_ref.on_batch_start(batch)
149
150 if response == -1:
151 return -1, grad_norm_dic
152
153 if self.show_progress_bar:
154 self.progress_bar.update(1)
155
156 # call training_step once per optimizer
157 for opt_idx, optimizer in enumerate(self.optimizers):
158
159 # wrap the forward step in a closure so second order methods work
160 def optimizer_closure():
161 # forward pass
162 output = self.training_forward(batch, batch_nb, opt_idx)
163 closure_loss, progress_bar_metrics, log_metrics, callback_metrics = output
164
165 # track metrics for callbacks
166 all_callback_metrics.append(callback_metrics)
167
168 # track progress bar metrics
169 self.add_tqdm_metrics(progress_bar_metrics)
170 all_log_metrics.append(log_metrics)
171
172 # accumulate loss
173 # (if accumulate_grad_batches = 1 no effect)
174 closure_loss = closure_loss / self.accumulate_grad_batches
175
176 # backward pass
177 # done in hook so user can overwrite if needed
178 model_ref = self.get_model()
179 model_ref.backward(self.use_amp, closure_loss, optimizer)
180
181 # insert after step hook
182 if self.is_function_implemented('on_after_backward'):
183 model_ref = self.get_model()
184 model_ref.on_after_backward()
185
186 return closure_loss
187
188 # calculate loss
189 loss = optimizer_closure()
190
191 # nan grads
192 if self.print_nan_grads:
193 self.print_nan_gradients()
194
195 # track total loss for logging (avoid mem leaks)
196 self.batch_loss_value += loss.item()
197
198 # gradient update with accumulated gradients
199 if (self.batch_nb + 1) % self.accumulate_grad_batches == 0:
200
201 # track gradient norms when requested
202 if batch_nb % self.row_log_interval == 0:
203 if self.track_grad_norm > 0:
204 model = self.get_model()
205 grad_norm_dic = model.grad_norm(self.track_grad_norm)
206
207 # clip gradients
208 self.clip_gradients()
209
210 # calls .step(), .zero_grad()
211 # override function to modify this behavior
212 model = self.get_model()
213 model.optimizer_step(self.current_epoch, batch_nb,
214 optimizer, opt_idx, optimizer_closure)
215
216 # calculate running loss for display
217 self.running_loss.append(self.batch_loss_value)
218 self.batch_loss_value = 0
219 self.avg_loss = np.mean(self.running_loss[-100:])
220
221 # update progress bar
222 if self.show_progress_bar:
223 # add model specific metrics
224 tqdm_metrics = self.training_tqdm_dict
225 self.progress_bar.set_postfix(**tqdm_metrics)
226
227 # activate batch end hook
228 if self.is_function_implemented('on_batch_end'):
229 model = self.get_model()
230 model.on_batch_end()
231
232 # collapse all metrics into one dict
233 all_log_metrics = {k: v for d in all_log_metrics for k, v in d.items()}
234
235 # track all metrics for callbacks
236 self.callback_metrics = {k: v for d in all_callback_metrics for k, v in d.items()}
237
238 return 0, grad_norm_dic, all_log_metrics
239
240 def training_forward(self, batch, batch_nb, opt_idx):
241 """
242 Handle forward for each training case (distributed, single gpu, etc...)
243 :param batch:
244 :param batch_nb:
245 :return:
246 """
247 # ---------------
248 # FORWARD
249 # ---------------
250 # enable not needing to add opt_idx to training_step
251 args = [batch, batch_nb]
252 if len(self.optimizers) > 1:
253 args.append(opt_idx)
254
255 if self.use_ddp or self.use_ddp2:
256 output = self.model(*args)
257 elif self.use_dp:
258 output = self.model(*args)
259 elif self.single_gpu:
260 gpu_id = 0
261 if type(self.data_parallel_device_ids) is list:
262 gpu_id = self.data_parallel_device_ids[0]
263 batch = self.transfer_batch_to_gpu(batch, gpu_id)
264 args[0] = batch
265 output = self.model.training_step(*args)
266
267 else:
268 output = self.model.training_step(*args)
269
270 # format and reduce outputs accordingly
271 output = self.process_output(output, train=True)
272 loss, progress_bar_metrics, log_metrics, callback_metrics = output
273 return loss, progress_bar_metrics, log_metrics, callback_metrics
274
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytorch_lightning/trainer/train_loop_mixin.py b/pytorch_lightning/trainer/train_loop_mixin.py
--- a/pytorch_lightning/trainer/train_loop_mixin.py
+++ b/pytorch_lightning/trainer/train_loop_mixin.py
@@ -23,7 +23,15 @@
# update training progress in trainer and model
model.current_epoch = epoch_nb
self.current_epoch = epoch_nb
- self.total_batches = self.nb_training_batches + self.nb_val_batches
+
+ # val can be checked multiple times in epoch
+ is_val_epoch = (self.current_epoch + 1) % self.check_val_every_n_epoch == 0
+ val_checks_per_epoch = self.nb_training_batches // self.val_check_batch
+ val_checks_per_epoch = val_checks_per_epoch if is_val_epoch else 0
+
+ # total batches includes multiple val checks
+ self.total_batches = (self.nb_training_batches +
+ self.nb_val_batches * val_checks_per_epoch)
self.batch_loss_value = 0 # accumulated grads
# limit the number of batches to 1 in fast_dev_run
| {"golden_diff": "diff --git a/pytorch_lightning/trainer/train_loop_mixin.py b/pytorch_lightning/trainer/train_loop_mixin.py\n--- a/pytorch_lightning/trainer/train_loop_mixin.py\n+++ b/pytorch_lightning/trainer/train_loop_mixin.py\n@@ -23,7 +23,15 @@\n # update training progress in trainer and model\n model.current_epoch = epoch_nb\n self.current_epoch = epoch_nb\n- self.total_batches = self.nb_training_batches + self.nb_val_batches\n+\n+ # val can be checked multiple times in epoch\n+ is_val_epoch = (self.current_epoch + 1) % self.check_val_every_n_epoch == 0\n+ val_checks_per_epoch = self.nb_training_batches // self.val_check_batch\n+ val_checks_per_epoch = val_checks_per_epoch if is_val_epoch else 0\n+\n+ # total batches includes multiple val checks\n+ self.total_batches = (self.nb_training_batches +\n+ self.nb_val_batches * val_checks_per_epoch)\n self.batch_loss_value = 0 # accumulated grads\n \n # limit the number of batches to 1 in fast_dev_run\n", "issue": "Incorrect total number of batches\n**Describe the bug**\r\nSometimes total number of batches is computed wrong.\r\n\r\n**To Reproduce**\r\nRun the following code with current master branch:\r\n```\r\nfrom time import sleep\r\nimport torch\r\nfrom torch.utils.data import DataLoader, Dataset\r\n\r\nimport pytorch_lightning as pl\r\n\r\n\r\nclass DummyDataset(Dataset):\r\n def __init__(self, n):\r\n super().__init__()\r\n self.n = n\r\n\r\n def __len__(self):\r\n return self.n\r\n\r\n def __getitem__(self, idx):\r\n return torch.rand(10)\r\n\r\n\r\nclass CoolSystem(pl.LightningModule):\r\n def __init__(self):\r\n super(CoolSystem, self).__init__()\r\n self.layer = torch.nn.Linear(10, 10)\r\n\r\n def forward(self, x):\r\n return self.layer(x)\r\n\r\n def training_step(self, batch, batch_nb):\r\n sleep(1)\r\n return {'loss': torch.mean(self.forward(batch) ** 2)}\r\n\r\n def validation_step(self, batch, batch_nb):\r\n sleep(1)\r\n return {}\r\n\r\n def validation_end(self, outputs):\r\n return {}\r\n\r\n def configure_optimizers(self):\r\n return [torch.optim.Adam(self.layer.parameters())]\r\n\r\n @pl.data_loader\r\n def train_dataloader(self):\r\n return DataLoader(DummyDataset(10), batch_size=1)\r\n\r\n @pl.data_loader\r\n def val_dataloader(self):\r\n return DataLoader(DummyDataset(5), batch_size=1)\r\n\r\nmodel = CoolSystem()\r\ntrainer = pl.Trainer(weights_summary=None, nb_sanity_val_steps=0, early_stop_callback=False,\r\n val_percent_check=1.0, val_check_interval=0.5)\r\ntrainer.fit(model)\r\n```\r\nAt first output will look like:\r\n`67%|\u2588\u2588\u2588\u2588\u2588\u258b | 10/15 [00:10<00:05, 1.05s/it, batch_nb=4, epoch=0, loss=0.194, v_nb=0]`\r\n\r\nBut at the end of the epoch it will be like:\r\n`20it [00:20, 1.07s/it, batch_nb=9, epoch=0, loss=0.212, v_nb=0]`\r\n\r\nMoreover, if you run\r\n```\r\ntrainer = pl.Trainer(weights_summary=None, nb_sanity_val_steps=0, early_stop_callback=False,\r\n val_percent_check=1.0, val_check_interval=0.5, check_val_every_n_epoch=10)\r\n```\r\nThe first epoch will end at the point \r\n`67%|\u2588\u2588\u2588\u2588\u2588\u258b | 10/15 [00:09<00:04, 1.01it/s, batch_nb=8, epoch=1, loss=0.069, v_nb=0]` \r\n\r\n**Expected behavior**\r\nCorrect total number of batches.\r\n\r\n**Possible solution**\r\nNow we have `total_batches = nb_training_batches + nb_val_batches`, where `nb_val_batches` is the number of batches of only one validation loop. And the problem arises because actually there can be several validation loops during one training epoch. 
Moreover there is a parameter `check_val_every_n_epoch` and thus there can be no validation loops at all.\r\n\r\nWith this in mind, it looks like the correct formula is:\r\n```\r\nis_val_epoch = (current_epoch + 1) % check_val_every_n_epoch == 0\r\nval_checks_per_epoch = nb_training_batches // val_check_batch if is_val_epoch else 0\r\ntotal_batches = nb_training_batches + nb_val_batches * val_checks_per_epoch\r\n```\r\n\n", "before_files": [{"content": "import numpy as np\n\ntry:\n from apex import amp\n\n APEX_AVAILABLE = True\nexcept ImportError:\n APEX_AVAILABLE = False\n\n\nclass TrainerTrainLoopMixin(object):\n\n def train(self):\n # run all epochs\n for epoch_nb in range(self.current_epoch, self.max_nb_epochs):\n # set seed for distributed sampler (enables shuffling for each epoch)\n if self.use_ddp and hasattr(self.get_train_dataloader().sampler, 'set_epoch'):\n self.get_train_dataloader().sampler.set_epoch(epoch_nb)\n\n # get model\n model = self.get_model()\n\n # update training progress in trainer and model\n model.current_epoch = epoch_nb\n self.current_epoch = epoch_nb\n self.total_batches = self.nb_training_batches + self.nb_val_batches\n self.batch_loss_value = 0 # accumulated grads\n\n # limit the number of batches to 1 in fast_dev_run\n if self.fast_dev_run:\n self.total_batches = 1\n\n # init progress_bar when requested\n if self.show_progress_bar:\n nb_iterations = self.total_batches\n\n # for iterable train loader, the progress bar never ends\n if self.is_iterable_train_dataloader:\n nb_iterations = float('inf')\n self.progress_bar.reset(nb_iterations)\n\n # changing gradient according accumulation_scheduler\n self.accumulation_scheduler.on_epoch_begin(epoch_nb, self)\n\n # -----------------\n # RUN TNG EPOCH\n # -----------------\n self.run_training_epoch()\n\n # update LR schedulers\n if self.lr_schedulers is not None:\n for lr_scheduler in self.lr_schedulers:\n lr_scheduler.step(self.current_epoch)\n\n # early stopping\n met_min_epochs = epoch_nb > self.min_nb_epochs\n if self.enable_early_stop and (met_min_epochs or self.fast_dev_run):\n should_stop = self.early_stop_callback.on_epoch_end(epoch=epoch_nb,\n logs=self.callback_metrics)\n # stop training\n stop = should_stop and met_min_epochs\n if stop:\n return\n\n if self.logger is not None:\n self.logger.finalize(\"success\")\n\n def run_training_epoch(self):\n # before epoch hook\n if self.is_function_implemented('on_epoch_start'):\n model = self.get_model()\n model.on_epoch_start()\n\n # run epoch\n for batch_nb, batch in enumerate(self.get_train_dataloader()):\n self.batch_nb = batch_nb\n\n model = self.get_model()\n model.global_step = self.global_step\n\n # ---------------\n # RUN TRAIN STEP\n # ---------------\n output = self.run_training_batch(batch, batch_nb)\n batch_result, grad_norm_dic, batch_step_metrics = output\n\n # when returning -1 from train_step, we end epoch early\n early_stop_epoch = batch_result == -1\n\n # ---------------\n # RUN VAL STEP\n # ---------------\n is_val_check_batch = (batch_nb + 1) % self.val_check_batch == 0\n can_check_epoch = (self.current_epoch + 1) % self.check_val_every_n_epoch == 0\n should_check_val = ((is_val_check_batch or early_stop_epoch) and can_check_epoch)\n\n # fast_dev_run always forces val checking after train batch\n if self.fast_dev_run or should_check_val:\n self.run_evaluation(test=self.testing)\n\n # when logs should be saved\n should_save_log = (batch_nb + 1) % self.log_save_interval == 0 or early_stop_epoch\n if should_save_log or self.fast_dev_run:\n if 
self.proc_rank == 0 and self.logger is not None:\n self.logger.save()\n\n # when metrics should be logged\n should_log_metrics = batch_nb % self.row_log_interval == 0 or early_stop_epoch\n if should_log_metrics or self.fast_dev_run:\n # logs user requested information to logger\n self.log_metrics(batch_step_metrics, grad_norm_dic)\n\n self.global_step += 1\n self.total_batch_nb += 1\n\n # end epoch early\n # stop when the flag is changed or we've gone past the amount\n # requested in the batches\n if early_stop_epoch or self.fast_dev_run:\n break\n\n # stop epoch if we limited nb batches\n met_batch_limit = batch_nb >= self.nb_training_batches\n if met_batch_limit:\n break\n\n # epoch end hook\n if self.is_function_implemented('on_epoch_end'):\n model = self.get_model()\n model.on_epoch_end()\n\n def run_training_batch(self, batch, batch_nb):\n # track grad norms\n grad_norm_dic = {}\n\n # track all metrics for callbacks\n all_callback_metrics = []\n\n # track metrics to log\n all_log_metrics = []\n\n if batch is None:\n return 0, grad_norm_dic\n\n # hook\n if self.is_function_implemented('on_batch_start'):\n model_ref = self.get_model()\n response = model_ref.on_batch_start(batch)\n\n if response == -1:\n return -1, grad_norm_dic\n\n if self.show_progress_bar:\n self.progress_bar.update(1)\n\n # call training_step once per optimizer\n for opt_idx, optimizer in enumerate(self.optimizers):\n\n # wrap the forward step in a closure so second order methods work\n def optimizer_closure():\n # forward pass\n output = self.training_forward(batch, batch_nb, opt_idx)\n closure_loss, progress_bar_metrics, log_metrics, callback_metrics = output\n\n # track metrics for callbacks\n all_callback_metrics.append(callback_metrics)\n\n # track progress bar metrics\n self.add_tqdm_metrics(progress_bar_metrics)\n all_log_metrics.append(log_metrics)\n\n # accumulate loss\n # (if accumulate_grad_batches = 1 no effect)\n closure_loss = closure_loss / self.accumulate_grad_batches\n\n # backward pass\n # done in hook so user can overwrite if needed\n model_ref = self.get_model()\n model_ref.backward(self.use_amp, closure_loss, optimizer)\n\n # insert after step hook\n if self.is_function_implemented('on_after_backward'):\n model_ref = self.get_model()\n model_ref.on_after_backward()\n\n return closure_loss\n\n # calculate loss\n loss = optimizer_closure()\n\n # nan grads\n if self.print_nan_grads:\n self.print_nan_gradients()\n\n # track total loss for logging (avoid mem leaks)\n self.batch_loss_value += loss.item()\n\n # gradient update with accumulated gradients\n if (self.batch_nb + 1) % self.accumulate_grad_batches == 0:\n\n # track gradient norms when requested\n if batch_nb % self.row_log_interval == 0:\n if self.track_grad_norm > 0:\n model = self.get_model()\n grad_norm_dic = model.grad_norm(self.track_grad_norm)\n\n # clip gradients\n self.clip_gradients()\n\n # calls .step(), .zero_grad()\n # override function to modify this behavior\n model = self.get_model()\n model.optimizer_step(self.current_epoch, batch_nb,\n optimizer, opt_idx, optimizer_closure)\n\n # calculate running loss for display\n self.running_loss.append(self.batch_loss_value)\n self.batch_loss_value = 0\n self.avg_loss = np.mean(self.running_loss[-100:])\n\n # update progress bar\n if self.show_progress_bar:\n # add model specific metrics\n tqdm_metrics = self.training_tqdm_dict\n self.progress_bar.set_postfix(**tqdm_metrics)\n\n # activate batch end hook\n if self.is_function_implemented('on_batch_end'):\n model = self.get_model()\n 
model.on_batch_end()\n\n # collapse all metrics into one dict\n all_log_metrics = {k: v for d in all_log_metrics for k, v in d.items()}\n\n # track all metrics for callbacks\n self.callback_metrics = {k: v for d in all_callback_metrics for k, v in d.items()}\n\n return 0, grad_norm_dic, all_log_metrics\n\n def training_forward(self, batch, batch_nb, opt_idx):\n \"\"\"\n Handle forward for each training case (distributed, single gpu, etc...)\n :param batch:\n :param batch_nb:\n :return:\n \"\"\"\n # ---------------\n # FORWARD\n # ---------------\n # enable not needing to add opt_idx to training_step\n args = [batch, batch_nb]\n if len(self.optimizers) > 1:\n args.append(opt_idx)\n\n if self.use_ddp or self.use_ddp2:\n output = self.model(*args)\n elif self.use_dp:\n output = self.model(*args)\n elif self.single_gpu:\n gpu_id = 0\n if type(self.data_parallel_device_ids) is list:\n gpu_id = self.data_parallel_device_ids[0]\n batch = self.transfer_batch_to_gpu(batch, gpu_id)\n args[0] = batch\n output = self.model.training_step(*args)\n\n else:\n output = self.model.training_step(*args)\n\n # format and reduce outputs accordingly\n output = self.process_output(output, train=True)\n loss, progress_bar_metrics, log_metrics, callback_metrics = output\n return loss, progress_bar_metrics, log_metrics, callback_metrics\n", "path": "pytorch_lightning/trainer/train_loop_mixin.py"}], "after_files": [{"content": "import numpy as np\n\ntry:\n from apex import amp\n\n APEX_AVAILABLE = True\nexcept ImportError:\n APEX_AVAILABLE = False\n\n\nclass TrainerTrainLoopMixin(object):\n\n def train(self):\n # run all epochs\n for epoch_nb in range(self.current_epoch, self.max_nb_epochs):\n # set seed for distributed sampler (enables shuffling for each epoch)\n if self.use_ddp and hasattr(self.get_train_dataloader().sampler, 'set_epoch'):\n self.get_train_dataloader().sampler.set_epoch(epoch_nb)\n\n # get model\n model = self.get_model()\n\n # update training progress in trainer and model\n model.current_epoch = epoch_nb\n self.current_epoch = epoch_nb\n\n # val can be checked multiple times in epoch\n is_val_epoch = (self.current_epoch + 1) % self.check_val_every_n_epoch == 0\n val_checks_per_epoch = self.nb_training_batches // self.val_check_batch\n val_checks_per_epoch = val_checks_per_epoch if is_val_epoch else 0\n\n # total batches includes multiple val checks\n self.total_batches = (self.nb_training_batches +\n self.nb_val_batches * val_checks_per_epoch)\n self.batch_loss_value = 0 # accumulated grads\n\n # limit the number of batches to 1 in fast_dev_run\n if self.fast_dev_run:\n self.total_batches = 1\n\n # init progress_bar when requested\n if self.show_progress_bar:\n nb_iterations = self.total_batches\n\n # for iterable train loader, the progress bar never ends\n if self.is_iterable_train_dataloader:\n nb_iterations = float('inf')\n self.progress_bar.reset(nb_iterations)\n\n # changing gradient according accumulation_scheduler\n self.accumulation_scheduler.on_epoch_begin(epoch_nb, self)\n\n # -----------------\n # RUN TNG EPOCH\n # -----------------\n self.run_training_epoch()\n\n # update LR schedulers\n if self.lr_schedulers is not None:\n for lr_scheduler in self.lr_schedulers:\n lr_scheduler.step(self.current_epoch)\n\n # early stopping\n met_min_epochs = epoch_nb > self.min_nb_epochs\n if self.enable_early_stop and (met_min_epochs or self.fast_dev_run):\n should_stop = self.early_stop_callback.on_epoch_end(epoch=epoch_nb,\n logs=self.callback_metrics)\n # stop training\n stop = should_stop and 
met_min_epochs\n if stop:\n return\n\n if self.logger is not None:\n self.logger.finalize(\"success\")\n\n def run_training_epoch(self):\n # before epoch hook\n if self.is_function_implemented('on_epoch_start'):\n model = self.get_model()\n model.on_epoch_start()\n\n # run epoch\n for batch_nb, batch in enumerate(self.get_train_dataloader()):\n self.batch_nb = batch_nb\n\n model = self.get_model()\n model.global_step = self.global_step\n\n # ---------------\n # RUN TRAIN STEP\n # ---------------\n output = self.run_training_batch(batch, batch_nb)\n batch_result, grad_norm_dic, batch_step_metrics = output\n\n # when returning -1 from train_step, we end epoch early\n early_stop_epoch = batch_result == -1\n\n # ---------------\n # RUN VAL STEP\n # ---------------\n is_val_check_batch = (batch_nb + 1) % self.val_check_batch == 0\n can_check_epoch = (self.current_epoch + 1) % self.check_val_every_n_epoch == 0\n should_check_val = ((is_val_check_batch or early_stop_epoch) and can_check_epoch)\n\n # fast_dev_run always forces val checking after train batch\n if self.fast_dev_run or should_check_val:\n self.run_evaluation(test=self.testing)\n\n # when logs should be saved\n should_save_log = (batch_nb + 1) % self.log_save_interval == 0 or early_stop_epoch\n if should_save_log or self.fast_dev_run:\n if self.proc_rank == 0 and self.logger is not None:\n self.logger.save()\n\n # when metrics should be logged\n should_log_metrics = batch_nb % self.row_log_interval == 0 or early_stop_epoch\n if should_log_metrics or self.fast_dev_run:\n # logs user requested information to logger\n self.log_metrics(batch_step_metrics, grad_norm_dic)\n\n self.global_step += 1\n self.total_batch_nb += 1\n\n # end epoch early\n # stop when the flag is changed or we've gone past the amount\n # requested in the batches\n if early_stop_epoch or self.fast_dev_run:\n break\n\n # stop epoch if we limited nb batches\n met_batch_limit = batch_nb >= self.nb_training_batches\n if met_batch_limit:\n break\n\n # epoch end hook\n if self.is_function_implemented('on_epoch_end'):\n model = self.get_model()\n model.on_epoch_end()\n\n def run_training_batch(self, batch, batch_nb):\n # track grad norms\n grad_norm_dic = {}\n\n # track all metrics for callbacks\n all_callback_metrics = []\n\n # track metrics to log\n all_log_metrics = []\n\n if batch is None:\n return 0, grad_norm_dic\n\n # hook\n if self.is_function_implemented('on_batch_start'):\n model_ref = self.get_model()\n response = model_ref.on_batch_start(batch)\n\n if response == -1:\n return -1, grad_norm_dic\n\n if self.show_progress_bar:\n self.progress_bar.update(1)\n\n # call training_step once per optimizer\n for opt_idx, optimizer in enumerate(self.optimizers):\n\n # wrap the forward step in a closure so second order methods work\n def optimizer_closure():\n # forward pass\n output = self.training_forward(batch, batch_nb, opt_idx)\n closure_loss, progress_bar_metrics, log_metrics, callback_metrics = output\n\n # track metrics for callbacks\n all_callback_metrics.append(callback_metrics)\n\n # track progress bar metrics\n self.add_tqdm_metrics(progress_bar_metrics)\n all_log_metrics.append(log_metrics)\n\n # accumulate loss\n # (if accumulate_grad_batches = 1 no effect)\n closure_loss = closure_loss / self.accumulate_grad_batches\n\n # backward pass\n # done in hook so user can overwrite if needed\n model_ref = self.get_model()\n model_ref.backward(self.use_amp, closure_loss, optimizer)\n\n # insert after step hook\n if self.is_function_implemented('on_after_backward'):\n 
model_ref = self.get_model()\n model_ref.on_after_backward()\n\n return closure_loss\n\n # calculate loss\n loss = optimizer_closure()\n\n # nan grads\n if self.print_nan_grads:\n self.print_nan_gradients()\n\n # track total loss for logging (avoid mem leaks)\n self.batch_loss_value += loss.item()\n\n # gradient update with accumulated gradients\n if (self.batch_nb + 1) % self.accumulate_grad_batches == 0:\n\n # track gradient norms when requested\n if batch_nb % self.row_log_interval == 0:\n if self.track_grad_norm > 0:\n model = self.get_model()\n grad_norm_dic = model.grad_norm(self.track_grad_norm)\n\n # clip gradients\n self.clip_gradients()\n\n # calls .step(), .zero_grad()\n # override function to modify this behavior\n model = self.get_model()\n model.optimizer_step(self.current_epoch, batch_nb,\n optimizer, opt_idx, optimizer_closure)\n\n # calculate running loss for display\n self.running_loss.append(self.batch_loss_value)\n self.batch_loss_value = 0\n self.avg_loss = np.mean(self.running_loss[-100:])\n\n # update progress bar\n if self.show_progress_bar:\n # add model specific metrics\n tqdm_metrics = self.training_tqdm_dict\n self.progress_bar.set_postfix(**tqdm_metrics)\n\n # activate batch end hook\n if self.is_function_implemented('on_batch_end'):\n model = self.get_model()\n model.on_batch_end()\n\n # collapse all metrics into one dict\n all_log_metrics = {k: v for d in all_log_metrics for k, v in d.items()}\n\n # track all metrics for callbacks\n self.callback_metrics = {k: v for d in all_callback_metrics for k, v in d.items()}\n\n return 0, grad_norm_dic, all_log_metrics\n\n def training_forward(self, batch, batch_nb, opt_idx):\n \"\"\"\n Handle forward for each training case (distributed, single gpu, etc...)\n :param batch:\n :param batch_nb:\n :return:\n \"\"\"\n # ---------------\n # FORWARD\n # ---------------\n # enable not needing to add opt_idx to training_step\n args = [batch, batch_nb]\n if len(self.optimizers) > 1:\n args.append(opt_idx)\n\n if self.use_ddp or self.use_ddp2:\n output = self.model(*args)\n elif self.use_dp:\n output = self.model(*args)\n elif self.single_gpu:\n gpu_id = 0\n if type(self.data_parallel_device_ids) is list:\n gpu_id = self.data_parallel_device_ids[0]\n batch = self.transfer_batch_to_gpu(batch, gpu_id)\n args[0] = batch\n output = self.model.training_step(*args)\n\n else:\n output = self.model.training_step(*args)\n\n # format and reduce outputs accordingly\n output = self.process_output(output, train=True)\n loss, progress_bar_metrics, log_metrics, callback_metrics = output\n return loss, progress_bar_metrics, log_metrics, callback_metrics\n", "path": "pytorch_lightning/trainer/train_loop_mixin.py"}]} | 3,771 | 248 |
gh_patches_debug_11270 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2886 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
E2520 raised for mutually exclusive properties when using Conditions
### CloudFormation Lint Version
cfn-lint 0.80.2
### What operating system are you using?
Windows
### Describe the bug
[E2520](https://github.com/aws-cloudformation/cfn-lint/blob/main/docs/rules.md#E2520) is raised for mutually exclusive properties when using Conditions
```
cfn-lint -t ./template.yaml
E2520 Property SourceSecurityGroupId should NOT exist with CidrIp for Resources/Ingress/Properties
.\template.yaml:13:7
```
The same was working prior to `0.79.11`. PR [2875](https://github.com/aws-cloudformation/cfn-lint/pull/2875) seems to be the cause.
```
> cfn-lint --version
cfn-lint 0.79.10
> cfn-lint -t ./template.yaml
> echo $lastexitcode
0
```
### Expected behavior
E2520 is not raised for mutually exclusive properties that are guarded by the same Condition through the Fn::If intrinsic function, which ensures that only one of the properties has a value.
### Reproduction template
```yaml
AWSTemplateFormatVersion: 2010-09-09
Parameters:
pCidr:
Type: String
Default: ''
Conditions:
cIsCidr: !Not [!Equals [!Ref pCidr, '']]
Resources:
Ingress:
Type: AWS::EC2::SecurityGroupIngress
Properties:
SourceSecurityGroupId: !If [ cIsCidr, !Ref AWS::NoValue, sg-abc12345 ]
CidrIp: !If [ cIsCidr, !Ref pCidr, !Ref AWS::NoValue ]
IpProtocol: "-1"
GroupId: sg-abc1234567
```
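For illustration (a standalone sketch, not cfn-lint code): because both properties are guarded by the same condition and `!Ref AWS::NoValue` removes the unused one, the template only ever resolves to one of two property sets, so the mutually exclusive pair never appears together:
```python
# The two possible renderings of Ingress/Properties, depending on cIsCidr.
scenarios = {
    "cIsCidr=true":  {"CidrIp": "<pCidr>", "IpProtocol": "-1", "GroupId": "sg-abc1234567"},
    "cIsCidr=false": {"SourceSecurityGroupId": "sg-abc12345", "IpProtocol": "-1", "GroupId": "sg-abc1234567"},
}
for name, props in scenarios.items():
    # The exclusive pair is never present together in a single scenario.
    assert not {"CidrIp", "SourceSecurityGroupId"} <= props.keys(), name
```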
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/properties/Exclusive.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import cfnlint.helpers
6 from cfnlint.data import AdditionalSpecs
7 from cfnlint.rules import CloudFormationLintRule, RuleMatch
8
9
10 class Exclusive(CloudFormationLintRule):
11 """Check Properties Resource Configuration"""
12
13 id = "E2520"
14 shortdesc = "Check Properties that are mutually exclusive"
15 description = (
16 "Making sure CloudFormation properties that are exclusive are not defined"
17 )
18 source_url = "https://github.com/aws-cloudformation/cfn-python-lint"
19 tags = ["resources"]
20
21 def __init__(self):
22 """Init"""
23 super().__init__()
24 exclusivespec = cfnlint.helpers.load_resource(AdditionalSpecs, "Exclusive.json")
25 self.resource_types_specs = exclusivespec["ResourceTypes"]
26 self.property_types_specs = exclusivespec["PropertyTypes"]
27 for resource_type_spec in self.resource_types_specs:
28 self.resource_property_types.append(resource_type_spec)
29 for property_type_spec in self.property_types_specs:
30 self.resource_sub_property_types.append(property_type_spec)
31
32 def check(self, properties, exclusions, path, cfn):
33 """Check itself"""
34 matches = []
35 for p_value, p_path in properties.items_safe(path[:]):
36 for k, v in exclusions.items():
37 property_sets = cfn.get_object_without_conditions(p_value, [k] + v)
38 for property_set in property_sets:
39 obj = property_set["Object"].clean()
40 for prop in obj:
41 if prop in exclusions:
42 for excl_property in exclusions[prop]:
43 if excl_property in obj:
44 if property_set["Scenario"] is None:
45 message = "Property {0} should NOT exist with {1} for {2}"
46 matches.append(
47 RuleMatch(
48 p_path + [prop],
49 message.format(
50 excl_property,
51 prop,
52 "/".join(map(str, p_path)),
53 ),
54 )
55 )
56 else:
57 scenario_text = " and ".join(
58 [
59 f'when condition "{k}" is {v}'
60 for (k, v) in property_set[
61 "Scenario"
62 ].items()
63 ]
64 )
65 message = "Property {0} should NOT exist with {1} {2} for {3}"
66 matches.append(
67 RuleMatch(
68 p_path + [prop],
69 message.format(
70 excl_property,
71 prop,
72 scenario_text,
73 "/".join(map(str, p_path)),
74 ),
75 )
76 )
77
78 return matches
79
80 def match_resource_sub_properties(self, properties, property_type, path, cfn):
81 """Match for sub properties"""
82 matches = []
83
84 exclusions = self.property_types_specs.get(property_type, {})
85 matches.extend(self.check(properties, exclusions, path, cfn))
86
87 return matches
88
89 def match_resource_properties(self, properties, resource_type, path, cfn):
90 """Check CloudFormation Properties"""
91 matches = []
92
93 exclusions = self.resource_types_specs.get(resource_type, {})
94 matches.extend(self.check(properties, exclusions, path, cfn))
95
96 return matches
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/properties/Exclusive.py b/src/cfnlint/rules/resources/properties/Exclusive.py
--- a/src/cfnlint/rules/resources/properties/Exclusive.py
+++ b/src/cfnlint/rules/resources/properties/Exclusive.py
@@ -38,7 +38,7 @@
for property_set in property_sets:
obj = property_set["Object"].clean()
for prop in obj:
- if prop in exclusions:
+ if prop == k:
for excl_property in exclusions[prop]:
if excl_property in obj:
if property_set["Scenario"] is None:
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/properties/Exclusive.py b/src/cfnlint/rules/resources/properties/Exclusive.py\n--- a/src/cfnlint/rules/resources/properties/Exclusive.py\n+++ b/src/cfnlint/rules/resources/properties/Exclusive.py\n@@ -38,7 +38,7 @@\n for property_set in property_sets:\n obj = property_set[\"Object\"].clean()\n for prop in obj:\n- if prop in exclusions:\n+ if prop == k:\n for excl_property in exclusions[prop]:\n if excl_property in obj:\n if property_set[\"Scenario\"] is None:\n", "issue": "E2520 raised for mutually exclusive properties when using Conditions\n### CloudFormation Lint Version\n\ncfn-lint 0.80.2\n\n### What operating system are you using?\n\nWindows\n\n### Describe the bug\n\n[E2520](https://github.com/aws-cloudformation/cfn-lint/blob/main/docs/rules.md#E2520) is raised for mutually exclusive properties when using Conditions\r\n\r\n```\r\ncfn-lint -t ./template.yaml\r\nE2520 Property SourceSecurityGroupId should NOT exist with CidrIp for Resources/Ingress/Properties\r\n.\\template.yaml:13:7\r\n```\r\n\r\nThe same was working prior `0.79.11`. PR [2875](https://github.com/aws-cloudformation/cfn-lint/pull/2875) seems to be the cause.\r\n\r\n```\r\n> cfn-lint --version \r\ncfn-lint 0.79.10\r\n> cfn-lint -t ./template.yaml \r\n> echo $lastexitcode\r\n0\r\n```\n\n### Expected behavior\n\nE2520 is ignored for mutually exclusive properties that use the same Condition and Fn::If intrinsic function which makes sure only one of the properties has value.\n\n### Reproduction template\n\n```yaml\r\nAWSTemplateFormatVersion: 2010-09-09\r\nParameters:\r\n pCidr:\r\n Type: String\r\n Default: ''\r\nConditions:\r\n cIsCidr: !Not [!Equals [!Ref pCidr, '']]\r\nResources:\r\n Ingress:\r\n Type: AWS::EC2::SecurityGroupIngress\r\n Properties:\r\n SourceSecurityGroupId: !If [ cIsCidr, !Ref AWS::NoValue, sg-abc12345 ]\r\n CidrIp: !If [ cIsCidr, !Ref pCidr, !Ref AWS::NoValue ]\r\n IpProtocol: \"-1\"\r\n GroupId: sg-abc1234567\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport cfnlint.helpers\nfrom cfnlint.data import AdditionalSpecs\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass Exclusive(CloudFormationLintRule):\n \"\"\"Check Properties Resource Configuration\"\"\"\n\n id = \"E2520\"\n shortdesc = \"Check Properties that are mutually exclusive\"\n description = (\n \"Making sure CloudFormation properties that are exclusive are not defined\"\n )\n source_url = \"https://github.com/aws-cloudformation/cfn-python-lint\"\n tags = [\"resources\"]\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super().__init__()\n exclusivespec = cfnlint.helpers.load_resource(AdditionalSpecs, \"Exclusive.json\")\n self.resource_types_specs = exclusivespec[\"ResourceTypes\"]\n self.property_types_specs = exclusivespec[\"PropertyTypes\"]\n for resource_type_spec in self.resource_types_specs:\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in self.property_types_specs:\n self.resource_sub_property_types.append(property_type_spec)\n\n def check(self, properties, exclusions, path, cfn):\n \"\"\"Check itself\"\"\"\n matches = []\n for p_value, p_path in properties.items_safe(path[:]):\n for k, v in exclusions.items():\n property_sets = cfn.get_object_without_conditions(p_value, [k] + v)\n for property_set in property_sets:\n obj = property_set[\"Object\"].clean()\n for prop in obj:\n if prop in exclusions:\n for excl_property in exclusions[prop]:\n if excl_property in obj:\n if property_set[\"Scenario\"] is None:\n message = \"Property {0} should NOT exist with {1} for {2}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n else:\n scenario_text = \" and \".join(\n [\n f'when condition \"{k}\" is {v}'\n for (k, v) in property_set[\n \"Scenario\"\n ].items()\n ]\n )\n message = \"Property {0} should NOT exist with {1} {2} for {3}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n scenario_text,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = []\n\n exclusions = self.property_types_specs.get(property_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n exclusions = self.resource_types_specs.get(resource_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/Exclusive.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport cfnlint.helpers\nfrom cfnlint.data import AdditionalSpecs\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass Exclusive(CloudFormationLintRule):\n \"\"\"Check Properties Resource Configuration\"\"\"\n\n id = \"E2520\"\n shortdesc = \"Check Properties that are mutually exclusive\"\n description = (\n \"Making sure CloudFormation properties that are exclusive are not defined\"\n )\n source_url = \"https://github.com/aws-cloudformation/cfn-python-lint\"\n tags = [\"resources\"]\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super().__init__()\n exclusivespec = cfnlint.helpers.load_resource(AdditionalSpecs, \"Exclusive.json\")\n self.resource_types_specs = exclusivespec[\"ResourceTypes\"]\n self.property_types_specs = exclusivespec[\"PropertyTypes\"]\n for resource_type_spec in self.resource_types_specs:\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in self.property_types_specs:\n self.resource_sub_property_types.append(property_type_spec)\n\n def check(self, properties, exclusions, path, cfn):\n \"\"\"Check itself\"\"\"\n matches = []\n for p_value, p_path in properties.items_safe(path[:]):\n for k, v in exclusions.items():\n property_sets = cfn.get_object_without_conditions(p_value, [k] + v)\n for property_set in property_sets:\n obj = property_set[\"Object\"].clean()\n for prop in obj:\n if prop == k:\n for excl_property in exclusions[prop]:\n if excl_property in obj:\n if property_set[\"Scenario\"] is None:\n message = \"Property {0} should NOT exist with {1} for {2}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n else:\n scenario_text = \" and \".join(\n [\n f'when condition \"{k}\" is {v}'\n for (k, v) in property_set[\n \"Scenario\"\n ].items()\n ]\n )\n message = \"Property {0} should NOT exist with {1} {2} for {3}\"\n matches.append(\n RuleMatch(\n p_path + [prop],\n message.format(\n excl_property,\n prop,\n scenario_text,\n \"/\".join(map(str, p_path)),\n ),\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = []\n\n exclusions = self.property_types_specs.get(property_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n exclusions = self.resource_types_specs.get(resource_type, {})\n matches.extend(self.check(properties, exclusions, path, cfn))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/Exclusive.py"}]} | 1,566 | 133 |
gh_patches_debug_7736 | rasdani/github-patches | git_diff | google__flax-2492 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve documentation for `Dropout` and `rngs` argument in `linen.Module.apply()`
Here is an example of `Dropout` in a model definition:
https://github.com/google/flax/blob/d068512a932da3e05b822790a591bac391aeab36/examples/nlp_seq/models.py#L211
Here is the `apply()`, where `rngs` is passed in
https://github.com/google/flax/blob/d068512a932da3e05b822790a591bac391aeab36/examples/nlp_seq/train.py#L206-L207
However, the `rngs` argument is not very clearly explained in `apply()`:
https://github.com/google/flax/blob/615f40be774e7ed66fd344e8291ac0d48ebcef7d/flax/linen/module.py#L749
The `rngs` seems to be passed to `flax/core/scope.py`
Here is the code for `Dropout` (linen)
https://github.com/google/flax/blob/9b4807840c5cb26ef5e29028e3558d404aee00a0/flax/linen/stochastic.py#L56-L57
Here is the code for `make_rng()`
https://github.com/google/flax/blob/615f40be774e7ed66fd344e8291ac0d48ebcef7d/flax/core/scope.py#L441-L447
The documentation for `rngs` in `apply()` should include (or point to) a list of the possible rng names.
The documentation for `Dropout` should also mention how to pass in the rng via `apply()`, rather than passing it in directly like `Dropout()(x, rng=rng)`.
It should probably also mention that `make_rng()` folds the rng in, so each dropout layer uses a different rng when there are multiple dropout layers.
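For example, a minimal self-contained sketch (the module, shapes, and seeds here are invented for illustration) of how the `'dropout'` rng is supplied through `apply()`:
```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    @nn.compact
    def __call__(self, x, train: bool):
        x = nn.Dense(4)(x)
        # Dropout pulls its rng via self.make_rng('dropout') when not deterministic.
        return nn.Dropout(rate=0.5, deterministic=not train)(x)

x = jnp.ones((1, 3))
model = MLP()
variables = model.init(jax.random.PRNGKey(0), x, train=False)
# The 'dropout' entry in `rngs` is what make_rng('dropout') consumes; each
# Dropout layer gets a distinct stream because the rng is folded in per scope.
y = model.apply(variables, x, train=True, rngs={'dropout': jax.random.PRNGKey(1)})
```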
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flax/linen/stochastic.py`
Content:
```
1 # Copyright 2022 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Stochastic modules."""
16
17 from typing import Optional, Sequence
18
19 from flax.linen.module import compact
20 from flax.linen.module import merge_param
21 from flax.linen.module import Module
22 from jax import lax
23 from jax import random
24 import jax.numpy as jnp
25
26
27 class Dropout(Module):
28 """Create a dropout layer.
29
30 Attributes:
31 rate: the dropout probability. (_not_ the keep rate!)
32 broadcast_dims: dimensions that will share the same dropout mask
33 deterministic: if false the inputs are scaled by `1 / (1 - rate)` and
34 masked, whereas if true, no mask is applied and the inputs are returned
35 as is.
36 """
37 rate: float
38 broadcast_dims: Sequence[int] = ()
39 deterministic: Optional[bool] = None
40
41 @compact
42 def __call__(self, inputs, deterministic: Optional[bool] = None):
43 """Applies a random dropout mask to the input.
44
45 Args:
46 inputs: the inputs that should be randomly masked.
47 deterministic: if false the inputs are scaled by `1 / (1 - rate)` and
48 masked, whereas if true, no mask is applied and the inputs are returned
49 as is.
50
51 Returns:
52 The masked inputs reweighted to preserve mean.
53 """
54 deterministic = merge_param(
55 'deterministic', self.deterministic, deterministic)
56 if self.rate == 0.:
57 return inputs
58 # Prevent gradient NaNs in 1.0 edge-case.
59 if self.rate == 1.0:
60 return jnp.zeros_like(inputs)
61 keep_prob = 1. - self.rate
62 if deterministic:
63 return inputs
64 else:
65 rng = self.make_rng('dropout')
66 broadcast_shape = list(inputs.shape)
67 for dim in self.broadcast_dims:
68 broadcast_shape[dim] = 1
69 mask = random.bernoulli(rng, p=keep_prob, shape=broadcast_shape)
70 mask = jnp.broadcast_to(mask, inputs.shape)
71 return lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs))
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flax/linen/stochastic.py b/flax/linen/stochastic.py
--- a/flax/linen/stochastic.py
+++ b/flax/linen/stochastic.py
@@ -27,6 +27,11 @@
class Dropout(Module):
"""Create a dropout layer.
+ Note: When using :meth:`Module.apply() <flax.linen.Module.apply>`, make sure
+ to include an RNG seed named `'dropout'`. For example::
+
+ model.apply({'params': params}, inputs=inputs, train=True, rngs={'dropout': dropout_rng})`
+
Attributes:
rate: the dropout probability. (_not_ the keep rate!)
broadcast_dims: dimensions that will share the same dropout mask
| {"golden_diff": "diff --git a/flax/linen/stochastic.py b/flax/linen/stochastic.py\n--- a/flax/linen/stochastic.py\n+++ b/flax/linen/stochastic.py\n@@ -27,6 +27,11 @@\n class Dropout(Module):\n \"\"\"Create a dropout layer.\n \n+ Note: When using :meth:`Module.apply() <flax.linen.Module.apply>`, make sure\n+ to include an RNG seed named `'dropout'`. For example::\n+ \n+ model.apply({'params': params}, inputs=inputs, train=True, rngs={'dropout': dropout_rng})`\n+\n Attributes:\n rate: the dropout probability. (_not_ the keep rate!)\n broadcast_dims: dimensions that will share the same dropout mask\n", "issue": "Improve documentation for `Dropout` and `rngs` argument in `linen.Module.apply()`\n\r\nHere is an example of `Dropout` in a model definition:\r\nhttps://github.com/google/flax/blob/d068512a932da3e05b822790a591bac391aeab36/examples/nlp_seq/models.py#L211\r\n\r\nHere is the `apply()`, where `rngs` is passed in\r\nhttps://github.com/google/flax/blob/d068512a932da3e05b822790a591bac391aeab36/examples/nlp_seq/train.py#L206-L207\r\nHowever the `rng` is not very clearly explained in `apply()`\r\nhttps://github.com/google/flax/blob/615f40be774e7ed66fd344e8291ac0d48ebcef7d/flax/linen/module.py#L749\r\nThe `rngs` seems to be passed to `flax/core/scope.py`\r\nHere is the code for `Dropout` (linen)\r\nhttps://github.com/google/flax/blob/9b4807840c5cb26ef5e29028e3558d404aee00a0/flax/linen/stochastic.py#L56-L57\r\nHere is the code for `make_rng()`\r\nhttps://github.com/google/flax/blob/615f40be774e7ed66fd344e8291ac0d48ebcef7d/flax/core/scope.py#L441-L447\r\n\r\nThe documentation for `rngs` in `apply()` should have a (pointer to) list of names of possible rngs\r\nAnd documentation for `Dropout` should mention how to pass in rng using `apply()`, without directly passing in like `Dropout()(x,rng=rng)`.\r\nAlso probably need to mention the `make_rng()` `fold_in` the rng so each dropout layer will use different rng if there are multiple dropout layers.\n", "before_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Stochastic modules.\"\"\"\n\nfrom typing import Optional, Sequence\n\nfrom flax.linen.module import compact\nfrom flax.linen.module import merge_param\nfrom flax.linen.module import Module\nfrom jax import lax\nfrom jax import random\nimport jax.numpy as jnp\n\n\nclass Dropout(Module):\n \"\"\"Create a dropout layer.\n\n Attributes:\n rate: the dropout probability. 
(_not_ the keep rate!)\n broadcast_dims: dimensions that will share the same dropout mask\n deterministic: if false the inputs are scaled by `1 / (1 - rate)` and\n masked, whereas if true, no mask is applied and the inputs are returned\n as is.\n \"\"\"\n rate: float\n broadcast_dims: Sequence[int] = ()\n deterministic: Optional[bool] = None\n\n @compact\n def __call__(self, inputs, deterministic: Optional[bool] = None):\n \"\"\"Applies a random dropout mask to the input.\n\n Args:\n inputs: the inputs that should be randomly masked.\n deterministic: if false the inputs are scaled by `1 / (1 - rate)` and\n masked, whereas if true, no mask is applied and the inputs are returned\n as is.\n\n Returns:\n The masked inputs reweighted to preserve mean.\n \"\"\"\n deterministic = merge_param(\n 'deterministic', self.deterministic, deterministic)\n if self.rate == 0.:\n return inputs\n # Prevent gradient NaNs in 1.0 edge-case.\n if self.rate == 1.0:\n return jnp.zeros_like(inputs)\n keep_prob = 1. - self.rate\n if deterministic:\n return inputs\n else:\n rng = self.make_rng('dropout')\n broadcast_shape = list(inputs.shape)\n for dim in self.broadcast_dims:\n broadcast_shape[dim] = 1\n mask = random.bernoulli(rng, p=keep_prob, shape=broadcast_shape)\n mask = jnp.broadcast_to(mask, inputs.shape)\n return lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs))\n", "path": "flax/linen/stochastic.py"}], "after_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Stochastic modules.\"\"\"\n\nfrom typing import Optional, Sequence\n\nfrom flax.linen.module import compact\nfrom flax.linen.module import merge_param\nfrom flax.linen.module import Module\nfrom jax import lax\nfrom jax import random\nimport jax.numpy as jnp\n\n\nclass Dropout(Module):\n \"\"\"Create a dropout layer.\n\n Note: When using :meth:`Module.apply() <flax.linen.Module.apply>`, make sure\n to include an RNG seed named `'dropout'`. For example::\n \n model.apply({'params': params}, inputs=inputs, train=True, rngs={'dropout': dropout_rng})`\n\n Attributes:\n rate: the dropout probability. 
(_not_ the keep rate!)\n broadcast_dims: dimensions that will share the same dropout mask\n deterministic: if false the inputs are scaled by `1 / (1 - rate)` and\n masked, whereas if true, no mask is applied and the inputs are returned\n as is.\n \"\"\"\n rate: float\n broadcast_dims: Sequence[int] = ()\n deterministic: Optional[bool] = None\n\n @compact\n def __call__(self, inputs, deterministic: Optional[bool] = None):\n \"\"\"Applies a random dropout mask to the input.\n\n Args:\n inputs: the inputs that should be randomly masked.\n deterministic: if false the inputs are scaled by `1 / (1 - rate)` and\n masked, whereas if true, no mask is applied and the inputs are returned\n as is.\n\n Returns:\n The masked inputs reweighted to preserve mean.\n \"\"\"\n deterministic = merge_param(\n 'deterministic', self.deterministic, deterministic)\n if self.rate == 0.:\n return inputs\n # Prevent gradient NaNs in 1.0 edge-case.\n if self.rate == 1.0:\n return jnp.zeros_like(inputs)\n keep_prob = 1. - self.rate\n if deterministic:\n return inputs\n else:\n rng = self.make_rng('dropout')\n broadcast_shape = list(inputs.shape)\n for dim in self.broadcast_dims:\n broadcast_shape[dim] = 1\n mask = random.bernoulli(rng, p=keep_prob, shape=broadcast_shape)\n mask = jnp.broadcast_to(mask, inputs.shape)\n return lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs))\n", "path": "flax/linen/stochastic.py"}]} | 1,471 | 167 |
gh_patches_debug_7598 | rasdani/github-patches | git_diff | onnx__sklearn-onnx-525 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Conversion of HistGradientBoosting fails with scikit-learn 0.24
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skl2onnx/common/tree_ensemble.py`
Content:
```
1 # -------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License. See License.txt in the project root for
4 # license information.
5 # --------------------------------------------------------------------------
6 """
7 Common functions to convert any learner based on trees.
8 """
9 import numpy as np
10
11
12 def get_default_tree_classifier_attribute_pairs():
13 attrs = {}
14 attrs['post_transform'] = 'NONE'
15 attrs['nodes_treeids'] = []
16 attrs['nodes_nodeids'] = []
17 attrs['nodes_featureids'] = []
18 attrs['nodes_modes'] = []
19 attrs['nodes_values'] = []
20 attrs['nodes_truenodeids'] = []
21 attrs['nodes_falsenodeids'] = []
22 attrs['nodes_missing_value_tracks_true'] = []
23 attrs['nodes_hitrates'] = []
24 attrs['class_treeids'] = []
25 attrs['class_nodeids'] = []
26 attrs['class_ids'] = []
27 attrs['class_weights'] = []
28 return attrs
29
30
31 def get_default_tree_regressor_attribute_pairs():
32 attrs = {}
33 attrs['post_transform'] = 'NONE'
34 attrs['n_targets'] = 0
35 attrs['nodes_treeids'] = []
36 attrs['nodes_nodeids'] = []
37 attrs['nodes_featureids'] = []
38 attrs['nodes_modes'] = []
39 attrs['nodes_values'] = []
40 attrs['nodes_truenodeids'] = []
41 attrs['nodes_falsenodeids'] = []
42 attrs['nodes_missing_value_tracks_true'] = []
43 attrs['nodes_hitrates'] = []
44 attrs['target_treeids'] = []
45 attrs['target_nodeids'] = []
46 attrs['target_ids'] = []
47 attrs['target_weights'] = []
48 return attrs
49
50
51 def find_switch_point(fy, nfy):
52 """
53 Finds the double so that
54     ``(float)x != (float)(x + epsilon)``.
55 """
56 a = np.float64(fy)
57 b = np.float64(nfy)
58 fa = np.float32(a)
59 a0, b0 = a, a
60 while a != a0 or b != b0:
61 a0, b0 = a, b
62 m = (a + b) / 2
63 fm = np.float32(m)
64 if fm == fa:
65 a = m
66 fa = fm
67 else:
68 b = m
69 return a
70
71
72 def sklearn_threshold(dy, dtype, mode):
73 """
74 *scikit-learn* does not compare x to a threshold
75 but (float)x to a double threshold. As we need a float
76 threshold, we need a different value than the threshold
77 rounded to float. For floats, it finds float *w* which
78 verifies::
79
80 (float)x <= y <=> (float)x <= w
81
82 For doubles, it finds double *w* which verifies::
83
84 (float)x <= y <=> x <= w
85 """
86 if mode == "BRANCH_LEQ":
87 if dtype == np.float32:
88 fy = np.float32(dy)
89 if fy == dy:
90 return np.float64(fy)
91 if fy < dy:
92 return np.float64(fy)
93 eps = max(abs(fy), np.finfo(np.float32).eps) * 10
94 nfy = np.nextafter([fy], [fy - eps], dtype=np.float32)[0]
95 return np.float64(nfy)
96 elif dtype == np.float64:
97 fy = np.float32(dy)
98 eps = max(abs(fy), np.finfo(np.float32).eps) * 10
99 afy = np.nextafter([fy], [fy - eps], dtype=np.float32)[0]
100 afy2 = find_switch_point(afy, fy)
101 if fy > dy > afy2:
102 return afy2
103 bfy = np.nextafter([fy], [fy + eps], dtype=np.float32)[0]
104 bfy2 = find_switch_point(fy, bfy)
105 if fy <= dy <= bfy2:
106 return bfy2
107 return np.float64(fy)
108 raise TypeError("Unexpected dtype {}.".format(dtype))
109 raise RuntimeError("Threshold is not changed for other mode and "
110 "'BRANCH_LEQ' (actually '{}').".format(mode))
111
112
113 def add_node(attr_pairs, is_classifier, tree_id, tree_weight, node_id,
114 feature_id, mode, value, true_child_id, false_child_id,
115 weights, weight_id_bias, leaf_weights_are_counts,
116 adjust_threshold_for_sklearn, dtype,
117 nodes_missing_value_tracks_true=False):
118 attr_pairs['nodes_treeids'].append(tree_id)
119 attr_pairs['nodes_nodeids'].append(node_id)
120 attr_pairs['nodes_featureids'].append(feature_id)
121 attr_pairs['nodes_modes'].append(mode)
122 if adjust_threshold_for_sklearn and mode != 'LEAF':
123 attr_pairs['nodes_values'].append(
124 sklearn_threshold(value, dtype, mode))
125 else:
126 attr_pairs['nodes_values'].append(value)
127 attr_pairs['nodes_truenodeids'].append(true_child_id)
128 attr_pairs['nodes_falsenodeids'].append(false_child_id)
129 attr_pairs['nodes_missing_value_tracks_true'].append(
130 nodes_missing_value_tracks_true)
131 attr_pairs['nodes_hitrates'].append(1.)
132
133 # Add leaf information for making prediction
134 if mode == 'LEAF':
135 flattened_weights = weights.flatten()
136 factor = tree_weight
137 # If the values stored at leaves are counts of possible classes, we
138 # need convert them to probabilities by doing a normalization.
139 if leaf_weights_are_counts:
140 s = sum(flattened_weights)
141 factor /= float(s) if s != 0. else 1.
142 flattened_weights = [w * factor for w in flattened_weights]
143 if len(flattened_weights) == 2 and is_classifier:
144 flattened_weights = [flattened_weights[1]]
145
146 # Note that attribute names for making prediction are different for
147 # classifiers and regressors
148 if is_classifier:
149 for i, w in enumerate(flattened_weights):
150 attr_pairs['class_treeids'].append(tree_id)
151 attr_pairs['class_nodeids'].append(node_id)
152 attr_pairs['class_ids'].append(i + weight_id_bias)
153 attr_pairs['class_weights'].append(w)
154 else:
155 for i, w in enumerate(flattened_weights):
156 attr_pairs['target_treeids'].append(tree_id)
157 attr_pairs['target_nodeids'].append(node_id)
158 attr_pairs['target_ids'].append(i + weight_id_bias)
159 attr_pairs['target_weights'].append(w)
160
161
162 def add_tree_to_attribute_pairs(attr_pairs, is_classifier, tree, tree_id,
163 tree_weight, weight_id_bias,
164 leaf_weights_are_counts,
165 adjust_threshold_for_sklearn=False,
166 dtype=None):
167 for i in range(tree.node_count):
168 node_id = i
169 weight = tree.value[i]
170
171 if tree.children_left[i] > i or tree.children_right[i] > i:
172 mode = 'BRANCH_LEQ'
173 feat_id = tree.feature[i]
174 threshold = tree.threshold[i]
175 left_child_id = int(tree.children_left[i])
176 right_child_id = int(tree.children_right[i])
177 else:
178 mode = 'LEAF'
179 feat_id = 0
180 threshold = 0.
181 left_child_id = 0
182 right_child_id = 0
183
184 add_node(attr_pairs, is_classifier, tree_id, tree_weight, node_id,
185 feat_id, mode, threshold, left_child_id, right_child_id,
186 weight, weight_id_bias, leaf_weights_are_counts,
187 adjust_threshold_for_sklearn=adjust_threshold_for_sklearn,
188 dtype=dtype)
189
190
191 def add_tree_to_attribute_pairs_hist_gradient_boosting(
192 attr_pairs, is_classifier, tree, tree_id,
193 tree_weight, weight_id_bias,
194 leaf_weights_are_counts,
195 adjust_threshold_for_sklearn=False,
196 dtype=None):
197 for i, node in enumerate(tree.nodes):
198 node_id = i
199 weight = node['value']
200
201 if node['is_leaf']:
202 mode = 'LEAF'
203 feat_id = 0
204 threshold = 0.
205 left_child_id = 0
206 right_child_id = 0
207 missing = False
208 else:
209 mode = 'BRANCH_LEQ'
210 feat_id = node['feature_idx']
211 threshold = node['threshold']
212 left_child_id = node['left']
213 right_child_id = node['right']
214 missing = node['missing_go_to_left']
215
216 add_node(attr_pairs, is_classifier, tree_id, tree_weight, node_id,
217 feat_id, mode, threshold, left_child_id, right_child_id,
218 weight, weight_id_bias, leaf_weights_are_counts,
219 adjust_threshold_for_sklearn=adjust_threshold_for_sklearn,
220 dtype=dtype, nodes_missing_value_tracks_true=missing)
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skl2onnx/common/tree_ensemble.py b/skl2onnx/common/tree_ensemble.py
--- a/skl2onnx/common/tree_ensemble.py
+++ b/skl2onnx/common/tree_ensemble.py
@@ -208,7 +208,10 @@
else:
mode = 'BRANCH_LEQ'
feat_id = node['feature_idx']
- threshold = node['threshold']
+ try:
+ threshold = node['threshold']
+ except ValueError:
+ threshold = node['num_threshold']
left_child_id = node['left']
right_child_id = node['right']
missing = node['missing_go_to_left']
| {"golden_diff": "diff --git a/skl2onnx/common/tree_ensemble.py b/skl2onnx/common/tree_ensemble.py\n--- a/skl2onnx/common/tree_ensemble.py\n+++ b/skl2onnx/common/tree_ensemble.py\n@@ -208,7 +208,10 @@\n else:\n mode = 'BRANCH_LEQ'\n feat_id = node['feature_idx']\n- threshold = node['threshold']\n+ try:\n+ threshold = node['threshold']\n+ except ValueError:\n+ threshold = node['num_threshold']\n left_child_id = node['left']\n right_child_id = node['right']\n missing = node['missing_go_to_left']\n", "issue": "Conversion of HistGradientBoosting fails with scikit-learn 0.24\n\n", "before_files": [{"content": "# -------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for\n# license information.\n# --------------------------------------------------------------------------\n\"\"\"\nCommon functions to convert any learner based on trees.\n\"\"\"\nimport numpy as np\n\n\ndef get_default_tree_classifier_attribute_pairs():\n attrs = {}\n attrs['post_transform'] = 'NONE'\n attrs['nodes_treeids'] = []\n attrs['nodes_nodeids'] = []\n attrs['nodes_featureids'] = []\n attrs['nodes_modes'] = []\n attrs['nodes_values'] = []\n attrs['nodes_truenodeids'] = []\n attrs['nodes_falsenodeids'] = []\n attrs['nodes_missing_value_tracks_true'] = []\n attrs['nodes_hitrates'] = []\n attrs['class_treeids'] = []\n attrs['class_nodeids'] = []\n attrs['class_ids'] = []\n attrs['class_weights'] = []\n return attrs\n\n\ndef get_default_tree_regressor_attribute_pairs():\n attrs = {}\n attrs['post_transform'] = 'NONE'\n attrs['n_targets'] = 0\n attrs['nodes_treeids'] = []\n attrs['nodes_nodeids'] = []\n attrs['nodes_featureids'] = []\n attrs['nodes_modes'] = []\n attrs['nodes_values'] = []\n attrs['nodes_truenodeids'] = []\n attrs['nodes_falsenodeids'] = []\n attrs['nodes_missing_value_tracks_true'] = []\n attrs['nodes_hitrates'] = []\n attrs['target_treeids'] = []\n attrs['target_nodeids'] = []\n attrs['target_ids'] = []\n attrs['target_weights'] = []\n return attrs\n\n\ndef find_switch_point(fy, nfy):\n \"\"\"\n Finds the double so that\n ``(float)x != (float)(x + espilon)``.\n \"\"\"\n a = np.float64(fy)\n b = np.float64(nfy)\n fa = np.float32(a)\n a0, b0 = a, a\n while a != a0 or b != b0:\n a0, b0 = a, b\n m = (a + b) / 2\n fm = np.float32(m)\n if fm == fa:\n a = m\n fa = fm\n else:\n b = m\n return a\n\n\ndef sklearn_threshold(dy, dtype, mode):\n \"\"\"\n *scikit-learn* does not compare x to a threshold\n but (float)x to a double threshold. As we need a float\n threshold, we need a different value than the threshold\n rounded to float. 
For floats, it finds float *w* which\n verifies::\n\n (float)x <= y <=> (float)x <= w\n\n For doubles, it finds double *w* which verifies::\n\n (float)x <= y <=> x <= w\n \"\"\"\n if mode == \"BRANCH_LEQ\":\n if dtype == np.float32:\n fy = np.float32(dy)\n if fy == dy:\n return np.float64(fy)\n if fy < dy:\n return np.float64(fy)\n eps = max(abs(fy), np.finfo(np.float32).eps) * 10\n nfy = np.nextafter([fy], [fy - eps], dtype=np.float32)[0]\n return np.float64(nfy)\n elif dtype == np.float64:\n fy = np.float32(dy)\n eps = max(abs(fy), np.finfo(np.float32).eps) * 10\n afy = np.nextafter([fy], [fy - eps], dtype=np.float32)[0]\n afy2 = find_switch_point(afy, fy)\n if fy > dy > afy2:\n return afy2\n bfy = np.nextafter([fy], [fy + eps], dtype=np.float32)[0]\n bfy2 = find_switch_point(fy, bfy)\n if fy <= dy <= bfy2:\n return bfy2\n return np.float64(fy)\n raise TypeError(\"Unexpected dtype {}.\".format(dtype))\n raise RuntimeError(\"Threshold is not changed for other mode and \"\n \"'BRANCH_LEQ' (actually '{}').\".format(mode))\n\n\ndef add_node(attr_pairs, is_classifier, tree_id, tree_weight, node_id,\n feature_id, mode, value, true_child_id, false_child_id,\n weights, weight_id_bias, leaf_weights_are_counts,\n adjust_threshold_for_sklearn, dtype,\n nodes_missing_value_tracks_true=False):\n attr_pairs['nodes_treeids'].append(tree_id)\n attr_pairs['nodes_nodeids'].append(node_id)\n attr_pairs['nodes_featureids'].append(feature_id)\n attr_pairs['nodes_modes'].append(mode)\n if adjust_threshold_for_sklearn and mode != 'LEAF':\n attr_pairs['nodes_values'].append(\n sklearn_threshold(value, dtype, mode))\n else:\n attr_pairs['nodes_values'].append(value)\n attr_pairs['nodes_truenodeids'].append(true_child_id)\n attr_pairs['nodes_falsenodeids'].append(false_child_id)\n attr_pairs['nodes_missing_value_tracks_true'].append(\n nodes_missing_value_tracks_true)\n attr_pairs['nodes_hitrates'].append(1.)\n\n # Add leaf information for making prediction\n if mode == 'LEAF':\n flattened_weights = weights.flatten()\n factor = tree_weight\n # If the values stored at leaves are counts of possible classes, we\n # need convert them to probabilities by doing a normalization.\n if leaf_weights_are_counts:\n s = sum(flattened_weights)\n factor /= float(s) if s != 0. 
else 1.\n flattened_weights = [w * factor for w in flattened_weights]\n if len(flattened_weights) == 2 and is_classifier:\n flattened_weights = [flattened_weights[1]]\n\n # Note that attribute names for making prediction are different for\n # classifiers and regressors\n if is_classifier:\n for i, w in enumerate(flattened_weights):\n attr_pairs['class_treeids'].append(tree_id)\n attr_pairs['class_nodeids'].append(node_id)\n attr_pairs['class_ids'].append(i + weight_id_bias)\n attr_pairs['class_weights'].append(w)\n else:\n for i, w in enumerate(flattened_weights):\n attr_pairs['target_treeids'].append(tree_id)\n attr_pairs['target_nodeids'].append(node_id)\n attr_pairs['target_ids'].append(i + weight_id_bias)\n attr_pairs['target_weights'].append(w)\n\n\ndef add_tree_to_attribute_pairs(attr_pairs, is_classifier, tree, tree_id,\n tree_weight, weight_id_bias,\n leaf_weights_are_counts,\n adjust_threshold_for_sklearn=False,\n dtype=None):\n for i in range(tree.node_count):\n node_id = i\n weight = tree.value[i]\n\n if tree.children_left[i] > i or tree.children_right[i] > i:\n mode = 'BRANCH_LEQ'\n feat_id = tree.feature[i]\n threshold = tree.threshold[i]\n left_child_id = int(tree.children_left[i])\n right_child_id = int(tree.children_right[i])\n else:\n mode = 'LEAF'\n feat_id = 0\n threshold = 0.\n left_child_id = 0\n right_child_id = 0\n\n add_node(attr_pairs, is_classifier, tree_id, tree_weight, node_id,\n feat_id, mode, threshold, left_child_id, right_child_id,\n weight, weight_id_bias, leaf_weights_are_counts,\n adjust_threshold_for_sklearn=adjust_threshold_for_sklearn,\n dtype=dtype)\n\n\ndef add_tree_to_attribute_pairs_hist_gradient_boosting(\n attr_pairs, is_classifier, tree, tree_id,\n tree_weight, weight_id_bias,\n leaf_weights_are_counts,\n adjust_threshold_for_sklearn=False,\n dtype=None):\n for i, node in enumerate(tree.nodes):\n node_id = i\n weight = node['value']\n\n if node['is_leaf']:\n mode = 'LEAF'\n feat_id = 0\n threshold = 0.\n left_child_id = 0\n right_child_id = 0\n missing = False\n else:\n mode = 'BRANCH_LEQ'\n feat_id = node['feature_idx']\n threshold = node['threshold']\n left_child_id = node['left']\n right_child_id = node['right']\n missing = node['missing_go_to_left']\n\n add_node(attr_pairs, is_classifier, tree_id, tree_weight, node_id,\n feat_id, mode, threshold, left_child_id, right_child_id,\n weight, weight_id_bias, leaf_weights_are_counts,\n adjust_threshold_for_sklearn=adjust_threshold_for_sklearn,\n dtype=dtype, nodes_missing_value_tracks_true=missing)\n", "path": "skl2onnx/common/tree_ensemble.py"}], "after_files": [{"content": "# -------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for\n# license information.\n# --------------------------------------------------------------------------\n\"\"\"\nCommon functions to convert any learner based on trees.\n\"\"\"\nimport numpy as np\n\n\ndef get_default_tree_classifier_attribute_pairs():\n attrs = {}\n attrs['post_transform'] = 'NONE'\n attrs['nodes_treeids'] = []\n attrs['nodes_nodeids'] = []\n attrs['nodes_featureids'] = []\n attrs['nodes_modes'] = []\n attrs['nodes_values'] = []\n attrs['nodes_truenodeids'] = []\n attrs['nodes_falsenodeids'] = []\n attrs['nodes_missing_value_tracks_true'] = []\n attrs['nodes_hitrates'] = []\n attrs['class_treeids'] = []\n attrs['class_nodeids'] = []\n attrs['class_ids'] = []\n attrs['class_weights'] = []\n return attrs\n\n\ndef get_default_tree_regressor_attribute_pairs():\n attrs = {}\n attrs['post_transform'] = 'NONE'\n attrs['n_targets'] = 0\n attrs['nodes_treeids'] = []\n attrs['nodes_nodeids'] = []\n attrs['nodes_featureids'] = []\n attrs['nodes_modes'] = []\n attrs['nodes_values'] = []\n attrs['nodes_truenodeids'] = []\n attrs['nodes_falsenodeids'] = []\n attrs['nodes_missing_value_tracks_true'] = []\n attrs['nodes_hitrates'] = []\n attrs['target_treeids'] = []\n attrs['target_nodeids'] = []\n attrs['target_ids'] = []\n attrs['target_weights'] = []\n return attrs\n\n\ndef find_switch_point(fy, nfy):\n \"\"\"\n Finds the double so that\n ``(float)x != (float)(x + espilon)``.\n \"\"\"\n a = np.float64(fy)\n b = np.float64(nfy)\n fa = np.float32(a)\n a0, b0 = a, a\n while a != a0 or b != b0:\n a0, b0 = a, b\n m = (a + b) / 2\n fm = np.float32(m)\n if fm == fa:\n a = m\n fa = fm\n else:\n b = m\n return a\n\n\ndef sklearn_threshold(dy, dtype, mode):\n \"\"\"\n *scikit-learn* does not compare x to a threshold\n but (float)x to a double threshold. As we need a float\n threshold, we need a different value than the threshold\n rounded to float. 
For floats, it finds float *w* which\n verifies::\n\n (float)x <= y <=> (float)x <= w\n\n For doubles, it finds double *w* which verifies::\n\n (float)x <= y <=> x <= w\n \"\"\"\n if mode == \"BRANCH_LEQ\":\n if dtype == np.float32:\n fy = np.float32(dy)\n if fy == dy:\n return np.float64(fy)\n if fy < dy:\n return np.float64(fy)\n eps = max(abs(fy), np.finfo(np.float32).eps) * 10\n nfy = np.nextafter([fy], [fy - eps], dtype=np.float32)[0]\n return np.float64(nfy)\n elif dtype == np.float64:\n fy = np.float32(dy)\n eps = max(abs(fy), np.finfo(np.float32).eps) * 10\n afy = np.nextafter([fy], [fy - eps], dtype=np.float32)[0]\n afy2 = find_switch_point(afy, fy)\n if fy > dy > afy2:\n return afy2\n bfy = np.nextafter([fy], [fy + eps], dtype=np.float32)[0]\n bfy2 = find_switch_point(fy, bfy)\n if fy <= dy <= bfy2:\n return bfy2\n return np.float64(fy)\n raise TypeError(\"Unexpected dtype {}.\".format(dtype))\n raise RuntimeError(\"Threshold is not changed for other mode and \"\n \"'BRANCH_LEQ' (actually '{}').\".format(mode))\n\n\ndef add_node(attr_pairs, is_classifier, tree_id, tree_weight, node_id,\n feature_id, mode, value, true_child_id, false_child_id,\n weights, weight_id_bias, leaf_weights_are_counts,\n adjust_threshold_for_sklearn, dtype,\n nodes_missing_value_tracks_true=False):\n attr_pairs['nodes_treeids'].append(tree_id)\n attr_pairs['nodes_nodeids'].append(node_id)\n attr_pairs['nodes_featureids'].append(feature_id)\n attr_pairs['nodes_modes'].append(mode)\n if adjust_threshold_for_sklearn and mode != 'LEAF':\n attr_pairs['nodes_values'].append(\n sklearn_threshold(value, dtype, mode))\n else:\n attr_pairs['nodes_values'].append(value)\n attr_pairs['nodes_truenodeids'].append(true_child_id)\n attr_pairs['nodes_falsenodeids'].append(false_child_id)\n attr_pairs['nodes_missing_value_tracks_true'].append(\n nodes_missing_value_tracks_true)\n attr_pairs['nodes_hitrates'].append(1.)\n\n # Add leaf information for making prediction\n if mode == 'LEAF':\n flattened_weights = weights.flatten()\n factor = tree_weight\n # If the values stored at leaves are counts of possible classes, we\n # need convert them to probabilities by doing a normalization.\n if leaf_weights_are_counts:\n s = sum(flattened_weights)\n factor /= float(s) if s != 0. 
else 1.\n flattened_weights = [w * factor for w in flattened_weights]\n if len(flattened_weights) == 2 and is_classifier:\n flattened_weights = [flattened_weights[1]]\n\n # Note that attribute names for making prediction are different for\n # classifiers and regressors\n if is_classifier:\n for i, w in enumerate(flattened_weights):\n attr_pairs['class_treeids'].append(tree_id)\n attr_pairs['class_nodeids'].append(node_id)\n attr_pairs['class_ids'].append(i + weight_id_bias)\n attr_pairs['class_weights'].append(w)\n else:\n for i, w in enumerate(flattened_weights):\n attr_pairs['target_treeids'].append(tree_id)\n attr_pairs['target_nodeids'].append(node_id)\n attr_pairs['target_ids'].append(i + weight_id_bias)\n attr_pairs['target_weights'].append(w)\n\n\ndef add_tree_to_attribute_pairs(attr_pairs, is_classifier, tree, tree_id,\n tree_weight, weight_id_bias,\n leaf_weights_are_counts,\n adjust_threshold_for_sklearn=False,\n dtype=None):\n for i in range(tree.node_count):\n node_id = i\n weight = tree.value[i]\n\n if tree.children_left[i] > i or tree.children_right[i] > i:\n mode = 'BRANCH_LEQ'\n feat_id = tree.feature[i]\n threshold = tree.threshold[i]\n left_child_id = int(tree.children_left[i])\n right_child_id = int(tree.children_right[i])\n else:\n mode = 'LEAF'\n feat_id = 0\n threshold = 0.\n left_child_id = 0\n right_child_id = 0\n\n add_node(attr_pairs, is_classifier, tree_id, tree_weight, node_id,\n feat_id, mode, threshold, left_child_id, right_child_id,\n weight, weight_id_bias, leaf_weights_are_counts,\n adjust_threshold_for_sklearn=adjust_threshold_for_sklearn,\n dtype=dtype)\n\n\ndef add_tree_to_attribute_pairs_hist_gradient_boosting(\n attr_pairs, is_classifier, tree, tree_id,\n tree_weight, weight_id_bias,\n leaf_weights_are_counts,\n adjust_threshold_for_sklearn=False,\n dtype=None):\n for i, node in enumerate(tree.nodes):\n node_id = i\n weight = node['value']\n\n if node['is_leaf']:\n mode = 'LEAF'\n feat_id = 0\n threshold = 0.\n left_child_id = 0\n right_child_id = 0\n missing = False\n else:\n mode = 'BRANCH_LEQ'\n feat_id = node['feature_idx']\n try:\n threshold = node['threshold']\n except ValueError:\n threshold = node['num_threshold']\n left_child_id = node['left']\n right_child_id = node['right']\n missing = node['missing_go_to_left']\n\n add_node(attr_pairs, is_classifier, tree_id, tree_weight, node_id,\n feat_id, mode, threshold, left_child_id, right_child_id,\n weight, weight_id_bias, leaf_weights_are_counts,\n adjust_threshold_for_sklearn=adjust_threshold_for_sklearn,\n dtype=dtype, nodes_missing_value_tracks_true=missing)\n", "path": "skl2onnx/common/tree_ensemble.py"}]} | 2,762 | 150 |
gh_patches_debug_23848 | rasdani/github-patches | git_diff | cocotb__cocotb-2959 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Include logger name in log output
The culprit is logged, but the new "short" log format doesn't include the logger name so I don't get any useful information from this. IMO the default log format should include the logger name.
https://github.com/cocotb/cocotb/blob/c69454db92388a8915c99a35ca9cba06565fe4d5/cocotb/scheduler.py#L451
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cocotb/log.py`
Content:
```
1 # Copyright (c) 2013, 2018 Potential Ventures Ltd
2 # Copyright (c) 2013 SolarFlare Communications Inc
3 # All rights reserved.
4 #
5 # Redistribution and use in source and binary forms, with or without
6 # modification, are permitted provided that the following conditions are met:
7 # * Redistributions of source code must retain the above copyright
8 # notice, this list of conditions and the following disclaimer.
9 # * Redistributions in binary form must reproduce the above copyright
10 # notice, this list of conditions and the following disclaimer in the
11 # documentation and/or other materials provided with the distribution.
12 # * Neither the name of Potential Ventures Ltd,
13 # SolarFlare Communications Inc nor the
14 # names of its contributors may be used to endorse or promote products
15 # derived from this software without specific prior written permission.
16 #
17 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
18 # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
19 # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
20 # DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
21 # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
22 # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
24 # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
25 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
26 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
27
28 """
29 Everything related to logging
30 """
31
32 import logging
33 import os
34 import sys
35 import warnings
36
37 import cocotb.ANSI as ANSI
38 from cocotb import simulator
39 from cocotb.utils import get_sim_time, get_time_from_sim_steps, want_color_output
40
41 try:
42 _suppress = int(os.environ.get("COCOTB_REDUCED_LOG_FMT", "1"))
43 except ValueError:
44 _suppress = 1
45
46 # Column alignment
47 _LEVEL_CHARS = len("CRITICAL") # noqa
48 _RECORD_CHARS = 35 # noqa
49 _FILENAME_CHARS = 20 # noqa
50 _LINENO_CHARS = 4 # noqa
51 _FUNCNAME_CHARS = 31 # noqa
52
53 # Custom log level
54 logging.TRACE = 5
55 logging.addLevelName(5, "TRACE")
56
57 # Default log level if not overwritten by the user.
58 _COCOTB_LOG_LEVEL_DEFAULT = "INFO"
59
60
61 def default_config():
62 """Apply the default cocotb log formatting to the root logger.
63
64 This hooks up the logger to write to stdout, using either
65 :class:`SimColourLogFormatter` or :class:`SimLogFormatter` depending
66 on whether colored output is requested. It also adds a
67 :class:`SimTimeContextFilter` filter so that
68 :attr:`~logging.LogRecord.created_sim_time` is available to the formatter.
69
70 The logging level for cocotb logs is set based on the
71 :envvar:`COCOTB_LOG_LEVEL` environment variable, which defaults to ``INFO``.
72
73 If desired, this logging configuration can be overwritten by calling
74 ``logging.basicConfig(..., force=True)`` (in Python 3.8 onwards), or by
75 manually resetting the root logger instance.
76 An example of this can be found in the section on :ref:`rotating-logger`.
77
78 .. versionadded:: 1.4
79 """
80 # construct an appropriate handler
81 hdlr = logging.StreamHandler(sys.stdout)
82 hdlr.addFilter(SimTimeContextFilter())
83 if want_color_output():
84 hdlr.setFormatter(SimColourLogFormatter())
85 else:
86 hdlr.setFormatter(SimLogFormatter())
87
88 logging.setLoggerClass(SimBaseLog) # For backwards compatibility
89 logging.basicConfig()
90 logging.getLogger().handlers = [hdlr] # overwrite default handlers
91
92 # apply level settings for cocotb
93 log = logging.getLogger("cocotb")
94
95 try:
96 # All log levels are upper case, convert the user input for convenience.
97 level = os.environ["COCOTB_LOG_LEVEL"].upper()
98 except KeyError:
99 level = _COCOTB_LOG_LEVEL_DEFAULT
100
101 try:
102 log.setLevel(level)
103 except ValueError:
104 valid_levels = ("CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG", "TRACE")
105 raise ValueError(
106 "Invalid log level %r passed through the "
107 "COCOTB_LOG_LEVEL environment variable. Valid log "
108 "levels: %s" % (level, ", ".join(valid_levels))
109 )
110
111 # Notify GPI of log level, which it uses as an optimization to avoid
112 # calling into Python.
113 simulator.log_level(log.getEffectiveLevel())
114
115
116 class SimBaseLog(logging.getLoggerClass()):
117 """This class only exists for backwards compatibility"""
118
119 @property
120 def logger(self):
121 warnings.warn(
122 "the .logger attribute should not be used now that `SimLog` "
123 "returns a native logger instance directly.",
124 DeprecationWarning,
125 stacklevel=2,
126 )
127 return self
128
129 @property
130 def colour(self):
131 warnings.warn(
132 "the .colour attribute may be removed in future, use the "
133 "equivalent `cocotb.utils.want_color_output()` instead",
134 DeprecationWarning,
135 stacklevel=2,
136 )
137 return want_color_output()
138
139 def setLevel(self, level: int) -> None:
140 super().setLevel(level)
141 if self.name == "gpi":
142 simulator.log_level(level)
143
144
145 # this used to be a class, hence the unusual capitalization
146 def SimLog(name, ident=None):
147 """Like logging.getLogger, but append a numeric identifier to the name"""
148 if ident is not None:
149 name = f"{name}.0x{ident:x}"
150 return logging.getLogger(name)
151
152
153 class SimTimeContextFilter(logging.Filter):
154 """
155 A filter to inject simulator times into the log records.
156
157 This uses the approach described in the :ref:`Python logging cookbook <python:filters-contextual>`.
158
159 This adds the :attr:`~logging.LogRecord.created_sim_time` attribute.
160
161 .. versionadded:: 1.4
162 """
163
164 # needed to make our docs render well
165 def __init__(self):
166 """"""
167 super().__init__()
168
169 def filter(self, record):
170 try:
171 record.created_sim_time = get_sim_time()
172 except RecursionError:
173 # get_sim_time may try to log - if that happens, we can't
174 # attach a simulator time to this message.
175 record.created_sim_time = None
176 return True
177
178
179 class SimLogFormatter(logging.Formatter):
180 """Log formatter to provide consistent log message handling.
181
182 This will only add simulator timestamps if the handler object this
183 formatter is attached to has a :class:`SimTimeContextFilter` filter
184 attached, which cocotb ensures by default.
185 """
186
187 # Removes the arguments from the base class. Docstring needed to make
188 # sphinx happy.
189 def __init__(self):
190 """Takes no arguments."""
191 super().__init__()
192
193 # Justify and truncate
194 @staticmethod
195 def ljust(string, chars):
196 if len(string) > chars:
197 return ".." + string[(chars - 2) * -1 :]
198 return string.ljust(chars)
199
200 @staticmethod
201 def rjust(string, chars):
202 if len(string) > chars:
203 return ".." + string[(chars - 2) * -1 :]
204 return string.rjust(chars)
205
206 def _format(self, level, record, msg, coloured=False):
207 sim_time = getattr(record, "created_sim_time", None)
208 if sim_time is None:
209 sim_time_str = " -.--ns"
210 else:
211 time_ns = get_time_from_sim_steps(sim_time, "ns")
212 sim_time_str = f"{time_ns:6.2f}ns"
213 prefix = sim_time_str.rjust(11) + " " + level + " "
214 if not _suppress:
215 prefix += (
216 self.ljust(record.name, _RECORD_CHARS)
217 + self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS)
218 + ":"
219 + self.ljust(str(record.lineno), _LINENO_CHARS)
220 + " in "
221 + self.ljust(str(record.funcName), _FUNCNAME_CHARS)
222 + " "
223 )
224
225 # these lines are copied from the builtin logger
226 if record.exc_info:
227 # Cache the traceback text to avoid converting it multiple times
228 # (it's constant anyway)
229 if not record.exc_text:
230 record.exc_text = self.formatException(record.exc_info)
231 if record.exc_text:
232 if msg[-1:] != "\n":
233 msg = msg + "\n"
234 msg = msg + record.exc_text
235
236 prefix_len = len(prefix)
237 if coloured:
238 prefix_len -= len(level) - _LEVEL_CHARS
239 pad = "\n" + " " * (prefix_len)
240 return prefix + pad.join(msg.split("\n"))
241
242 def format(self, record):
243 """Prettify the log output, annotate with simulation time"""
244
245 msg = record.getMessage()
246 level = record.levelname.ljust(_LEVEL_CHARS)
247
248 return self._format(level, record, msg)
249
250
251 class SimColourLogFormatter(SimLogFormatter):
252 """Log formatter to provide consistent log message handling."""
253
254 loglevel2colour = {
255 logging.TRACE: "%s",
256 logging.DEBUG: "%s",
257 logging.INFO: "%s",
258 logging.WARNING: ANSI.COLOR_WARNING + "%s" + ANSI.COLOR_DEFAULT,
259 logging.ERROR: ANSI.COLOR_ERROR + "%s" + ANSI.COLOR_DEFAULT,
260 logging.CRITICAL: ANSI.COLOR_CRITICAL + "%s" + ANSI.COLOR_DEFAULT,
261 }
262
263 def format(self, record):
264 """Prettify the log output, annotate with simulation time"""
265
266 msg = record.getMessage()
267
268 # Need to colour each line in case coloring is applied in the message
269 msg = "\n".join(
270 [
271 SimColourLogFormatter.loglevel2colour.get(record.levelno, "%s") % line
272 for line in msg.split("\n")
273 ]
274 )
275 level = SimColourLogFormatter.loglevel2colour.get(
276 record.levelno, "%s"
277 ) % record.levelname.ljust(_LEVEL_CHARS)
278
279 return self._format(level, record, msg, coloured=True)
280
281
282 def _filter_from_c(logger_name, level):
283 return logging.getLogger(logger_name).isEnabledFor(level)
284
285
286 def _log_from_c(logger_name, level, filename, lineno, msg, function_name):
287 """
288 This is for use from the C world, and allows us to insert C stack
289 information.
290 """
291 logger = logging.getLogger(logger_name)
292 if logger.isEnabledFor(level):
293 record = logger.makeRecord(
294 logger.name, level, filename, lineno, msg, None, None, function_name
295 )
296 logger.handle(record)
297
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cocotb/log.py b/cocotb/log.py
--- a/cocotb/log.py
+++ b/cocotb/log.py
@@ -44,11 +44,11 @@
_suppress = 1
# Column alignment
-_LEVEL_CHARS = len("CRITICAL") # noqa
-_RECORD_CHARS = 35 # noqa
-_FILENAME_CHARS = 20 # noqa
-_LINENO_CHARS = 4 # noqa
-_FUNCNAME_CHARS = 31 # noqa
+_LEVEL_CHARS = len("CRITICAL")
+_RECORD_CHARS = 34
+_FILENAME_CHARS = 20
+_LINENO_CHARS = 4
+_FUNCNAME_CHARS = 31
# Custom log level
logging.TRACE = 5
@@ -210,11 +210,17 @@
else:
time_ns = get_time_from_sim_steps(sim_time, "ns")
sim_time_str = f"{time_ns:6.2f}ns"
- prefix = sim_time_str.rjust(11) + " " + level + " "
+ prefix = (
+ sim_time_str.rjust(11)
+ + " "
+ + level
+ + " "
+ + self.ljust(record.name, _RECORD_CHARS)
+ + " "
+ )
if not _suppress:
prefix += (
- self.ljust(record.name, _RECORD_CHARS)
- + self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS)
+ self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS)
+ ":"
+ self.ljust(str(record.lineno), _LINENO_CHARS)
+ " in "
| {"golden_diff": "diff --git a/cocotb/log.py b/cocotb/log.py\n--- a/cocotb/log.py\n+++ b/cocotb/log.py\n@@ -44,11 +44,11 @@\n _suppress = 1\n \n # Column alignment\n-_LEVEL_CHARS = len(\"CRITICAL\") # noqa\n-_RECORD_CHARS = 35 # noqa\n-_FILENAME_CHARS = 20 # noqa\n-_LINENO_CHARS = 4 # noqa\n-_FUNCNAME_CHARS = 31 # noqa\n+_LEVEL_CHARS = len(\"CRITICAL\")\n+_RECORD_CHARS = 34\n+_FILENAME_CHARS = 20\n+_LINENO_CHARS = 4\n+_FUNCNAME_CHARS = 31\n \n # Custom log level\n logging.TRACE = 5\n@@ -210,11 +210,17 @@\n else:\n time_ns = get_time_from_sim_steps(sim_time, \"ns\")\n sim_time_str = f\"{time_ns:6.2f}ns\"\n- prefix = sim_time_str.rjust(11) + \" \" + level + \" \"\n+ prefix = (\n+ sim_time_str.rjust(11)\n+ + \" \"\n+ + level\n+ + \" \"\n+ + self.ljust(record.name, _RECORD_CHARS)\n+ + \" \"\n+ )\n if not _suppress:\n prefix += (\n- self.ljust(record.name, _RECORD_CHARS)\n- + self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS)\n+ self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS)\n + \":\"\n + self.ljust(str(record.lineno), _LINENO_CHARS)\n + \" in \"\n", "issue": "Include logger name in log output\nThe culprit is logged, but the new \"short\" log format doesn't include the logger name so I don't get any useful information from this. IMO the default log format should include the logger name.\r\nhttps://github.com/cocotb/cocotb/blob/c69454db92388a8915c99a35ca9cba06565fe4d5/cocotb/scheduler.py#L451\n", "before_files": [{"content": "# Copyright (c) 2013, 2018 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nEverything related to logging\n\"\"\"\n\nimport logging\nimport os\nimport sys\nimport warnings\n\nimport cocotb.ANSI as ANSI\nfrom cocotb import simulator\nfrom cocotb.utils import get_sim_time, get_time_from_sim_steps, want_color_output\n\ntry:\n _suppress = int(os.environ.get(\"COCOTB_REDUCED_LOG_FMT\", \"1\"))\nexcept ValueError:\n _suppress = 1\n\n# Column alignment\n_LEVEL_CHARS = len(\"CRITICAL\") # noqa\n_RECORD_CHARS = 35 # noqa\n_FILENAME_CHARS = 20 # noqa\n_LINENO_CHARS = 4 # noqa\n_FUNCNAME_CHARS = 31 # noqa\n\n# Custom log level\nlogging.TRACE = 5\nlogging.addLevelName(5, \"TRACE\")\n\n# Default log level if not overwritten by the user.\n_COCOTB_LOG_LEVEL_DEFAULT = \"INFO\"\n\n\ndef default_config():\n \"\"\"Apply the default cocotb log formatting to the root logger.\n\n This hooks up the logger to write to stdout, using either\n :class:`SimColourLogFormatter` or :class:`SimLogFormatter` depending\n on whether colored output is requested. It also adds a\n :class:`SimTimeContextFilter` filter so that\n :attr:`~logging.LogRecord.created_sim_time` is available to the formatter.\n\n The logging level for cocotb logs is set based on the\n :envvar:`COCOTB_LOG_LEVEL` environment variable, which defaults to ``INFO``.\n\n If desired, this logging configuration can be overwritten by calling\n ``logging.basicConfig(..., force=True)`` (in Python 3.8 onwards), or by\n manually resetting the root logger instance.\n An example of this can be found in the section on :ref:`rotating-logger`.\n\n .. versionadded:: 1.4\n \"\"\"\n # construct an appropriate handler\n hdlr = logging.StreamHandler(sys.stdout)\n hdlr.addFilter(SimTimeContextFilter())\n if want_color_output():\n hdlr.setFormatter(SimColourLogFormatter())\n else:\n hdlr.setFormatter(SimLogFormatter())\n\n logging.setLoggerClass(SimBaseLog) # For backwards compatibility\n logging.basicConfig()\n logging.getLogger().handlers = [hdlr] # overwrite default handlers\n\n # apply level settings for cocotb\n log = logging.getLogger(\"cocotb\")\n\n try:\n # All log levels are upper case, convert the user input for convenience.\n level = os.environ[\"COCOTB_LOG_LEVEL\"].upper()\n except KeyError:\n level = _COCOTB_LOG_LEVEL_DEFAULT\n\n try:\n log.setLevel(level)\n except ValueError:\n valid_levels = (\"CRITICAL\", \"ERROR\", \"WARNING\", \"INFO\", \"DEBUG\", \"TRACE\")\n raise ValueError(\n \"Invalid log level %r passed through the \"\n \"COCOTB_LOG_LEVEL environment variable. 
Valid log \"\n \"levels: %s\" % (level, \", \".join(valid_levels))\n )\n\n # Notify GPI of log level, which it uses as an optimization to avoid\n # calling into Python.\n simulator.log_level(log.getEffectiveLevel())\n\n\nclass SimBaseLog(logging.getLoggerClass()):\n \"\"\"This class only exists for backwards compatibility\"\"\"\n\n @property\n def logger(self):\n warnings.warn(\n \"the .logger attribute should not be used now that `SimLog` \"\n \"returns a native logger instance directly.\",\n DeprecationWarning,\n stacklevel=2,\n )\n return self\n\n @property\n def colour(self):\n warnings.warn(\n \"the .colour attribute may be removed in future, use the \"\n \"equivalent `cocotb.utils.want_color_output()` instead\",\n DeprecationWarning,\n stacklevel=2,\n )\n return want_color_output()\n\n def setLevel(self, level: int) -> None:\n super().setLevel(level)\n if self.name == \"gpi\":\n simulator.log_level(level)\n\n\n# this used to be a class, hence the unusual capitalization\ndef SimLog(name, ident=None):\n \"\"\"Like logging.getLogger, but append a numeric identifier to the name\"\"\"\n if ident is not None:\n name = f\"{name}.0x{ident:x}\"\n return logging.getLogger(name)\n\n\nclass SimTimeContextFilter(logging.Filter):\n \"\"\"\n A filter to inject simulator times into the log records.\n\n This uses the approach described in the :ref:`Python logging cookbook <python:filters-contextual>`.\n\n This adds the :attr:`~logging.LogRecord.created_sim_time` attribute.\n\n .. versionadded:: 1.4\n \"\"\"\n\n # needed to make our docs render well\n def __init__(self):\n \"\"\"\"\"\"\n super().__init__()\n\n def filter(self, record):\n try:\n record.created_sim_time = get_sim_time()\n except RecursionError:\n # get_sim_time may try to log - if that happens, we can't\n # attach a simulator time to this message.\n record.created_sim_time = None\n return True\n\n\nclass SimLogFormatter(logging.Formatter):\n \"\"\"Log formatter to provide consistent log message handling.\n\n This will only add simulator timestamps if the handler object this\n formatter is attached to has a :class:`SimTimeContextFilter` filter\n attached, which cocotb ensures by default.\n \"\"\"\n\n # Removes the arguments from the base class. 
Docstring needed to make\n # sphinx happy.\n def __init__(self):\n \"\"\"Takes no arguments.\"\"\"\n super().__init__()\n\n # Justify and truncate\n @staticmethod\n def ljust(string, chars):\n if len(string) > chars:\n return \"..\" + string[(chars - 2) * -1 :]\n return string.ljust(chars)\n\n @staticmethod\n def rjust(string, chars):\n if len(string) > chars:\n return \"..\" + string[(chars - 2) * -1 :]\n return string.rjust(chars)\n\n def _format(self, level, record, msg, coloured=False):\n sim_time = getattr(record, \"created_sim_time\", None)\n if sim_time is None:\n sim_time_str = \" -.--ns\"\n else:\n time_ns = get_time_from_sim_steps(sim_time, \"ns\")\n sim_time_str = f\"{time_ns:6.2f}ns\"\n prefix = sim_time_str.rjust(11) + \" \" + level + \" \"\n if not _suppress:\n prefix += (\n self.ljust(record.name, _RECORD_CHARS)\n + self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS)\n + \":\"\n + self.ljust(str(record.lineno), _LINENO_CHARS)\n + \" in \"\n + self.ljust(str(record.funcName), _FUNCNAME_CHARS)\n + \" \"\n )\n\n # these lines are copied from the builtin logger\n if record.exc_info:\n # Cache the traceback text to avoid converting it multiple times\n # (it's constant anyway)\n if not record.exc_text:\n record.exc_text = self.formatException(record.exc_info)\n if record.exc_text:\n if msg[-1:] != \"\\n\":\n msg = msg + \"\\n\"\n msg = msg + record.exc_text\n\n prefix_len = len(prefix)\n if coloured:\n prefix_len -= len(level) - _LEVEL_CHARS\n pad = \"\\n\" + \" \" * (prefix_len)\n return prefix + pad.join(msg.split(\"\\n\"))\n\n def format(self, record):\n \"\"\"Prettify the log output, annotate with simulation time\"\"\"\n\n msg = record.getMessage()\n level = record.levelname.ljust(_LEVEL_CHARS)\n\n return self._format(level, record, msg)\n\n\nclass SimColourLogFormatter(SimLogFormatter):\n \"\"\"Log formatter to provide consistent log message handling.\"\"\"\n\n loglevel2colour = {\n logging.TRACE: \"%s\",\n logging.DEBUG: \"%s\",\n logging.INFO: \"%s\",\n logging.WARNING: ANSI.COLOR_WARNING + \"%s\" + ANSI.COLOR_DEFAULT,\n logging.ERROR: ANSI.COLOR_ERROR + \"%s\" + ANSI.COLOR_DEFAULT,\n logging.CRITICAL: ANSI.COLOR_CRITICAL + \"%s\" + ANSI.COLOR_DEFAULT,\n }\n\n def format(self, record):\n \"\"\"Prettify the log output, annotate with simulation time\"\"\"\n\n msg = record.getMessage()\n\n # Need to colour each line in case coloring is applied in the message\n msg = \"\\n\".join(\n [\n SimColourLogFormatter.loglevel2colour.get(record.levelno, \"%s\") % line\n for line in msg.split(\"\\n\")\n ]\n )\n level = SimColourLogFormatter.loglevel2colour.get(\n record.levelno, \"%s\"\n ) % record.levelname.ljust(_LEVEL_CHARS)\n\n return self._format(level, record, msg, coloured=True)\n\n\ndef _filter_from_c(logger_name, level):\n return logging.getLogger(logger_name).isEnabledFor(level)\n\n\ndef _log_from_c(logger_name, level, filename, lineno, msg, function_name):\n \"\"\"\n This is for use from the C world, and allows us to insert C stack\n information.\n \"\"\"\n logger = logging.getLogger(logger_name)\n if logger.isEnabledFor(level):\n record = logger.makeRecord(\n logger.name, level, filename, lineno, msg, None, None, function_name\n )\n logger.handle(record)\n", "path": "cocotb/log.py"}], "after_files": [{"content": "# Copyright (c) 2013, 2018 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following 
conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nEverything related to logging\n\"\"\"\n\nimport logging\nimport os\nimport sys\nimport warnings\n\nimport cocotb.ANSI as ANSI\nfrom cocotb import simulator\nfrom cocotb.utils import get_sim_time, get_time_from_sim_steps, want_color_output\n\ntry:\n _suppress = int(os.environ.get(\"COCOTB_REDUCED_LOG_FMT\", \"1\"))\nexcept ValueError:\n _suppress = 1\n\n# Column alignment\n_LEVEL_CHARS = len(\"CRITICAL\")\n_RECORD_CHARS = 34\n_FILENAME_CHARS = 20\n_LINENO_CHARS = 4\n_FUNCNAME_CHARS = 31\n\n# Custom log level\nlogging.TRACE = 5\nlogging.addLevelName(5, \"TRACE\")\n\n# Default log level if not overwritten by the user.\n_COCOTB_LOG_LEVEL_DEFAULT = \"INFO\"\n\n\ndef default_config():\n \"\"\"Apply the default cocotb log formatting to the root logger.\n\n This hooks up the logger to write to stdout, using either\n :class:`SimColourLogFormatter` or :class:`SimLogFormatter` depending\n on whether colored output is requested. It also adds a\n :class:`SimTimeContextFilter` filter so that\n :attr:`~logging.LogRecord.created_sim_time` is available to the formatter.\n\n The logging level for cocotb logs is set based on the\n :envvar:`COCOTB_LOG_LEVEL` environment variable, which defaults to ``INFO``.\n\n If desired, this logging configuration can be overwritten by calling\n ``logging.basicConfig(..., force=True)`` (in Python 3.8 onwards), or by\n manually resetting the root logger instance.\n An example of this can be found in the section on :ref:`rotating-logger`.\n\n .. 
versionadded:: 1.4\n \"\"\"\n # construct an appropriate handler\n hdlr = logging.StreamHandler(sys.stdout)\n hdlr.addFilter(SimTimeContextFilter())\n if want_color_output():\n hdlr.setFormatter(SimColourLogFormatter())\n else:\n hdlr.setFormatter(SimLogFormatter())\n\n logging.setLoggerClass(SimBaseLog) # For backwards compatibility\n logging.basicConfig()\n logging.getLogger().handlers = [hdlr] # overwrite default handlers\n\n # apply level settings for cocotb\n log = logging.getLogger(\"cocotb\")\n\n try:\n # All log levels are upper case, convert the user input for convenience.\n level = os.environ[\"COCOTB_LOG_LEVEL\"].upper()\n except KeyError:\n level = _COCOTB_LOG_LEVEL_DEFAULT\n\n try:\n log.setLevel(level)\n except ValueError:\n valid_levels = (\"CRITICAL\", \"ERROR\", \"WARNING\", \"INFO\", \"DEBUG\", \"TRACE\")\n raise ValueError(\n \"Invalid log level %r passed through the \"\n \"COCOTB_LOG_LEVEL environment variable. Valid log \"\n \"levels: %s\" % (level, \", \".join(valid_levels))\n )\n\n # Notify GPI of log level, which it uses as an optimization to avoid\n # calling into Python.\n simulator.log_level(log.getEffectiveLevel())\n\n\nclass SimBaseLog(logging.getLoggerClass()):\n \"\"\"This class only exists for backwards compatibility\"\"\"\n\n @property\n def logger(self):\n warnings.warn(\n \"the .logger attribute should not be used now that `SimLog` \"\n \"returns a native logger instance directly.\",\n DeprecationWarning,\n stacklevel=2,\n )\n return self\n\n @property\n def colour(self):\n warnings.warn(\n \"the .colour attribute may be removed in future, use the \"\n \"equivalent `cocotb.utils.want_color_output()` instead\",\n DeprecationWarning,\n stacklevel=2,\n )\n return want_color_output()\n\n def setLevel(self, level: int) -> None:\n super().setLevel(level)\n if self.name == \"gpi\":\n simulator.log_level(level)\n\n\n# this used to be a class, hence the unusual capitalization\ndef SimLog(name, ident=None):\n \"\"\"Like logging.getLogger, but append a numeric identifier to the name\"\"\"\n if ident is not None:\n name = f\"{name}.0x{ident:x}\"\n return logging.getLogger(name)\n\n\nclass SimTimeContextFilter(logging.Filter):\n \"\"\"\n A filter to inject simulator times into the log records.\n\n This uses the approach described in the :ref:`Python logging cookbook <python:filters-contextual>`.\n\n This adds the :attr:`~logging.LogRecord.created_sim_time` attribute.\n\n .. versionadded:: 1.4\n \"\"\"\n\n # needed to make our docs render well\n def __init__(self):\n \"\"\"\"\"\"\n super().__init__()\n\n def filter(self, record):\n try:\n record.created_sim_time = get_sim_time()\n except RecursionError:\n # get_sim_time may try to log - if that happens, we can't\n # attach a simulator time to this message.\n record.created_sim_time = None\n return True\n\n\nclass SimLogFormatter(logging.Formatter):\n \"\"\"Log formatter to provide consistent log message handling.\n\n This will only add simulator timestamps if the handler object this\n formatter is attached to has a :class:`SimTimeContextFilter` filter\n attached, which cocotb ensures by default.\n \"\"\"\n\n # Removes the arguments from the base class. 
Docstring needed to make\n # sphinx happy.\n def __init__(self):\n \"\"\"Takes no arguments.\"\"\"\n super().__init__()\n\n # Justify and truncate\n @staticmethod\n def ljust(string, chars):\n if len(string) > chars:\n return \"..\" + string[(chars - 2) * -1 :]\n return string.ljust(chars)\n\n @staticmethod\n def rjust(string, chars):\n if len(string) > chars:\n return \"..\" + string[(chars - 2) * -1 :]\n return string.rjust(chars)\n\n def _format(self, level, record, msg, coloured=False):\n sim_time = getattr(record, \"created_sim_time\", None)\n if sim_time is None:\n sim_time_str = \" -.--ns\"\n else:\n time_ns = get_time_from_sim_steps(sim_time, \"ns\")\n sim_time_str = f\"{time_ns:6.2f}ns\"\n prefix = (\n sim_time_str.rjust(11)\n + \" \"\n + level\n + \" \"\n + self.ljust(record.name, _RECORD_CHARS)\n + \" \"\n )\n if not _suppress:\n prefix += (\n self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS)\n + \":\"\n + self.ljust(str(record.lineno), _LINENO_CHARS)\n + \" in \"\n + self.ljust(str(record.funcName), _FUNCNAME_CHARS)\n + \" \"\n )\n\n # these lines are copied from the builtin logger\n if record.exc_info:\n # Cache the traceback text to avoid converting it multiple times\n # (it's constant anyway)\n if not record.exc_text:\n record.exc_text = self.formatException(record.exc_info)\n if record.exc_text:\n if msg[-1:] != \"\\n\":\n msg = msg + \"\\n\"\n msg = msg + record.exc_text\n\n prefix_len = len(prefix)\n if coloured:\n prefix_len -= len(level) - _LEVEL_CHARS\n pad = \"\\n\" + \" \" * (prefix_len)\n return prefix + pad.join(msg.split(\"\\n\"))\n\n def format(self, record):\n \"\"\"Prettify the log output, annotate with simulation time\"\"\"\n\n msg = record.getMessage()\n level = record.levelname.ljust(_LEVEL_CHARS)\n\n return self._format(level, record, msg)\n\n\nclass SimColourLogFormatter(SimLogFormatter):\n \"\"\"Log formatter to provide consistent log message handling.\"\"\"\n\n loglevel2colour = {\n logging.TRACE: \"%s\",\n logging.DEBUG: \"%s\",\n logging.INFO: \"%s\",\n logging.WARNING: ANSI.COLOR_WARNING + \"%s\" + ANSI.COLOR_DEFAULT,\n logging.ERROR: ANSI.COLOR_ERROR + \"%s\" + ANSI.COLOR_DEFAULT,\n logging.CRITICAL: ANSI.COLOR_CRITICAL + \"%s\" + ANSI.COLOR_DEFAULT,\n }\n\n def format(self, record):\n \"\"\"Prettify the log output, annotate with simulation time\"\"\"\n\n msg = record.getMessage()\n\n # Need to colour each line in case coloring is applied in the message\n msg = \"\\n\".join(\n [\n SimColourLogFormatter.loglevel2colour.get(record.levelno, \"%s\") % line\n for line in msg.split(\"\\n\")\n ]\n )\n level = SimColourLogFormatter.loglevel2colour.get(\n record.levelno, \"%s\"\n ) % record.levelname.ljust(_LEVEL_CHARS)\n\n return self._format(level, record, msg, coloured=True)\n\n\ndef _filter_from_c(logger_name, level):\n return logging.getLogger(logger_name).isEnabledFor(level)\n\n\ndef _log_from_c(logger_name, level, filename, lineno, msg, function_name):\n \"\"\"\n This is for use from the C world, and allows us to insert C stack\n information.\n \"\"\"\n logger = logging.getLogger(logger_name)\n if logger.isEnabledFor(level):\n record = logger.makeRecord(\n logger.name, level, filename, lineno, msg, None, None, function_name\n )\n logger.handle(record)\n", "path": "cocotb/log.py"}]} | 3,555 | 382 |
gh_patches_debug_18985 | rasdani/github-patches | git_diff | oppia__oppia-6309 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
InteractiveMap interaction: in the rule editor, clicks on the map are not displayed correctly
Create an exploration with a map interaction. Add a rule and click on the map to choose the point the rule applies to. A marker should appear where you click, but it does not.
Save and close the rule, then re-open it. The marker is now displayed correctly.
Create a new rule. Before being clicked on the map should be blank, but instead it displays the position of the marker from the previous rule.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `extensions/dependencies/dependencies_config.py`
Content:
```
1 # coding: utf-8
2 #
3 # Copyright 2014 The Oppia Authors. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS-IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 """Configuration for JavaScript library dependencies."""
18
19
20 # A dict mapping dependency ids to the Angular module names they
21 # should insert when the Angular app is first initialized.
22 DEPENDENCIES_TO_ANGULAR_MODULES_DICT = {
23 'codemirror': ['ui.codemirror'],
24 'google_maps': ['ui.map'],
25 'guppy': [],
26 'logic_proof': [],
27 'math_expressions': [],
28 'midijs': [],
29 'pencilcode': [],
30 'skulpt': [],
31 }
32
```
Path: `extensions/interactions/InteractiveMap/InteractiveMap.py`
Content:
```
1 # coding: utf-8
2 #
3 # Copyright 2014 The Oppia Authors. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, softwar
12 # distributed under the License is distributed on an "AS-IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 """Python configuration for InteractiveMap interaction."""
18
19 from extensions.interactions import base
20
21
22 class InteractiveMap(base.BaseInteraction):
23 """Interaction for pinpointing a location on a map."""
24
25 name = 'World Map'
26 description = 'Allows learners to specify a position on a world map.'
27 display_mode = base.DISPLAY_MODE_SUPPLEMENTAL
28 is_trainable = False
29 _dependency_ids = ['google_maps']
30 answer_type = 'CoordTwoDim'
31 instructions = 'Click on the map'
32 narrow_instructions = 'View map'
33 needs_summary = True
34 # There needs to be a way to pass marker location so that an answer can be
35 # conveyed meaningfully to the learner. Once this issue is fixed,
36 # InteractiveMap interaction can be supported by the solution feature.
37 can_have_solution = False
38 show_generic_submit_button = False
39
40 _customization_arg_specs = [{
41 'name': 'latitude',
42 'description': 'Starting center latitude (-90 to 90)',
43 'schema': {
44 'type': 'float',
45 'validators': [{
46 'id': 'is_at_least',
47 'min_value': -90.0,
48 }, {
49 'id': 'is_at_most',
50 'max_value': 90.0,
51 }]
52 },
53 'default_value': 0.0,
54 }, {
55 'name': 'longitude',
56 'description': 'Starting center longitude (-180 to 180)',
57 'schema': {
58 'type': 'float',
59 'validators': [{
60 'id': 'is_at_least',
61 'min_value': -180.0,
62 }, {
63 'id': 'is_at_most',
64 'max_value': 180.0,
65 }]
66 },
67 'default_value': 0.0,
68 }, {
69 'name': 'zoom',
70 'description': 'Starting zoom level (0 shows the entire earth)',
71 'schema': {
72 'type': 'float',
73 },
74 'default_value': 0.0,
75 }]
76
77 _answer_visualization_specs = [{
78 # Table with answer counts for top N answers.
79 'id': 'FrequencyTable',
80 'options': {
81 'column_headers': ['Answer', 'Count'],
82 'title': 'Top 10 answers',
83 },
84 'calculation_id': 'Top10AnswerFrequencies',
85 'addressed_info_is_supported': True,
86 }]
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/extensions/dependencies/dependencies_config.py b/extensions/dependencies/dependencies_config.py
--- a/extensions/dependencies/dependencies_config.py
+++ b/extensions/dependencies/dependencies_config.py
@@ -21,7 +21,7 @@
# should insert when the Angular app is first initialized.
DEPENDENCIES_TO_ANGULAR_MODULES_DICT = {
'codemirror': ['ui.codemirror'],
- 'google_maps': ['ui.map'],
+ 'ui_leaflet': ['ui-leaflet'],
'guppy': [],
'logic_proof': [],
'math_expressions': [],
diff --git a/extensions/interactions/InteractiveMap/InteractiveMap.py b/extensions/interactions/InteractiveMap/InteractiveMap.py
--- a/extensions/interactions/InteractiveMap/InteractiveMap.py
+++ b/extensions/interactions/InteractiveMap/InteractiveMap.py
@@ -26,7 +26,7 @@
description = 'Allows learners to specify a position on a world map.'
display_mode = base.DISPLAY_MODE_SUPPLEMENTAL
is_trainable = False
- _dependency_ids = ['google_maps']
+ _dependency_ids = ['ui_leaflet']
answer_type = 'CoordTwoDim'
instructions = 'Click on the map'
narrow_instructions = 'View map'
| {"golden_diff": "diff --git a/extensions/dependencies/dependencies_config.py b/extensions/dependencies/dependencies_config.py\n--- a/extensions/dependencies/dependencies_config.py\n+++ b/extensions/dependencies/dependencies_config.py\n@@ -21,7 +21,7 @@\n # should insert when the Angular app is first initialized.\n DEPENDENCIES_TO_ANGULAR_MODULES_DICT = {\n 'codemirror': ['ui.codemirror'],\n- 'google_maps': ['ui.map'],\n+ 'ui_leaflet': ['ui-leaflet'],\n 'guppy': [],\n 'logic_proof': [],\n 'math_expressions': [],\ndiff --git a/extensions/interactions/InteractiveMap/InteractiveMap.py b/extensions/interactions/InteractiveMap/InteractiveMap.py\n--- a/extensions/interactions/InteractiveMap/InteractiveMap.py\n+++ b/extensions/interactions/InteractiveMap/InteractiveMap.py\n@@ -26,7 +26,7 @@\n description = 'Allows learners to specify a position on a world map.'\n display_mode = base.DISPLAY_MODE_SUPPLEMENTAL\n is_trainable = False\n- _dependency_ids = ['google_maps']\n+ _dependency_ids = ['ui_leaflet']\n answer_type = 'CoordTwoDim'\n instructions = 'Click on the map'\n narrow_instructions = 'View map'\n", "issue": "InteractiveMap interaction: in the rule editor, clicks on the map are not displayed correctly\nCreate an exploration with a map interaction. Add a rule and click on the map to choose the point the rule applies to. A marker should appear where you click, but it does not.\n\nSave and close the rule, then re-open it. The marker is now displayed correctly.\n\nCreate a new rule. Before being clicked on the map should be blank, but instead it displays the position of the marker from the previous rule.\n\n", "before_files": [{"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Configuration for JavaScript library dependencies.\"\"\"\n\n\n# A dict mapping dependency ids to the Angular module names they\n# should insert when the Angular app is first initialized.\nDEPENDENCIES_TO_ANGULAR_MODULES_DICT = {\n 'codemirror': ['ui.codemirror'],\n 'google_maps': ['ui.map'],\n 'guppy': [],\n 'logic_proof': [],\n 'math_expressions': [],\n 'midijs': [],\n 'pencilcode': [],\n 'skulpt': [],\n}\n", "path": "extensions/dependencies/dependencies_config.py"}, {"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softwar\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Python configuration for InteractiveMap interaction.\"\"\"\n\nfrom extensions.interactions import base\n\n\nclass InteractiveMap(base.BaseInteraction):\n \"\"\"Interaction for pinpointing a location on a map.\"\"\"\n\n name = 'World Map'\n description = 'Allows learners to specify a position on a world map.'\n display_mode = base.DISPLAY_MODE_SUPPLEMENTAL\n is_trainable = False\n _dependency_ids = ['google_maps']\n answer_type = 'CoordTwoDim'\n instructions = 'Click on the map'\n narrow_instructions = 'View map'\n needs_summary = True\n # There needs to be a way to pass marker location so that an answer can be\n # conveyed meaningfully to the learner. Once this issue is fixed,\n # InteractiveMap interaction can be supported by the solution feature.\n can_have_solution = False\n show_generic_submit_button = False\n\n _customization_arg_specs = [{\n 'name': 'latitude',\n 'description': 'Starting center latitude (-90 to 90)',\n 'schema': {\n 'type': 'float',\n 'validators': [{\n 'id': 'is_at_least',\n 'min_value': -90.0,\n }, {\n 'id': 'is_at_most',\n 'max_value': 90.0,\n }]\n },\n 'default_value': 0.0,\n }, {\n 'name': 'longitude',\n 'description': 'Starting center longitude (-180 to 180)',\n 'schema': {\n 'type': 'float',\n 'validators': [{\n 'id': 'is_at_least',\n 'min_value': -180.0,\n }, {\n 'id': 'is_at_most',\n 'max_value': 180.0,\n }]\n },\n 'default_value': 0.0,\n }, {\n 'name': 'zoom',\n 'description': 'Starting zoom level (0 shows the entire earth)',\n 'schema': {\n 'type': 'float',\n },\n 'default_value': 0.0,\n }]\n\n _answer_visualization_specs = [{\n # Table with answer counts for top N answers.\n 'id': 'FrequencyTable',\n 'options': {\n 'column_headers': ['Answer', 'Count'],\n 'title': 'Top 10 answers',\n },\n 'calculation_id': 'Top10AnswerFrequencies',\n 'addressed_info_is_supported': True,\n }]\n", "path": "extensions/interactions/InteractiveMap/InteractiveMap.py"}], "after_files": [{"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Configuration for JavaScript library dependencies.\"\"\"\n\n\n# A dict mapping dependency ids to the Angular module names they\n# should insert when the Angular app is first initialized.\nDEPENDENCIES_TO_ANGULAR_MODULES_DICT = {\n 'codemirror': ['ui.codemirror'],\n 'ui_leaflet': ['ui-leaflet'],\n 'guppy': [],\n 'logic_proof': [],\n 'math_expressions': [],\n 'midijs': [],\n 'pencilcode': [],\n 'skulpt': [],\n}\n", "path": "extensions/dependencies/dependencies_config.py"}, {"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softwar\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Python configuration for InteractiveMap interaction.\"\"\"\n\nfrom extensions.interactions import base\n\n\nclass InteractiveMap(base.BaseInteraction):\n \"\"\"Interaction for pinpointing a location on a map.\"\"\"\n\n name = 'World Map'\n description = 'Allows learners to specify a position on a world map.'\n display_mode = base.DISPLAY_MODE_SUPPLEMENTAL\n is_trainable = False\n _dependency_ids = ['ui_leaflet']\n answer_type = 'CoordTwoDim'\n instructions = 'Click on the map'\n narrow_instructions = 'View map'\n needs_summary = True\n # There needs to be a way to pass marker location so that an answer can be\n # conveyed meaningfully to the learner. 
Once this issue is fixed,\n # InteractiveMap interaction can be supported by the solution feature.\n can_have_solution = False\n show_generic_submit_button = False\n\n _customization_arg_specs = [{\n 'name': 'latitude',\n 'description': 'Starting center latitude (-90 to 90)',\n 'schema': {\n 'type': 'float',\n 'validators': [{\n 'id': 'is_at_least',\n 'min_value': -90.0,\n }, {\n 'id': 'is_at_most',\n 'max_value': 90.0,\n }]\n },\n 'default_value': 0.0,\n }, {\n 'name': 'longitude',\n 'description': 'Starting center longitude (-180 to 180)',\n 'schema': {\n 'type': 'float',\n 'validators': [{\n 'id': 'is_at_least',\n 'min_value': -180.0,\n }, {\n 'id': 'is_at_most',\n 'max_value': 180.0,\n }]\n },\n 'default_value': 0.0,\n }, {\n 'name': 'zoom',\n 'description': 'Starting zoom level (0 shows the entire earth)',\n 'schema': {\n 'type': 'float',\n },\n 'default_value': 0.0,\n }]\n\n _answer_visualization_specs = [{\n # Table with answer counts for top N answers.\n 'id': 'FrequencyTable',\n 'options': {\n 'column_headers': ['Answer', 'Count'],\n 'title': 'Top 10 answers',\n },\n 'calculation_id': 'Top10AnswerFrequencies',\n 'addressed_info_is_supported': True,\n }]\n", "path": "extensions/interactions/InteractiveMap/InteractiveMap.py"}]} | 1,527 | 276 |
gh_patches_debug_8482 | rasdani/github-patches | git_diff | airctic__icevision-910 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Efficientdet inference returns wrong bbox predictions
## 🐛 Bug
When running inference on efficientdet models, the predictions are squeezed to only fit square aspect ratio. This problem is only visible when running efficientdet in rectangular input shape (eg 512x768). Here is a screenshot of default behavior:

Note that predictions are squeezed, seemingly to only square image resolution. I have discovered that the bug comes from the `process_infer_record` function, where the image input shape is passed to effdet in the wrong notation (H, W instead of W, H).
I applied that fix and the result is working as expected:

**To Reproduce**
Steps to reproduce the behavior:
1. Train efficientdet model in rectangular image input shape
2. Run inference
--- END ISSUE ---
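The swap the issue describes can be shown in isolation: `image.shape` on a `(C, H, W)` tensor yields height before width, while EffDet expects the size as `(W, H)`. Here is a minimal, self-contained sketch of that coordinate swap; `infer_size_for_effdet` is an illustrative helper name, not part of icevision.

```python
import torch


def infer_size_for_effdet(image: torch.Tensor) -> tuple:
    """Return the (W, H) size EffDet expects from a (C, H, W) image tensor."""
    n_channels, image_height, image_width = image.shape
    # image.shape[-2:] would give (H, W); EffDet wants (W, H),
    # so the two dimensions are swapped explicitly.
    return (image_width, image_height)


# A 512x768 (H x W) input keeps its rectangular shape instead of being squeezed.
image = torch.zeros(3, 512, 768)
assert infer_size_for_effdet(image) == (768, 512)
```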
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `icevision/models/ross/efficientdet/dataloaders.py`
Content:
```
1 __all__ = [
2 "build_train_batch",
3 "build_valid_batch",
4 "build_infer_batch",
5 "train_dl",
6 "valid_dl",
7 "infer_dl",
8 ]
9
10 from icevision.imports import *
11 from icevision.models.utils import *
12
13
14 def train_dl(dataset, batch_tfms=None, **dataloader_kwargs) -> DataLoader:
15 """A `DataLoader` with a custom `collate_fn` that batches items as required for training the model.
16
17 # Arguments
18 dataset: Possibly a `Dataset` object, but more generally, any `Sequence` that returns records.
19 batch_tfms: Transforms to be applied at the batch level.
20 **dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch `DataLoader`.
21 The parameter `collate_fn` is already defined internally and cannot be passed here.
22
23 # Returns
24 A Pytorch `DataLoader`.
25 """
26 return transform_dl(
27 dataset=dataset,
28 build_batch=build_train_batch,
29 batch_tfms=batch_tfms,
30 **dataloader_kwargs
31 )
32
33
34 def valid_dl(dataset, batch_tfms=None, **dataloader_kwargs) -> DataLoader:
35 """A `DataLoader` with a custom `collate_fn` that batches items as required for validating the model.
36
37 # Arguments
38 dataset: Possibly a `Dataset` object, but more generally, any `Sequence` that returns records.
39 batch_tfms: Transforms to be applied at the batch level.
40 **dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch `DataLoader`.
41 The parameter `collate_fn` is already defined internally and cannot be passed here.
42
43 # Returns
44 A Pytorch `DataLoader`.
45 """
46 return transform_dl(
47 dataset=dataset,
48 build_batch=build_valid_batch,
49 batch_tfms=batch_tfms,
50 **dataloader_kwargs
51 )
52
53
54 def infer_dl(dataset, batch_tfms=None, **dataloader_kwargs) -> DataLoader:
55 """A `DataLoader` with a custom `collate_fn` that batches items as required for inferring the model.
56
57 # Arguments
58 dataset: Possibly a `Dataset` object, but more generally, any `Sequence` that returns records.
59 batch_tfms: Transforms to be applied at the batch level.
60 **dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch `DataLoader`.
61 The parameter `collate_fn` is already defined internally and cannot be passed here.
62
63 # Returns
64 A Pytorch `DataLoader`.
65 """
66 return transform_dl(
67 dataset=dataset,
68 build_batch=build_infer_batch,
69 batch_tfms=batch_tfms,
70 **dataloader_kwargs
71 )
72
73
74 def build_train_batch(records):
75 """Builds a batch in the format required by the model when training.
76
77 # Arguments
78 records: A `Sequence` of records.
79
80 # Returns
81 A tuple with two items. The first will be a tuple like `(images, targets)`,
82 in the input format required by the model. The second will be a list
83 of the input records.
84
85 # Examples
86
87 Use the result of this function to feed the model.
88 ```python
89 batch, records = build_train_batch(records)
90 outs = model(*batch)
91 ```
92 """
93 batch_images, batch_bboxes, batch_classes = zip(
94 *(process_train_record(record) for record in records)
95 )
96
97 # convert to tensors
98 batch_images = torch.stack(batch_images)
99 batch_bboxes = [tensor(bboxes, dtype=torch.float32) for bboxes in batch_bboxes]
100 batch_classes = [tensor(classes, dtype=torch.float32) for classes in batch_classes]
101
102 # convert to EffDet interface
103 targets = dict(bbox=batch_bboxes, cls=batch_classes)
104
105 return (batch_images, targets), records
106
107
108 def build_valid_batch(records):
109 """Builds a batch in the format required by the model when validating.
110
111 # Arguments
112 records: A `Sequence` of records.
113
114 # Returns
115 A tuple with two items. The first will be a tuple like `(images, targets)`,
116 in the input format required by the model. The second will be a list
117 of the input records.
118
119 # Examples
120
121 Use the result of this function to feed the model.
122 ```python
123 batch, records = build_valid_batch(records)
124 outs = model(*batch)
125 ```
126 """
127 (batch_images, targets), records = build_train_batch(records)
128
129 # convert to EffDet interface, when not training, dummy size and scale is required
130 targets = dict(img_size=None, img_scale=None, **targets)
131
132 return (batch_images, targets), records
133
134
135 def build_infer_batch(records):
136 """Builds a batch in the format required by the model when doing inference.
137
138 # Arguments
139 records: A `Sequence` of records.
140
141 # Returns
142 A tuple with two items. The first will be a tuple like `(images, targets)`,
143 in the input format required by the model. The second will be a list
144 of the input records.
145 Use the result of this function to feed the model.
146 ```python
147 batch, records = build_infer_batch(records)
148 outs = model(*batch)
149 ```
150 """
151 batch_images, batch_sizes, batch_scales = zip(
152 *(process_infer_record(record) for record in records)
153 )
154
155 # convert to tensors
156 batch_images = torch.stack(batch_images)
157 batch_sizes = tensor(batch_sizes, dtype=torch.float32)
158 batch_scales = tensor(batch_scales, dtype=torch.float32)
159
160 # convert to EffDet interface
161 targets = dict(img_size=batch_sizes, img_scale=batch_scales)
162
163 return (batch_images, targets), records
164
165
166 def process_train_record(record) -> tuple:
167 """Extracts information from record and prepares a format required by the EffDet training"""
168 image = im2tensor(record.img)
169 # background and dummy if no label in record
170 classes = record.detection.label_ids if record.detection.label_ids else [0]
171 bboxes = (
172 [bbox.yxyx for bbox in record.detection.bboxes]
173 if len(record.detection.label_ids) > 0
174 else [[0, 0, 0, 0]]
175 )
176 return image, bboxes, classes
177
178
179 def process_infer_record(record) -> tuple:
180 """Extracts information from record and prepares a format required by the EffDet inference"""
181 image = im2tensor(record.img)
182 image_size = image.shape[-2:]
183 image_scale = 1.0
184
185 return image, image_size, image_scale
186
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/icevision/models/ross/efficientdet/dataloaders.py b/icevision/models/ross/efficientdet/dataloaders.py
--- a/icevision/models/ross/efficientdet/dataloaders.py
+++ b/icevision/models/ross/efficientdet/dataloaders.py
@@ -179,7 +179,7 @@
def process_infer_record(record) -> tuple:
"""Extracts information from record and prepares a format required by the EffDet inference"""
image = im2tensor(record.img)
- image_size = image.shape[-2:]
+ n_channels, image_height, image_width = image.shape
image_scale = 1.0
-
- return image, image_size, image_scale
+ # EffDet expects image size to be passed in W, H notation
+ return image, (image_width, image_height), image_scale
| {"golden_diff": "diff --git a/icevision/models/ross/efficientdet/dataloaders.py b/icevision/models/ross/efficientdet/dataloaders.py\n--- a/icevision/models/ross/efficientdet/dataloaders.py\n+++ b/icevision/models/ross/efficientdet/dataloaders.py\n@@ -179,7 +179,7 @@\n def process_infer_record(record) -> tuple:\n \"\"\"Extracts information from record and prepares a format required by the EffDet inference\"\"\"\n image = im2tensor(record.img)\n- image_size = image.shape[-2:]\n+ n_channels, image_height, image_width = image.shape\n image_scale = 1.0\n-\n- return image, image_size, image_scale\n+ # EffDet expects image size to be passed in W, H notation\n+ return image, (image_width, image_height), image_scale\n", "issue": "Efficientdet inference returns wrong bbox predictions\n## \ud83d\udc1b Bug\r\nWhen running inference on efficientdet models, the predictions are squeezed to only fit square aspect ratio. This problem is only visible when running efficientdet in rectangular input shape (eg 512x768). Here is a screenshot of default behavior:\r\n\r\n\r\nNote that predictions are squezzed, seemingly to only square image resolution. I have discovered that the bug comes from `process_infer_record` function, where the image input shape is passed to effdet in the wrong notation (H, W instead of W, H).\r\n\r\nI applied that fix and the result is working as expected:\r\n\r\n\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Train efficientdet model in rectangular image input shape\r\n2. Run inference\r\n\r\n\n", "before_files": [{"content": "__all__ = [\n \"build_train_batch\",\n \"build_valid_batch\",\n \"build_infer_batch\",\n \"train_dl\",\n \"valid_dl\",\n \"infer_dl\",\n]\n\nfrom icevision.imports import *\nfrom icevision.models.utils import *\n\n\ndef train_dl(dataset, batch_tfms=None, **dataloader_kwargs) -> DataLoader:\n \"\"\"A `DataLoader` with a custom `collate_fn` that batches items as required for training the model.\n\n # Arguments\n dataset: Possibly a `Dataset` object, but more generally, any `Sequence` that returns records.\n batch_tfms: Transforms to be applied at the batch level.\n **dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch `DataLoader`.\n The parameter `collate_fn` is already defined internally and cannot be passed here.\n\n # Returns\n A Pytorch `DataLoader`.\n \"\"\"\n return transform_dl(\n dataset=dataset,\n build_batch=build_train_batch,\n batch_tfms=batch_tfms,\n **dataloader_kwargs\n )\n\n\ndef valid_dl(dataset, batch_tfms=None, **dataloader_kwargs) -> DataLoader:\n \"\"\"A `DataLoader` with a custom `collate_fn` that batches items as required for validating the model.\n\n # Arguments\n dataset: Possibly a `Dataset` object, but more generally, any `Sequence` that returns records.\n batch_tfms: Transforms to be applied at the batch level.\n **dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch `DataLoader`.\n The parameter `collate_fn` is already defined internally and cannot be passed here.\n\n # Returns\n A Pytorch `DataLoader`.\n \"\"\"\n return transform_dl(\n dataset=dataset,\n build_batch=build_valid_batch,\n batch_tfms=batch_tfms,\n **dataloader_kwargs\n )\n\n\ndef infer_dl(dataset, batch_tfms=None, **dataloader_kwargs) -> DataLoader:\n \"\"\"A `DataLoader` with a custom `collate_fn` that batches items as required for inferring the model.\n\n # Arguments\n dataset: Possibly a `Dataset` object, but more generally, any `Sequence` that returns records.\n batch_tfms: Transforms to be 
applied at the batch level.\n **dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch `DataLoader`.\n The parameter `collate_fn` is already defined internally and cannot be passed here.\n\n # Returns\n A Pytorch `DataLoader`.\n \"\"\"\n return transform_dl(\n dataset=dataset,\n build_batch=build_infer_batch,\n batch_tfms=batch_tfms,\n **dataloader_kwargs\n )\n\n\ndef build_train_batch(records):\n \"\"\"Builds a batch in the format required by the model when training.\n\n # Arguments\n records: A `Sequence` of records.\n\n # Returns\n A tuple with two items. The first will be a tuple like `(images, targets)`,\n in the input format required by the model. The second will be a list\n of the input records.\n\n # Examples\n\n Use the result of this function to feed the model.\n ```python\n batch, records = build_train_batch(records)\n outs = model(*batch)\n ```\n \"\"\"\n batch_images, batch_bboxes, batch_classes = zip(\n *(process_train_record(record) for record in records)\n )\n\n # convert to tensors\n batch_images = torch.stack(batch_images)\n batch_bboxes = [tensor(bboxes, dtype=torch.float32) for bboxes in batch_bboxes]\n batch_classes = [tensor(classes, dtype=torch.float32) for classes in batch_classes]\n\n # convert to EffDet interface\n targets = dict(bbox=batch_bboxes, cls=batch_classes)\n\n return (batch_images, targets), records\n\n\ndef build_valid_batch(records):\n \"\"\"Builds a batch in the format required by the model when validating.\n\n # Arguments\n records: A `Sequence` of records.\n\n # Returns\n A tuple with two items. The first will be a tuple like `(images, targets)`,\n in the input format required by the model. The second will be a list\n of the input records.\n\n # Examples\n\n Use the result of this function to feed the model.\n ```python\n batch, records = build_valid_batch(records)\n outs = model(*batch)\n ```\n \"\"\"\n (batch_images, targets), records = build_train_batch(records)\n\n # convert to EffDet interface, when not training, dummy size and scale is required\n targets = dict(img_size=None, img_scale=None, **targets)\n\n return (batch_images, targets), records\n\n\ndef build_infer_batch(records):\n \"\"\"Builds a batch in the format required by the model when doing inference.\n\n # Arguments\n records: A `Sequence` of records.\n\n # Returns\n A tuple with two items. The first will be a tuple like `(images, targets)`,\n in the input format required by the model. 
The second will be a list\n of the input records.\n Use the result of this function to feed the model.\n ```python\n batch, records = build_infer_batch(records)\n outs = model(*batch)\n ```\n \"\"\"\n batch_images, batch_sizes, batch_scales = zip(\n *(process_infer_record(record) for record in records)\n )\n\n # convert to tensors\n batch_images = torch.stack(batch_images)\n batch_sizes = tensor(batch_sizes, dtype=torch.float32)\n batch_scales = tensor(batch_scales, dtype=torch.float32)\n\n # convert to EffDet interface\n targets = dict(img_size=batch_sizes, img_scale=batch_scales)\n\n return (batch_images, targets), records\n\n\ndef process_train_record(record) -> tuple:\n \"\"\"Extracts information from record and prepares a format required by the EffDet training\"\"\"\n image = im2tensor(record.img)\n # background and dummy if no label in record\n classes = record.detection.label_ids if record.detection.label_ids else [0]\n bboxes = (\n [bbox.yxyx for bbox in record.detection.bboxes]\n if len(record.detection.label_ids) > 0\n else [[0, 0, 0, 0]]\n )\n return image, bboxes, classes\n\n\ndef process_infer_record(record) -> tuple:\n \"\"\"Extracts information from record and prepares a format required by the EffDet inference\"\"\"\n image = im2tensor(record.img)\n image_size = image.shape[-2:]\n image_scale = 1.0\n\n return image, image_size, image_scale\n", "path": "icevision/models/ross/efficientdet/dataloaders.py"}], "after_files": [{"content": "__all__ = [\n \"build_train_batch\",\n \"build_valid_batch\",\n \"build_infer_batch\",\n \"train_dl\",\n \"valid_dl\",\n \"infer_dl\",\n]\n\nfrom icevision.imports import *\nfrom icevision.models.utils import *\n\n\ndef train_dl(dataset, batch_tfms=None, **dataloader_kwargs) -> DataLoader:\n \"\"\"A `DataLoader` with a custom `collate_fn` that batches items as required for training the model.\n\n # Arguments\n dataset: Possibly a `Dataset` object, but more generally, any `Sequence` that returns records.\n batch_tfms: Transforms to be applied at the batch level.\n **dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch `DataLoader`.\n The parameter `collate_fn` is already defined internally and cannot be passed here.\n\n # Returns\n A Pytorch `DataLoader`.\n \"\"\"\n return transform_dl(\n dataset=dataset,\n build_batch=build_train_batch,\n batch_tfms=batch_tfms,\n **dataloader_kwargs\n )\n\n\ndef valid_dl(dataset, batch_tfms=None, **dataloader_kwargs) -> DataLoader:\n \"\"\"A `DataLoader` with a custom `collate_fn` that batches items as required for validating the model.\n\n # Arguments\n dataset: Possibly a `Dataset` object, but more generally, any `Sequence` that returns records.\n batch_tfms: Transforms to be applied at the batch level.\n **dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch `DataLoader`.\n The parameter `collate_fn` is already defined internally and cannot be passed here.\n\n # Returns\n A Pytorch `DataLoader`.\n \"\"\"\n return transform_dl(\n dataset=dataset,\n build_batch=build_valid_batch,\n batch_tfms=batch_tfms,\n **dataloader_kwargs\n )\n\n\ndef infer_dl(dataset, batch_tfms=None, **dataloader_kwargs) -> DataLoader:\n \"\"\"A `DataLoader` with a custom `collate_fn` that batches items as required for inferring the model.\n\n # Arguments\n dataset: Possibly a `Dataset` object, but more generally, any `Sequence` that returns records.\n batch_tfms: Transforms to be applied at the batch level.\n **dataloader_kwargs: Keyword arguments that will be internally passed to a 
Pytorch `DataLoader`.\n The parameter `collate_fn` is already defined internally and cannot be passed here.\n\n # Returns\n A Pytorch `DataLoader`.\n \"\"\"\n return transform_dl(\n dataset=dataset,\n build_batch=build_infer_batch,\n batch_tfms=batch_tfms,\n **dataloader_kwargs\n )\n\n\ndef build_train_batch(records):\n \"\"\"Builds a batch in the format required by the model when training.\n\n # Arguments\n records: A `Sequence` of records.\n\n # Returns\n A tuple with two items. The first will be a tuple like `(images, targets)`,\n in the input format required by the model. The second will be a list\n of the input records.\n\n # Examples\n\n Use the result of this function to feed the model.\n ```python\n batch, records = build_train_batch(records)\n outs = model(*batch)\n ```\n \"\"\"\n batch_images, batch_bboxes, batch_classes = zip(\n *(process_train_record(record) for record in records)\n )\n\n # convert to tensors\n batch_images = torch.stack(batch_images)\n batch_bboxes = [tensor(bboxes, dtype=torch.float32) for bboxes in batch_bboxes]\n batch_classes = [tensor(classes, dtype=torch.float32) for classes in batch_classes]\n\n # convert to EffDet interface\n targets = dict(bbox=batch_bboxes, cls=batch_classes)\n\n return (batch_images, targets), records\n\n\ndef build_valid_batch(records):\n \"\"\"Builds a batch in the format required by the model when validating.\n\n # Arguments\n records: A `Sequence` of records.\n\n # Returns\n A tuple with two items. The first will be a tuple like `(images, targets)`,\n in the input format required by the model. The second will be a list\n of the input records.\n\n # Examples\n\n Use the result of this function to feed the model.\n ```python\n batch, records = build_valid_batch(records)\n outs = model(*batch)\n ```\n \"\"\"\n (batch_images, targets), records = build_train_batch(records)\n\n # convert to EffDet interface, when not training, dummy size and scale is required\n targets = dict(img_size=None, img_scale=None, **targets)\n\n return (batch_images, targets), records\n\n\ndef build_infer_batch(records):\n \"\"\"Builds a batch in the format required by the model when doing inference.\n\n # Arguments\n records: A `Sequence` of records.\n\n # Returns\n A tuple with two items. The first will be a tuple like `(images, targets)`,\n in the input format required by the model. 
The second will be a list\n of the input records.\n Use the result of this function to feed the model.\n ```python\n batch, records = build_infer_batch(records)\n outs = model(*batch)\n ```\n \"\"\"\n batch_images, batch_sizes, batch_scales = zip(\n *(process_infer_record(record) for record in records)\n )\n\n # convert to tensors\n batch_images = torch.stack(batch_images)\n batch_sizes = tensor(batch_sizes, dtype=torch.float32)\n batch_scales = tensor(batch_scales, dtype=torch.float32)\n\n # convert to EffDet interface\n targets = dict(img_size=batch_sizes, img_scale=batch_scales)\n\n return (batch_images, targets), records\n\n\ndef process_train_record(record) -> tuple:\n \"\"\"Extracts information from record and prepares a format required by the EffDet training\"\"\"\n image = im2tensor(record.img)\n # background and dummy if no label in record\n classes = record.detection.label_ids if record.detection.label_ids else [0]\n bboxes = (\n [bbox.yxyx for bbox in record.detection.bboxes]\n if len(record.detection.label_ids) > 0\n else [[0, 0, 0, 0]]\n )\n return image, bboxes, classes\n\n\ndef process_infer_record(record) -> tuple:\n \"\"\"Extracts information from record and prepares a format required by the EffDet inference\"\"\"\n image = im2tensor(record.img)\n n_channels, image_height, image_width = image.shape\n image_scale = 1.0\n # EffDet expects image size to be passed in W, H notation\n return image, (image_width, image_height), image_scale\n", "path": "icevision/models/ross/efficientdet/dataloaders.py"}]} | 2,455 | 196 |
gh_patches_debug_34246 | rasdani/github-patches | git_diff | uccser__cs-unplugged-318 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support multiple page resources
Currently the create image function for a resource returns a single image. Instead it should return a list of images, which would allow multiple page resources.
For example, for 4 pages of a single page resource the content would be:
```
Image output: [A]
Final document: A, A, A, A
```
For 4 pages of a three page resource the content would be:
```
Image output: [A, B, C], [A, B, C], [A, B, C], [A, B, C]
Final document: A, B, C, A, B, C, A, B, C, A, B, C
```
--- END ISSUE ---
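A minimal sketch of the layout described above, assuming each copy of a resource yields its full list of page images and the final document simply concatenates those lists (the function names are illustrative, not the project's API):

```python
def pages_for_copy(resource_pages):
    """One copy of the resource contributes every page image, in order."""
    return list(resource_pages)


def build_document(resource_pages, copies):
    """Concatenate the page lists of all copies into the final document."""
    document = []
    for _ in range(copies):
        document.extend(pages_for_copy(resource_pages))
    return document


# Single-page resource, 4 copies: A, A, A, A
assert build_document(["A"], 4) == ["A", "A", "A", "A"]

# Three-page resource, 4 copies: A, B, C, A, B, C, A, B, C, A, B, C
assert build_document(["A", "B", "C"], 4) == ["A", "B", "C"] * 4
```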
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/resources/views/generate_resource_pdf.py`
Content:
```
1 """Module for generating custom resource PDFs."""
2
3 from django.http import HttpResponse
4 from django.template.loader import render_to_string
5 from django.contrib.staticfiles import finders
6 from django.conf import settings
7 from PIL import Image
8 from io import BytesIO
9 import importlib
10 import base64
11
12 RESPONSE_CONTENT_DISPOSITION = 'attachment; filename="{filename}.pdf"'
13 MM_TO_PIXEL_RATIO = 3.78
14
15
16 def generate_resource_pdf(request, resource, module_path):
17 """Return a response containing a generated PDF resource.
18
19 Args:
20 request: HTTP request object
21 resource: Object of resource data.
22 module_path: Path to module for generating resource.
23
24 Returns:
25 HTTP response containing generated resource PDF.
26 """
27 # TODO: Weasyprint handling in production
28 import environ
29 env = environ.Env(
30 DJANGO_PRODUCTION=(bool),
31 )
32 if env("DJANGO_PRODUCTION"):
33 return HttpResponse("<html><body>PDF generation is currently not supported in production.</body></html>")
34 else:
35 from weasyprint import HTML, CSS
36 context = dict()
37 get_request = request.GET
38 context["paper_size"] = get_request["paper_size"]
39 context["resource"] = resource
40 context["header_text"] = get_request["header_text"]
41
42 resource_image_generator = importlib.import_module(module_path)
43 filename = "{} ({})".format(resource.name, resource_image_generator.subtitle(get_request, resource))
44 context["filename"] = filename
45
46 num_copies = range(0, int(get_request["copies"]))
47 context["resource_images"] = []
48 for copy in num_copies:
49 context["resource_images"].append(
50 generate_resource_image(get_request, resource, module_path)
51 )
52
53 pdf_html = render_to_string("resources/base-resource-pdf.html", context)
54 html = HTML(string=pdf_html, base_url=settings.STATIC_ROOT)
55 css_file = finders.find("css/print-resource-pdf.css")
56 css_string = open(css_file, encoding="UTF-8").read()
57 base_css = CSS(string=css_string)
58 pdf_file = html.write_pdf(stylesheets=[base_css])
59
60 response = HttpResponse(pdf_file, content_type="application/pdf")
61 response["Content-Disposition"] = RESPONSE_CONTENT_DISPOSITION.format(filename=filename)
62 return response
63
64
65 def generate_resource_image(get_request, resource, module_path):
66 """Retrieve image from resource generator and resize to size.
67
68 Args:
69 get_request: HTTP request object
70 resource: Object of resource data.
71 module_path: Path to module for generating resource.
72
73 Returns:
74 Base64 string of a generated resource image.
75 """
76 # Get image from resource image creator
77 resource_image_generator = importlib.import_module(module_path)
78 image = resource_image_generator.resource_image(get_request, resource)
79
80 # Resize image to reduce file size
81 if get_request["paper_size"] == "a4":
82 max_pixel_height = 267 * MM_TO_PIXEL_RATIO
83 elif get_request["paper_size"] == "letter":
84 max_pixel_height = 249 * MM_TO_PIXEL_RATIO
85 (width, height) = image.size
86 if height > max_pixel_height:
87 ratio = max_pixel_height / height
88 width *= ratio
89 height *= ratio
90 image = image.resize((int(width), int(height)), Image.ANTIALIAS)
91
92 # Save image to buffer
93 image_buffer = BytesIO()
94 image.save(image_buffer, format="PNG")
95
96 # Return base64 of image
97 return base64.b64encode(image_buffer.getvalue())
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/csunplugged/resources/views/generate_resource_pdf.py b/csunplugged/resources/views/generate_resource_pdf.py
--- a/csunplugged/resources/views/generate_resource_pdf.py
+++ b/csunplugged/resources/views/generate_resource_pdf.py
@@ -63,7 +63,9 @@
def generate_resource_image(get_request, resource, module_path):
- """Retrieve image from resource generator and resize to size.
+ """Retrieve image(s) for one copy of resource from resource generator.
+
+ Images are resized to size.
Args:
get_request: HTTP request object
@@ -71,27 +73,33 @@
module_path: Path to module for generating resource.
Returns:
- Base64 string of a generated resource image.
+ List of Base64 strings of a generated resource images for one copy.
"""
- # Get image from resource image creator
+ # Get images from resource image creator
resource_image_generator = importlib.import_module(module_path)
- image = resource_image_generator.resource_image(get_request, resource)
+ raw_images = resource_image_generator.resource_image(get_request, resource)
+ if not isinstance(raw_images, list):
+ raw_images = [raw_images]
- # Resize image to reduce file size
+ # Resize images to reduce file size
if get_request["paper_size"] == "a4":
max_pixel_height = 267 * MM_TO_PIXEL_RATIO
elif get_request["paper_size"] == "letter":
max_pixel_height = 249 * MM_TO_PIXEL_RATIO
- (width, height) = image.size
- if height > max_pixel_height:
- ratio = max_pixel_height / height
- width *= ratio
- height *= ratio
- image = image.resize((int(width), int(height)), Image.ANTIALIAS)
-
- # Save image to buffer
- image_buffer = BytesIO()
- image.save(image_buffer, format="PNG")
-
- # Return base64 of image
- return base64.b64encode(image_buffer.getvalue())
+
+ images = []
+ for image in raw_images:
+ (width, height) = image.size
+ if height > max_pixel_height:
+ ratio = max_pixel_height / height
+ width *= ratio
+ height *= ratio
+ image = image.resize((int(width), int(height)), Image.ANTIALIAS)
+
+ # Save image to buffer
+ image_buffer = BytesIO()
+ image.save(image_buffer, format="PNG")
+ # Add base64 of image to list of images
+ images.append(base64.b64encode(image_buffer.getvalue()))
+
+ return images
| {"golden_diff": "diff --git a/csunplugged/resources/views/generate_resource_pdf.py b/csunplugged/resources/views/generate_resource_pdf.py\n--- a/csunplugged/resources/views/generate_resource_pdf.py\n+++ b/csunplugged/resources/views/generate_resource_pdf.py\n@@ -63,7 +63,9 @@\n \n \n def generate_resource_image(get_request, resource, module_path):\n- \"\"\"Retrieve image from resource generator and resize to size.\n+ \"\"\"Retrieve image(s) for one copy of resource from resource generator.\n+\n+ Images are resized to size.\n \n Args:\n get_request: HTTP request object\n@@ -71,27 +73,33 @@\n module_path: Path to module for generating resource.\n \n Returns:\n- Base64 string of a generated resource image.\n+ List of Base64 strings of a generated resource images for one copy.\n \"\"\"\n- # Get image from resource image creator\n+ # Get images from resource image creator\n resource_image_generator = importlib.import_module(module_path)\n- image = resource_image_generator.resource_image(get_request, resource)\n+ raw_images = resource_image_generator.resource_image(get_request, resource)\n+ if not isinstance(raw_images, list):\n+ raw_images = [raw_images]\n \n- # Resize image to reduce file size\n+ # Resize images to reduce file size\n if get_request[\"paper_size\"] == \"a4\":\n max_pixel_height = 267 * MM_TO_PIXEL_RATIO\n elif get_request[\"paper_size\"] == \"letter\":\n max_pixel_height = 249 * MM_TO_PIXEL_RATIO\n- (width, height) = image.size\n- if height > max_pixel_height:\n- ratio = max_pixel_height / height\n- width *= ratio\n- height *= ratio\n- image = image.resize((int(width), int(height)), Image.ANTIALIAS)\n-\n- # Save image to buffer\n- image_buffer = BytesIO()\n- image.save(image_buffer, format=\"PNG\")\n-\n- # Return base64 of image\n- return base64.b64encode(image_buffer.getvalue())\n+\n+ images = []\n+ for image in raw_images:\n+ (width, height) = image.size\n+ if height > max_pixel_height:\n+ ratio = max_pixel_height / height\n+ width *= ratio\n+ height *= ratio\n+ image = image.resize((int(width), int(height)), Image.ANTIALIAS)\n+\n+ # Save image to buffer\n+ image_buffer = BytesIO()\n+ image.save(image_buffer, format=\"PNG\")\n+ # Add base64 of image to list of images\n+ images.append(base64.b64encode(image_buffer.getvalue()))\n+\n+ return images\n", "issue": "Support multiple page resources\nCurrently the create image function for a resource return a single image. 
Instead it should return a list of images, which would allow multiple page resources.\r\n\r\nFor example, for 4 pages of a single page resource the content would be:\r\n\r\n```\r\nImage output: [A]\r\nFinal document: A, A, A, A\r\n```\r\n\r\nFor 4 pages of a three page resource the content would be:\r\n\r\n```\r\nImage output: [A, B, C], [A, B, C], [A, B, C], [A, B, C] \r\nFinal document: A, B, C, A, B, C, A, B, C, A, B, C\r\n```\n", "before_files": [{"content": "\"\"\"Module for generating custom resource PDFs.\"\"\"\n\nfrom django.http import HttpResponse\nfrom django.template.loader import render_to_string\nfrom django.contrib.staticfiles import finders\nfrom django.conf import settings\nfrom PIL import Image\nfrom io import BytesIO\nimport importlib\nimport base64\n\nRESPONSE_CONTENT_DISPOSITION = 'attachment; filename=\"{filename}.pdf\"'\nMM_TO_PIXEL_RATIO = 3.78\n\n\ndef generate_resource_pdf(request, resource, module_path):\n \"\"\"Return a response containing a generated PDF resource.\n\n Args:\n request: HTTP request object\n resource: Object of resource data.\n module_path: Path to module for generating resource.\n\n Returns:\n HTTP response containing generated resource PDF.\n \"\"\"\n # TODO: Weasyprint handling in production\n import environ\n env = environ.Env(\n DJANGO_PRODUCTION=(bool),\n )\n if env(\"DJANGO_PRODUCTION\"):\n return HttpResponse(\"<html><body>PDF generation is currently not supported in production.</body></html>\")\n else:\n from weasyprint import HTML, CSS\n context = dict()\n get_request = request.GET\n context[\"paper_size\"] = get_request[\"paper_size\"]\n context[\"resource\"] = resource\n context[\"header_text\"] = get_request[\"header_text\"]\n\n resource_image_generator = importlib.import_module(module_path)\n filename = \"{} ({})\".format(resource.name, resource_image_generator.subtitle(get_request, resource))\n context[\"filename\"] = filename\n\n num_copies = range(0, int(get_request[\"copies\"]))\n context[\"resource_images\"] = []\n for copy in num_copies:\n context[\"resource_images\"].append(\n generate_resource_image(get_request, resource, module_path)\n )\n\n pdf_html = render_to_string(\"resources/base-resource-pdf.html\", context)\n html = HTML(string=pdf_html, base_url=settings.STATIC_ROOT)\n css_file = finders.find(\"css/print-resource-pdf.css\")\n css_string = open(css_file, encoding=\"UTF-8\").read()\n base_css = CSS(string=css_string)\n pdf_file = html.write_pdf(stylesheets=[base_css])\n\n response = HttpResponse(pdf_file, content_type=\"application/pdf\")\n response[\"Content-Disposition\"] = RESPONSE_CONTENT_DISPOSITION.format(filename=filename)\n return response\n\n\ndef generate_resource_image(get_request, resource, module_path):\n \"\"\"Retrieve image from resource generator and resize to size.\n\n Args:\n get_request: HTTP request object\n resource: Object of resource data.\n module_path: Path to module for generating resource.\n\n Returns:\n Base64 string of a generated resource image.\n \"\"\"\n # Get image from resource image creator\n resource_image_generator = importlib.import_module(module_path)\n image = resource_image_generator.resource_image(get_request, resource)\n\n # Resize image to reduce file size\n if get_request[\"paper_size\"] == \"a4\":\n max_pixel_height = 267 * MM_TO_PIXEL_RATIO\n elif get_request[\"paper_size\"] == \"letter\":\n max_pixel_height = 249 * MM_TO_PIXEL_RATIO\n (width, height) = image.size\n if height > max_pixel_height:\n ratio = max_pixel_height / height\n width *= ratio\n height *= ratio\n 
image = image.resize((int(width), int(height)), Image.ANTIALIAS)\n\n # Save image to buffer\n image_buffer = BytesIO()\n image.save(image_buffer, format=\"PNG\")\n\n # Return base64 of image\n return base64.b64encode(image_buffer.getvalue())\n", "path": "csunplugged/resources/views/generate_resource_pdf.py"}], "after_files": [{"content": "\"\"\"Module for generating custom resource PDFs.\"\"\"\n\nfrom django.http import HttpResponse\nfrom django.template.loader import render_to_string\nfrom django.contrib.staticfiles import finders\nfrom django.conf import settings\nfrom PIL import Image\nfrom io import BytesIO\nimport importlib\nimport base64\n\nRESPONSE_CONTENT_DISPOSITION = 'attachment; filename=\"{filename}.pdf\"'\nMM_TO_PIXEL_RATIO = 3.78\n\n\ndef generate_resource_pdf(request, resource, module_path):\n \"\"\"Return a response containing a generated PDF resource.\n\n Args:\n request: HTTP request object\n resource: Object of resource data.\n module_path: Path to module for generating resource.\n\n Returns:\n HTTP response containing generated resource PDF.\n \"\"\"\n # TODO: Weasyprint handling in production\n import environ\n env = environ.Env(\n DJANGO_PRODUCTION=(bool),\n )\n if env(\"DJANGO_PRODUCTION\"):\n return HttpResponse(\"<html><body>PDF generation is currently not supported in production.</body></html>\")\n else:\n from weasyprint import HTML, CSS\n context = dict()\n get_request = request.GET\n context[\"paper_size\"] = get_request[\"paper_size\"]\n context[\"resource\"] = resource\n context[\"header_text\"] = get_request[\"header_text\"]\n\n resource_image_generator = importlib.import_module(module_path)\n filename = \"{} ({})\".format(resource.name, resource_image_generator.subtitle(get_request, resource))\n context[\"filename\"] = filename\n\n num_copies = range(0, int(get_request[\"copies\"]))\n context[\"resource_images\"] = []\n for copy in num_copies:\n context[\"resource_images\"].append(\n generate_resource_image(get_request, resource, module_path)\n )\n\n pdf_html = render_to_string(\"resources/base-resource-pdf.html\", context)\n html = HTML(string=pdf_html, base_url=settings.STATIC_ROOT)\n css_file = finders.find(\"css/print-resource-pdf.css\")\n css_string = open(css_file, encoding=\"UTF-8\").read()\n base_css = CSS(string=css_string)\n pdf_file = html.write_pdf(stylesheets=[base_css])\n\n response = HttpResponse(pdf_file, content_type=\"application/pdf\")\n response[\"Content-Disposition\"] = RESPONSE_CONTENT_DISPOSITION.format(filename=filename)\n return response\n\n\ndef generate_resource_image(get_request, resource, module_path):\n \"\"\"Retrieve image(s) for one copy of resource from resource generator.\n\n Images are resized to size.\n\n Args:\n get_request: HTTP request object\n resource: Object of resource data.\n module_path: Path to module for generating resource.\n\n Returns:\n List of Base64 strings of a generated resource images for one copy.\n \"\"\"\n # Get images from resource image creator\n resource_image_generator = importlib.import_module(module_path)\n raw_images = resource_image_generator.resource_image(get_request, resource)\n if not isinstance(raw_images, list):\n raw_images = [raw_images]\n\n # Resize images to reduce file size\n if get_request[\"paper_size\"] == \"a4\":\n max_pixel_height = 267 * MM_TO_PIXEL_RATIO\n elif get_request[\"paper_size\"] == \"letter\":\n max_pixel_height = 249 * MM_TO_PIXEL_RATIO\n\n images = []\n for image in raw_images:\n (width, height) = image.size\n if height > max_pixel_height:\n ratio = 
max_pixel_height / height\n width *= ratio\n height *= ratio\n image = image.resize((int(width), int(height)), Image.ANTIALIAS)\n\n # Save image to buffer\n image_buffer = BytesIO()\n image.save(image_buffer, format=\"PNG\")\n # Add base64 of image to list of images\n images.append(base64.b64encode(image_buffer.getvalue()))\n\n return images\n", "path": "csunplugged/resources/views/generate_resource_pdf.py"}]} | 1,366 | 599 |
gh_patches_debug_32375 | rasdani/github-patches | git_diff | getsentry__sentry-python-897 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crash in pure_eval
This happened while we were experiencing a DB outage:
```
File "/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py", line 443, in fetch
return await self._execute(query, args, 0, timeout)
File "/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py", line 1445, in _execute
result, _ = await self.__execute(
File "/server/athenian/api/db.py", line 191, in _asyncpg_execute
result = await self._execute_original(query, args, limit, timeout, return_status)
File "/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py", line 1454, in __execute
return await self._do_execute(query, executor, timeout)
File "/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py", line 1476, in _do_execute
result = await executor(stmt, None)
File "asyncpg/protocol/protocol.pyx", line 196, in bind_execute
return await waiter
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/sentry_sdk/scope.py", line 353, in apply_to_event
new_event = event_processor(event, hint)
File "/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py", line 79, in add_executing_info
pure_eval_frame(tb.tb_frame) or sentry_frame["vars"]
File "/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py", line 128, in pure_eval_frame
expressions.sort(key=closeness, reverse=True)
File "/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py", line 113, in closeness
nodes_before_stmt = [
File "/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py", line 114, in <listcomp>
node for node in nodes if node.first_token.startpos < stmt.last_token.endpos
AttributeError: 'Name' object has no attribute 'first_token'
```
--- END ISSUE ---
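The failure above happens when an AST node lacks the `first_token` attribute that asttokens normally adds. A positional key built from the standard `lineno`/`col_offset` attributes avoids that; the snippet below is an illustrative, standalone sketch rather than sentry-sdk code:

```python
import ast


def node_start(node: ast.expr) -> tuple:
    """Start position from attributes every ast.expr is guaranteed to have."""
    return (node.lineno, node.col_offset)


tree = ast.parse("result = value + other")
names = [node for node in ast.walk(tree) if isinstance(node, ast.Name)]
# Sorting by (lineno, col_offset) works even for nodes that asttokens
# never annotated with first_token/last_token.
names.sort(key=node_start)
print([(node.id, node_start(node)) for node in names])
```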
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/pure_eval.py`
Content:
```
1 from __future__ import absolute_import
2
3 import ast
4
5 from sentry_sdk import Hub, serializer
6 from sentry_sdk._types import MYPY
7 from sentry_sdk.integrations import Integration, DidNotEnable
8 from sentry_sdk.scope import add_global_event_processor
9 from sentry_sdk.utils import walk_exception_chain, iter_stacks
10
11 if MYPY:
12 from typing import Optional, Dict, Any, Tuple, List
13 from types import FrameType
14
15 from sentry_sdk._types import Event, Hint
16
17 try:
18 import executing
19 except ImportError:
20 raise DidNotEnable("executing is not installed")
21
22 try:
23 import pure_eval
24 except ImportError:
25 raise DidNotEnable("pure_eval is not installed")
26
27 try:
28 # Used implicitly, just testing it's available
29 import asttokens # noqa
30 except ImportError:
31 raise DidNotEnable("asttokens is not installed")
32
33
34 class PureEvalIntegration(Integration):
35 identifier = "pure_eval"
36
37 @staticmethod
38 def setup_once():
39 # type: () -> None
40
41 @add_global_event_processor
42 def add_executing_info(event, hint):
43 # type: (Event, Optional[Hint]) -> Optional[Event]
44 if Hub.current.get_integration(PureEvalIntegration) is None:
45 return event
46
47 if hint is None:
48 return event
49
50 exc_info = hint.get("exc_info", None)
51
52 if exc_info is None:
53 return event
54
55 exception = event.get("exception", None)
56
57 if exception is None:
58 return event
59
60 values = exception.get("values", None)
61
62 if values is None:
63 return event
64
65 for exception, (_exc_type, _exc_value, exc_tb) in zip(
66 reversed(values), walk_exception_chain(exc_info)
67 ):
68 sentry_frames = [
69 frame
70 for frame in exception.get("stacktrace", {}).get("frames", [])
71 if frame.get("function")
72 ]
73 tbs = list(iter_stacks(exc_tb))
74 if len(sentry_frames) != len(tbs):
75 continue
76
77 for sentry_frame, tb in zip(sentry_frames, tbs):
78 sentry_frame["vars"] = (
79 pure_eval_frame(tb.tb_frame) or sentry_frame["vars"]
80 )
81 return event
82
83
84 def pure_eval_frame(frame):
85 # type: (FrameType) -> Dict[str, Any]
86 source = executing.Source.for_frame(frame)
87 if not source.tree:
88 return {}
89
90 statements = source.statements_at_line(frame.f_lineno)
91 if not statements:
92 return {}
93
94 scope = stmt = list(statements)[0]
95 while True:
96 # Get the parent first in case the original statement is already
97 # a function definition, e.g. if we're calling a decorator
98 # In that case we still want the surrounding scope, not that function
99 scope = scope.parent
100 if isinstance(scope, (ast.FunctionDef, ast.ClassDef, ast.Module)):
101 break
102
103 evaluator = pure_eval.Evaluator.from_frame(frame)
104 expressions = evaluator.interesting_expressions_grouped(scope)
105
106 def closeness(expression):
107 # type: (Tuple[List[Any], Any]) -> int
108 # Prioritise expressions with a node closer to the statement executed
109 # without being after that statement
110 # A higher return value is better - the expression will appear
111 # earlier in the list of values and is less likely to be trimmed
112 nodes, _value = expression
113 nodes_before_stmt = [
114 node for node in nodes if node.first_token.startpos < stmt.last_token.endpos
115 ]
116 if nodes_before_stmt:
117 # The position of the last node before or in the statement
118 return max(node.first_token.startpos for node in nodes_before_stmt)
119 else:
120 # The position of the first node after the statement
121 # Negative means it's always lower priority than nodes that come before
122 # Less negative means closer to the statement and higher priority
123 return -min(node.first_token.startpos for node in nodes)
124
125 # This adds the first_token and last_token attributes to nodes
126 atok = source.asttokens()
127
128 expressions.sort(key=closeness, reverse=True)
129 return {
130 atok.get_text(nodes[0]): value
131 for nodes, value in expressions[: serializer.MAX_DATABAG_BREADTH]
132 }
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/integrations/pure_eval.py b/sentry_sdk/integrations/pure_eval.py
--- a/sentry_sdk/integrations/pure_eval.py
+++ b/sentry_sdk/integrations/pure_eval.py
@@ -104,23 +104,29 @@
expressions = evaluator.interesting_expressions_grouped(scope)
def closeness(expression):
- # type: (Tuple[List[Any], Any]) -> int
+ # type: (Tuple[List[Any], Any]) -> Tuple[int, int]
# Prioritise expressions with a node closer to the statement executed
# without being after that statement
# A higher return value is better - the expression will appear
# earlier in the list of values and is less likely to be trimmed
nodes, _value = expression
+
+ def start(n):
+ # type: (ast.expr) -> Tuple[int, int]
+ return (n.lineno, n.col_offset)
+
nodes_before_stmt = [
- node for node in nodes if node.first_token.startpos < stmt.last_token.endpos
+ node for node in nodes if start(node) < stmt.last_token.end
]
if nodes_before_stmt:
# The position of the last node before or in the statement
- return max(node.first_token.startpos for node in nodes_before_stmt)
+ return max(start(node) for node in nodes_before_stmt)
else:
# The position of the first node after the statement
# Negative means it's always lower priority than nodes that come before
# Less negative means closer to the statement and higher priority
- return -min(node.first_token.startpos for node in nodes)
+ lineno, col_offset = min(start(node) for node in nodes)
+ return (-lineno, -col_offset)
# This adds the first_token and last_token attributes to nodes
atok = source.asttokens()
| {"golden_diff": "diff --git a/sentry_sdk/integrations/pure_eval.py b/sentry_sdk/integrations/pure_eval.py\n--- a/sentry_sdk/integrations/pure_eval.py\n+++ b/sentry_sdk/integrations/pure_eval.py\n@@ -104,23 +104,29 @@\n expressions = evaluator.interesting_expressions_grouped(scope)\n \n def closeness(expression):\n- # type: (Tuple[List[Any], Any]) -> int\n+ # type: (Tuple[List[Any], Any]) -> Tuple[int, int]\n # Prioritise expressions with a node closer to the statement executed\n # without being after that statement\n # A higher return value is better - the expression will appear\n # earlier in the list of values and is less likely to be trimmed\n nodes, _value = expression\n+\n+ def start(n):\n+ # type: (ast.expr) -> Tuple[int, int]\n+ return (n.lineno, n.col_offset)\n+\n nodes_before_stmt = [\n- node for node in nodes if node.first_token.startpos < stmt.last_token.endpos\n+ node for node in nodes if start(node) < stmt.last_token.end\n ]\n if nodes_before_stmt:\n # The position of the last node before or in the statement\n- return max(node.first_token.startpos for node in nodes_before_stmt)\n+ return max(start(node) for node in nodes_before_stmt)\n else:\n # The position of the first node after the statement\n # Negative means it's always lower priority than nodes that come before\n # Less negative means closer to the statement and higher priority\n- return -min(node.first_token.startpos for node in nodes)\n+ lineno, col_offset = min(start(node) for node in nodes)\n+ return (-lineno, -col_offset)\n \n # This adds the first_token and last_token attributes to nodes\n atok = source.asttokens()\n", "issue": "Crash in pure_eval\nThis happened while we were experiencing a DB outage:\r\n```\r\n File \"/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py\", line 443, in fetch\r\n return await self._execute(query, args, 0, timeout)\r\n File \"/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py\", line 1445, in _execute\r\n result, _ = await self.__execute(\r\n File \"/server/athenian/api/db.py\", line 191, in _asyncpg_execute\r\n result = await self._execute_original(query, args, limit, timeout, return_status)\r\n File \"/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py\", line 1454, in __execute\r\n return await self._do_execute(query, executor, timeout)\r\n File \"/usr/local/lib/python3.8/dist-packages/asyncpg/connection.py\", line 1476, in _do_execute\r\n result = await executor(stmt, None)\r\n File \"asyncpg/protocol/protocol.pyx\", line 196, in bind_execute\r\n return await waiter\r\nasyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/sentry_sdk/scope.py\", line 353, in apply_to_event\r\n new_event = event_processor(event, hint)\r\n File \"/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py\", line 79, in add_executing_info\r\n pure_eval_frame(tb.tb_frame) or sentry_frame[\"vars\"]\r\n File \"/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py\", line 128, in pure_eval_frame\r\n expressions.sort(key=closeness, reverse=True)\r\n File \"/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py\", line 113, in closeness\r\n nodes_before_stmt = [\r\n File \"/usr/local/lib/python3.8/dist-packages/sentry_sdk/integrations/pure_eval.py\", line 114, in <listcomp>\r\n node for node in nodes if 
node.first_token.startpos < stmt.last_token.endpos\r\nAttributeError: 'Name' object has no attribute 'first_token'\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport ast\n\nfrom sentry_sdk import Hub, serializer\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk.scope import add_global_event_processor\nfrom sentry_sdk.utils import walk_exception_chain, iter_stacks\n\nif MYPY:\n from typing import Optional, Dict, Any, Tuple, List\n from types import FrameType\n\n from sentry_sdk._types import Event, Hint\n\ntry:\n import executing\nexcept ImportError:\n raise DidNotEnable(\"executing is not installed\")\n\ntry:\n import pure_eval\nexcept ImportError:\n raise DidNotEnable(\"pure_eval is not installed\")\n\ntry:\n # Used implicitly, just testing it's available\n import asttokens # noqa\nexcept ImportError:\n raise DidNotEnable(\"asttokens is not installed\")\n\n\nclass PureEvalIntegration(Integration):\n identifier = \"pure_eval\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n\n @add_global_event_processor\n def add_executing_info(event, hint):\n # type: (Event, Optional[Hint]) -> Optional[Event]\n if Hub.current.get_integration(PureEvalIntegration) is None:\n return event\n\n if hint is None:\n return event\n\n exc_info = hint.get(\"exc_info\", None)\n\n if exc_info is None:\n return event\n\n exception = event.get(\"exception\", None)\n\n if exception is None:\n return event\n\n values = exception.get(\"values\", None)\n\n if values is None:\n return event\n\n for exception, (_exc_type, _exc_value, exc_tb) in zip(\n reversed(values), walk_exception_chain(exc_info)\n ):\n sentry_frames = [\n frame\n for frame in exception.get(\"stacktrace\", {}).get(\"frames\", [])\n if frame.get(\"function\")\n ]\n tbs = list(iter_stacks(exc_tb))\n if len(sentry_frames) != len(tbs):\n continue\n\n for sentry_frame, tb in zip(sentry_frames, tbs):\n sentry_frame[\"vars\"] = (\n pure_eval_frame(tb.tb_frame) or sentry_frame[\"vars\"]\n )\n return event\n\n\ndef pure_eval_frame(frame):\n # type: (FrameType) -> Dict[str, Any]\n source = executing.Source.for_frame(frame)\n if not source.tree:\n return {}\n\n statements = source.statements_at_line(frame.f_lineno)\n if not statements:\n return {}\n\n scope = stmt = list(statements)[0]\n while True:\n # Get the parent first in case the original statement is already\n # a function definition, e.g. 
if we're calling a decorator\n # In that case we still want the surrounding scope, not that function\n scope = scope.parent\n if isinstance(scope, (ast.FunctionDef, ast.ClassDef, ast.Module)):\n break\n\n evaluator = pure_eval.Evaluator.from_frame(frame)\n expressions = evaluator.interesting_expressions_grouped(scope)\n\n def closeness(expression):\n # type: (Tuple[List[Any], Any]) -> int\n # Prioritise expressions with a node closer to the statement executed\n # without being after that statement\n # A higher return value is better - the expression will appear\n # earlier in the list of values and is less likely to be trimmed\n nodes, _value = expression\n nodes_before_stmt = [\n node for node in nodes if node.first_token.startpos < stmt.last_token.endpos\n ]\n if nodes_before_stmt:\n # The position of the last node before or in the statement\n return max(node.first_token.startpos for node in nodes_before_stmt)\n else:\n # The position of the first node after the statement\n # Negative means it's always lower priority than nodes that come before\n # Less negative means closer to the statement and higher priority\n return -min(node.first_token.startpos for node in nodes)\n\n # This adds the first_token and last_token attributes to nodes\n atok = source.asttokens()\n\n expressions.sort(key=closeness, reverse=True)\n return {\n atok.get_text(nodes[0]): value\n for nodes, value in expressions[: serializer.MAX_DATABAG_BREADTH]\n }\n", "path": "sentry_sdk/integrations/pure_eval.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport ast\n\nfrom sentry_sdk import Hub, serializer\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk.scope import add_global_event_processor\nfrom sentry_sdk.utils import walk_exception_chain, iter_stacks\n\nif MYPY:\n from typing import Optional, Dict, Any, Tuple, List\n from types import FrameType\n\n from sentry_sdk._types import Event, Hint\n\ntry:\n import executing\nexcept ImportError:\n raise DidNotEnable(\"executing is not installed\")\n\ntry:\n import pure_eval\nexcept ImportError:\n raise DidNotEnable(\"pure_eval is not installed\")\n\ntry:\n # Used implicitly, just testing it's available\n import asttokens # noqa\nexcept ImportError:\n raise DidNotEnable(\"asttokens is not installed\")\n\n\nclass PureEvalIntegration(Integration):\n identifier = \"pure_eval\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n\n @add_global_event_processor\n def add_executing_info(event, hint):\n # type: (Event, Optional[Hint]) -> Optional[Event]\n if Hub.current.get_integration(PureEvalIntegration) is None:\n return event\n\n if hint is None:\n return event\n\n exc_info = hint.get(\"exc_info\", None)\n\n if exc_info is None:\n return event\n\n exception = event.get(\"exception\", None)\n\n if exception is None:\n return event\n\n values = exception.get(\"values\", None)\n\n if values is None:\n return event\n\n for exception, (_exc_type, _exc_value, exc_tb) in zip(\n reversed(values), walk_exception_chain(exc_info)\n ):\n sentry_frames = [\n frame\n for frame in exception.get(\"stacktrace\", {}).get(\"frames\", [])\n if frame.get(\"function\")\n ]\n tbs = list(iter_stacks(exc_tb))\n if len(sentry_frames) != len(tbs):\n continue\n\n for sentry_frame, tb in zip(sentry_frames, tbs):\n sentry_frame[\"vars\"] = (\n pure_eval_frame(tb.tb_frame) or sentry_frame[\"vars\"]\n )\n return event\n\n\ndef pure_eval_frame(frame):\n # type: (FrameType) -> Dict[str, Any]\n source = 
executing.Source.for_frame(frame)\n if not source.tree:\n return {}\n\n statements = source.statements_at_line(frame.f_lineno)\n if not statements:\n return {}\n\n scope = stmt = list(statements)[0]\n while True:\n # Get the parent first in case the original statement is already\n # a function definition, e.g. if we're calling a decorator\n # In that case we still want the surrounding scope, not that function\n scope = scope.parent\n if isinstance(scope, (ast.FunctionDef, ast.ClassDef, ast.Module)):\n break\n\n evaluator = pure_eval.Evaluator.from_frame(frame)\n expressions = evaluator.interesting_expressions_grouped(scope)\n\n def closeness(expression):\n # type: (Tuple[List[Any], Any]) -> Tuple[int, int]\n # Prioritise expressions with a node closer to the statement executed\n # without being after that statement\n # A higher return value is better - the expression will appear\n # earlier in the list of values and is less likely to be trimmed\n nodes, _value = expression\n\n def start(n):\n # type: (ast.expr) -> Tuple[int, int]\n return (n.lineno, n.col_offset)\n\n nodes_before_stmt = [\n node for node in nodes if start(node) < stmt.last_token.end\n ]\n if nodes_before_stmt:\n # The position of the last node before or in the statement\n return max(start(node) for node in nodes_before_stmt)\n else:\n # The position of the first node after the statement\n # Negative means it's always lower priority than nodes that come before\n # Less negative means closer to the statement and higher priority\n lineno, col_offset = min(start(node) for node in nodes)\n return (-lineno, -col_offset)\n\n # This adds the first_token and last_token attributes to nodes\n atok = source.asttokens()\n\n expressions.sort(key=closeness, reverse=True)\n return {\n atok.get_text(nodes[0]): value\n for nodes, value in expressions[: serializer.MAX_DATABAG_BREADTH]\n }\n", "path": "sentry_sdk/integrations/pure_eval.py"}]} | 2,023 | 418 |
gh_patches_debug_15867 | rasdani/github-patches | git_diff | dotkom__onlineweb4-1521 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Delete user in Dashboard user edit doesn't perform any action
The delete user button does not actually perform a request.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/authentication/dashboard/views.py`
Content:
```
1 # -*- encoding: utf-8 -*-
2
3 import json
4
5 from django.conf import settings
6 from django.contrib.auth.decorators import login_required
7 from django.contrib.auth.models import Group
8 from django.core.exceptions import PermissionDenied
9 from django.core.paginator import Paginator
10 from django.core.urlresolvers import reverse, reverse_lazy
11 from django.http import HttpResponse
12 from django.shortcuts import get_object_or_404, render
13 from django.views.generic import DeleteView, DetailView, ListView, UpdateView
14 from guardian.decorators import permission_required
15 from watson.views import SearchView
16
17 from apps.authentication.forms import UserUpdateForm
18 from apps.authentication.models import OnlineUser as User
19 from apps.authentication.models import AllowedUsername
20 from apps.dashboard.tools import DashboardPermissionMixin, get_base_context, has_access
21
22
23 @login_required
24 def index(request):
25 """
26 This is the main dashboard view
27 """
28
29 if not has_access(request):
30 raise PermissionDenied
31
32 context = get_base_context(request)
33
34 return render(request, 'auth/dashboard/index.html', context)
35
36
37 # GROUP MODULE VIEWS
38 @login_required
39 @permission_required('authentication.change_onlineuser', return_403=True)
40 def groups_index(request):
41 """
42 Group module in dashboard that lists groups.
43 """
44
45 if not has_access(request):
46 raise PermissionDenied
47
48 context = get_base_context(request)
49
50 context['groups'] = list(Group.objects.all())
51 context['groups'].sort(key=lambda x: str(x).lower())
52
53 return render(request, 'auth/dashboard/groups_index.html', context)
54
55
56 @login_required
57 @permission_required('authentication.change_onlineuser', return_403=True)
58 def groups_detail(request, pk):
59 """
60 Group module in dashboard that lists groups.
61 """
62
63 if not has_access(request):
64 raise PermissionDenied
65
66 context = get_base_context(request)
67
68 context['group'] = get_object_or_404(Group, pk=pk)
69
70 # AJAX
71 if request.method == 'POST':
72 if request.is_ajax and 'action' in request.POST:
73 resp = {'status': 200}
74 if request.POST['action'] == 'remove_user':
75 user = get_object_or_404(User, pk=int(request.POST['user_id']))
76 context['group'].user_set.remove(user)
77 resp['message'] = '%s ble fjernet fra %s' % (user.get_full_name(), context['group'].name)
78 resp['users'] = [{'user': u.get_full_name(), 'id': u.id} for u in context['group'].user_set.all()]
79 resp['users'].sort(key=lambda x: x['user'])
80
81 return HttpResponse(json.dumps(resp), status=200)
82 elif request.POST['action'] == 'add_user':
83 user = get_object_or_404(User, pk=int(request.POST['user_id']))
84 context['group'].user_set.add(user)
85 resp['full_name'] = user.get_full_name()
86 resp['users'] = [{'user': u.get_full_name(), 'id': u.id} for u in context['group'].user_set.all()]
87 resp['users'].sort(key=lambda x: x['user'])
88 resp['message'] = '%s ble lagt til i %s' % (resp['full_name'], context['group'].name)
89
90 return HttpResponse(json.dumps(resp), status=200)
91
92 return HttpResponse('Ugyldig handling.', status=400)
93
94 if hasattr(settings, 'GROUP_SYNCER') and settings.GROUP_SYNCER:
95 group_id = int(pk)
96 # Groups that list this one as their destination
97 context['sync_group_from'] = []
98 # Groups that list this one as one of their sources
99 context['sync_group_to'] = []
100
101 # Make a dict that simply maps {id: name} for all groups
102 groups = {g.id: g.name for g in Group.objects.all().order_by('id')}
103
104 for job in settings.GROUP_SYNCER:
105 if group_id in job['source']:
106 context['sync_group_to'].extend([groups[g_id] for g_id in job['destination']])
107 if group_id in job['destination']:
108 context['sync_group_from'].extend([groups[g_id] for g_id in job['source']])
109
110 context['group_users'] = list(context['group'].user_set.all())
111
112 context['group_permissions'] = list(context['group'].permissions.all())
113
114 context['group_users'].sort(key=lambda x: str(x).lower())
115 context['group_permissions'].sort(key=lambda x: str(x))
116
117 return render(request, 'auth/dashboard/groups_detail.html', context)
118
119
120 @login_required
121 @permission_required("authentication.view_allowedusername", return_403=True)
122 def members_index(request):
123
124 """
125 Index overview for allowedusernames in dashboard
126 """
127
128 if not has_access(request):
129 raise PermissionDenied
130
131 def merge_names(members):
132 for i in members:
133 user = list(User.objects.filter(ntnu_username=i.username))
134 if user:
135 i.full_name = user[0].get_full_name()
136 return members
137
138 context = get_base_context(request)
139 members = AllowedUsername.objects.all()
140 context['members'] = merge_names(members)
141
142 return render(request, 'auth/dashboard/user_list.html', context)
143
144
145 class UserListView(DashboardPermissionMixin, ListView):
146 model = User
147 queryset = User.objects.all().exclude(id=-1)
148 paginate_by = 25
149 paginator_class = Paginator
150 permission_required = 'authentication.view_onlineuser'
151 template_name = 'auth/dashboard/user_list.html'
152
153
154 class UserSearchView(DashboardPermissionMixin, SearchView):
155 model = User
156 queryset = User.objects.all().exclude(id=-1)
157 paginate_by = 25
158 paginator_class = Paginator
159 permission_required = 'authentication.view_onlineuser'
160 template_name = 'auth/dashboard/user_list.html'
161 empty_query_redirect = reverse_lazy('user_list')
162
163
164 class UserDetailView(DashboardPermissionMixin, DetailView):
165 model = User
166 context_object_name = 'user'
167 permission_required = 'authentication.view_onlineuser'
168 pk_url_kwarg = 'user_id'
169 template_name = 'auth/dashboard/user_detail.html'
170
171
172 class UserUpdateView(DashboardPermissionMixin, UpdateView):
173 form_class = UserUpdateForm
174 model = User
175 permission_required = 'authentication.change_onlineuser'
176 pk_url_kwarg = 'user_id'
177 template_name = 'auth/dashboard/user_edit.html'
178
179 def get_success_url(self):
180 return reverse('dashboard_user_detail', kwargs={'user_id': self.kwargs.get('user_id')})
181
182
183 class UserDeleteView(DashboardPermissionMixin, DeleteView):
184 model = User
185 permission_required = 'authentication.delete_onlineuser'
186 pk_url_kwarg = 'user_id'
187 success_url = reverse_lazy('auth_index')
188
189
190 @login_required
191 @permission_required("authentication.add_allowedusername", return_403=True)
192 def members_new(request):
193 """
194 Create new allowedusername form and handling
195 """
196 if not has_access(request):
197 raise PermissionDenied
198
199 context = get_base_context(request)
200
201 return render(request, 'auth/dashboard/members_new.html', context)
202
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/authentication/dashboard/views.py b/apps/authentication/dashboard/views.py
--- a/apps/authentication/dashboard/views.py
+++ b/apps/authentication/dashboard/views.py
@@ -172,6 +172,7 @@
class UserUpdateView(DashboardPermissionMixin, UpdateView):
form_class = UserUpdateForm
model = User
+ context_object_name = 'user'
permission_required = 'authentication.change_onlineuser'
pk_url_kwarg = 'user_id'
template_name = 'auth/dashboard/user_edit.html'
@@ -184,7 +185,7 @@
model = User
permission_required = 'authentication.delete_onlineuser'
pk_url_kwarg = 'user_id'
- success_url = reverse_lazy('auth_index')
+ success_url = reverse_lazy('user_list')
@login_required
| {"golden_diff": "diff --git a/apps/authentication/dashboard/views.py b/apps/authentication/dashboard/views.py\n--- a/apps/authentication/dashboard/views.py\n+++ b/apps/authentication/dashboard/views.py\n@@ -172,6 +172,7 @@\n class UserUpdateView(DashboardPermissionMixin, UpdateView):\n form_class = UserUpdateForm\n model = User\n+ context_object_name = 'user'\n permission_required = 'authentication.change_onlineuser'\n pk_url_kwarg = 'user_id'\n template_name = 'auth/dashboard/user_edit.html'\n@@ -184,7 +185,7 @@\n model = User\n permission_required = 'authentication.delete_onlineuser'\n pk_url_kwarg = 'user_id'\n- success_url = reverse_lazy('auth_index')\n+ success_url = reverse_lazy('user_list')\n \n \n @login_required\n", "issue": "Delete user in Dashboard user edit doesn't perform any action\nThe delete user button does not actually perform a request.\n\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport json\n\nfrom django.conf import settings\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.models import Group\nfrom django.core.exceptions import PermissionDenied\nfrom django.core.paginator import Paginator\nfrom django.core.urlresolvers import reverse, reverse_lazy\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, render\nfrom django.views.generic import DeleteView, DetailView, ListView, UpdateView\nfrom guardian.decorators import permission_required\nfrom watson.views import SearchView\n\nfrom apps.authentication.forms import UserUpdateForm\nfrom apps.authentication.models import OnlineUser as User\nfrom apps.authentication.models import AllowedUsername\nfrom apps.dashboard.tools import DashboardPermissionMixin, get_base_context, has_access\n\n\n@login_required\ndef index(request):\n \"\"\"\n This is the main dashboard view\n \"\"\"\n\n if not has_access(request):\n raise PermissionDenied\n\n context = get_base_context(request)\n\n return render(request, 'auth/dashboard/index.html', context)\n\n\n# GROUP MODULE VIEWS\n@login_required\n@permission_required('authentication.change_onlineuser', return_403=True)\ndef groups_index(request):\n \"\"\"\n Group module in dashboard that lists groups.\n \"\"\"\n\n if not has_access(request):\n raise PermissionDenied\n\n context = get_base_context(request)\n\n context['groups'] = list(Group.objects.all())\n context['groups'].sort(key=lambda x: str(x).lower())\n\n return render(request, 'auth/dashboard/groups_index.html', context)\n\n\n@login_required\n@permission_required('authentication.change_onlineuser', return_403=True)\ndef groups_detail(request, pk):\n \"\"\"\n Group module in dashboard that lists groups.\n \"\"\"\n\n if not has_access(request):\n raise PermissionDenied\n\n context = get_base_context(request)\n\n context['group'] = get_object_or_404(Group, pk=pk)\n\n # AJAX\n if request.method == 'POST':\n if request.is_ajax and 'action' in request.POST:\n resp = {'status': 200}\n if request.POST['action'] == 'remove_user':\n user = get_object_or_404(User, pk=int(request.POST['user_id']))\n context['group'].user_set.remove(user)\n resp['message'] = '%s ble fjernet fra %s' % (user.get_full_name(), context['group'].name)\n resp['users'] = [{'user': u.get_full_name(), 'id': u.id} for u in context['group'].user_set.all()]\n resp['users'].sort(key=lambda x: x['user'])\n\n return HttpResponse(json.dumps(resp), status=200)\n elif request.POST['action'] == 'add_user':\n user = get_object_or_404(User, pk=int(request.POST['user_id']))\n 
context['group'].user_set.add(user)\n resp['full_name'] = user.get_full_name()\n resp['users'] = [{'user': u.get_full_name(), 'id': u.id} for u in context['group'].user_set.all()]\n resp['users'].sort(key=lambda x: x['user'])\n resp['message'] = '%s ble lagt til i %s' % (resp['full_name'], context['group'].name)\n\n return HttpResponse(json.dumps(resp), status=200)\n\n return HttpResponse('Ugyldig handling.', status=400)\n\n if hasattr(settings, 'GROUP_SYNCER') and settings.GROUP_SYNCER:\n group_id = int(pk)\n # Groups that list this one as their destination\n context['sync_group_from'] = []\n # Groups that list this one as one of their sources\n context['sync_group_to'] = []\n\n # Make a dict that simply maps {id: name} for all groups\n groups = {g.id: g.name for g in Group.objects.all().order_by('id')}\n\n for job in settings.GROUP_SYNCER:\n if group_id in job['source']:\n context['sync_group_to'].extend([groups[g_id] for g_id in job['destination']])\n if group_id in job['destination']:\n context['sync_group_from'].extend([groups[g_id] for g_id in job['source']])\n\n context['group_users'] = list(context['group'].user_set.all())\n\n context['group_permissions'] = list(context['group'].permissions.all())\n\n context['group_users'].sort(key=lambda x: str(x).lower())\n context['group_permissions'].sort(key=lambda x: str(x))\n\n return render(request, 'auth/dashboard/groups_detail.html', context)\n\n\n@login_required\n@permission_required(\"authentication.view_allowedusername\", return_403=True)\ndef members_index(request):\n\n \"\"\"\n Index overview for allowedusernames in dashboard\n \"\"\"\n\n if not has_access(request):\n raise PermissionDenied\n\n def merge_names(members):\n for i in members:\n user = list(User.objects.filter(ntnu_username=i.username))\n if user:\n i.full_name = user[0].get_full_name()\n return members\n\n context = get_base_context(request)\n members = AllowedUsername.objects.all()\n context['members'] = merge_names(members)\n\n return render(request, 'auth/dashboard/user_list.html', context)\n\n\nclass UserListView(DashboardPermissionMixin, ListView):\n model = User\n queryset = User.objects.all().exclude(id=-1)\n paginate_by = 25\n paginator_class = Paginator\n permission_required = 'authentication.view_onlineuser'\n template_name = 'auth/dashboard/user_list.html'\n\n\nclass UserSearchView(DashboardPermissionMixin, SearchView):\n model = User\n queryset = User.objects.all().exclude(id=-1)\n paginate_by = 25\n paginator_class = Paginator\n permission_required = 'authentication.view_onlineuser'\n template_name = 'auth/dashboard/user_list.html'\n empty_query_redirect = reverse_lazy('user_list')\n\n\nclass UserDetailView(DashboardPermissionMixin, DetailView):\n model = User\n context_object_name = 'user'\n permission_required = 'authentication.view_onlineuser'\n pk_url_kwarg = 'user_id'\n template_name = 'auth/dashboard/user_detail.html'\n\n\nclass UserUpdateView(DashboardPermissionMixin, UpdateView):\n form_class = UserUpdateForm\n model = User\n permission_required = 'authentication.change_onlineuser'\n pk_url_kwarg = 'user_id'\n template_name = 'auth/dashboard/user_edit.html'\n\n def get_success_url(self):\n return reverse('dashboard_user_detail', kwargs={'user_id': self.kwargs.get('user_id')})\n\n\nclass UserDeleteView(DashboardPermissionMixin, DeleteView):\n model = User\n permission_required = 'authentication.delete_onlineuser'\n pk_url_kwarg = 'user_id'\n success_url = 
reverse_lazy('auth_index')\n\n\n@login_required\n@permission_required(\"authentication.add_allowedusername\", return_403=True)\ndef members_new(request):\n \"\"\"\n Create new allowedusername form and handling\n \"\"\"\n if not has_access(request):\n raise PermissionDenied\n\n context = get_base_context(request)\n\n return render(request, 'auth/dashboard/members_new.html', context)\n", "path": "apps/authentication/dashboard/views.py"}], "after_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport json\n\nfrom django.conf import settings\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.models import Group\nfrom django.core.exceptions import PermissionDenied\nfrom django.core.paginator import Paginator\nfrom django.core.urlresolvers import reverse, reverse_lazy\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, render\nfrom django.views.generic import DeleteView, DetailView, ListView, UpdateView\nfrom guardian.decorators import permission_required\nfrom watson.views import SearchView\n\nfrom apps.authentication.forms import UserUpdateForm\nfrom apps.authentication.models import OnlineUser as User\nfrom apps.authentication.models import AllowedUsername\nfrom apps.dashboard.tools import DashboardPermissionMixin, get_base_context, has_access\n\n\n@login_required\ndef index(request):\n \"\"\"\n This is the main dashboard view\n \"\"\"\n\n if not has_access(request):\n raise PermissionDenied\n\n context = get_base_context(request)\n\n return render(request, 'auth/dashboard/index.html', context)\n\n\n# GROUP MODULE VIEWS\n@login_required\n@permission_required('authentication.change_onlineuser', return_403=True)\ndef groups_index(request):\n \"\"\"\n Group module in dashboard that lists groups.\n \"\"\"\n\n if not has_access(request):\n raise PermissionDenied\n\n context = get_base_context(request)\n\n context['groups'] = list(Group.objects.all())\n context['groups'].sort(key=lambda x: str(x).lower())\n\n return render(request, 'auth/dashboard/groups_index.html', context)\n\n\n@login_required\n@permission_required('authentication.change_onlineuser', return_403=True)\ndef groups_detail(request, pk):\n \"\"\"\n Group module in dashboard that lists groups.\n \"\"\"\n\n if not has_access(request):\n raise PermissionDenied\n\n context = get_base_context(request)\n\n context['group'] = get_object_or_404(Group, pk=pk)\n\n # AJAX\n if request.method == 'POST':\n if request.is_ajax and 'action' in request.POST:\n resp = {'status': 200}\n if request.POST['action'] == 'remove_user':\n user = get_object_or_404(User, pk=int(request.POST['user_id']))\n context['group'].user_set.remove(user)\n resp['message'] = '%s ble fjernet fra %s' % (user.get_full_name(), context['group'].name)\n resp['users'] = [{'user': u.get_full_name(), 'id': u.id} for u in context['group'].user_set.all()]\n resp['users'].sort(key=lambda x: x['user'])\n\n return HttpResponse(json.dumps(resp), status=200)\n elif request.POST['action'] == 'add_user':\n user = get_object_or_404(User, pk=int(request.POST['user_id']))\n context['group'].user_set.add(user)\n resp['full_name'] = user.get_full_name()\n resp['users'] = [{'user': u.get_full_name(), 'id': u.id} for u in context['group'].user_set.all()]\n resp['users'].sort(key=lambda x: x['user'])\n resp['message'] = '%s ble lagt til i %s' % (resp['full_name'], context['group'].name)\n\n return HttpResponse(json.dumps(resp), status=200)\n\n return HttpResponse('Ugyldig handling.', status=400)\n\n if hasattr(settings, 
'GROUP_SYNCER') and settings.GROUP_SYNCER:\n group_id = int(pk)\n # Groups that list this one as their destination\n context['sync_group_from'] = []\n # Groups that list this one as one of their sources\n context['sync_group_to'] = []\n\n # Make a dict that simply maps {id: name} for all groups\n groups = {g.id: g.name for g in Group.objects.all().order_by('id')}\n\n for job in settings.GROUP_SYNCER:\n if group_id in job['source']:\n context['sync_group_to'].extend([groups[g_id] for g_id in job['destination']])\n if group_id in job['destination']:\n context['sync_group_from'].extend([groups[g_id] for g_id in job['source']])\n\n context['group_users'] = list(context['group'].user_set.all())\n\n context['group_permissions'] = list(context['group'].permissions.all())\n\n context['group_users'].sort(key=lambda x: str(x).lower())\n context['group_permissions'].sort(key=lambda x: str(x))\n\n return render(request, 'auth/dashboard/groups_detail.html', context)\n\n\n@login_required\n@permission_required(\"authentication.view_allowedusername\", return_403=True)\ndef members_index(request):\n\n \"\"\"\n Index overview for allowedusernames in dashboard\n \"\"\"\n\n if not has_access(request):\n raise PermissionDenied\n\n def merge_names(members):\n for i in members:\n user = list(User.objects.filter(ntnu_username=i.username))\n if user:\n i.full_name = user[0].get_full_name()\n return members\n\n context = get_base_context(request)\n members = AllowedUsername.objects.all()\n context['members'] = merge_names(members)\n\n return render(request, 'auth/dashboard/user_list.html', context)\n\n\nclass UserListView(DashboardPermissionMixin, ListView):\n model = User\n queryset = User.objects.all().exclude(id=-1)\n paginate_by = 25\n paginator_class = Paginator\n permission_required = 'authentication.view_onlineuser'\n template_name = 'auth/dashboard/user_list.html'\n\n\nclass UserSearchView(DashboardPermissionMixin, SearchView):\n model = User\n queryset = User.objects.all().exclude(id=-1)\n paginate_by = 25\n paginator_class = Paginator\n permission_required = 'authentication.view_onlineuser'\n template_name = 'auth/dashboard/user_list.html'\n empty_query_redirect = reverse_lazy('user_list')\n\n\nclass UserDetailView(DashboardPermissionMixin, DetailView):\n model = User\n context_object_name = 'user'\n permission_required = 'authentication.view_onlineuser'\n pk_url_kwarg = 'user_id'\n template_name = 'auth/dashboard/user_detail.html'\n\n\nclass UserUpdateView(DashboardPermissionMixin, UpdateView):\n form_class = UserUpdateForm\n model = User\n context_object_name = 'user'\n permission_required = 'authentication.change_onlineuser'\n pk_url_kwarg = 'user_id'\n template_name = 'auth/dashboard/user_edit.html'\n\n def get_success_url(self):\n return reverse('dashboard_user_detail', kwargs={'user_id': self.kwargs.get('user_id')})\n\n\nclass UserDeleteView(DashboardPermissionMixin, DeleteView):\n model = User\n permission_required = 'authentication.delete_onlineuser'\n pk_url_kwarg = 'user_id'\n success_url = reverse_lazy('user_list')\n\n\n@login_required\n@permission_required(\"authentication.add_allowedusername\", return_403=True)\ndef members_new(request):\n \"\"\"\n Create new allowedusername form and handling\n \"\"\"\n if not has_access(request):\n raise PermissionDenied\n\n context = get_base_context(request)\n\n return render(request, 'auth/dashboard/members_new.html', context)\n", "path": "apps/authentication/dashboard/views.py"}]} | 2,327 | 177 |
gh_patches_debug_34688 | rasdani/github-patches | git_diff | tensorflow__addons-271 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Automate Build Process
Currently we have no automated process for building Addons across python version and operating systems. Going forward we'll want this process to be automated.. but it may be challenging for us to start builds without access to the Google internal tooling.
We could conceivably use Travis... but if we can keep consistent CI that would be ideal.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """TensorFlow Addons
16
17 TensorFlow Addons is a repository of contributions that conform to
18 well-established API patterns,but implement new functionality not available in
19 core TensorFlow.TensorFlow natively supports a large number of operators,
20 layers, metrics, losses, and optimizers. However, in a fast movingfield like
21 ML, there are many interesting new developments that cannot be integrated into
22 core TensorFlow (because their broad applicability is not yet clear, or it is
23 mostly used by a smallersubset of the community).
24 """
25
26 from __future__ import absolute_import
27 from __future__ import division
28 from __future__ import print_function
29
30 import os
31
32 from setuptools import find_packages
33 from setuptools import setup
34 from setuptools.dist import Distribution
35
36 DOCLINES = __doc__.split('\n')
37
38 version = {}
39 base_dir = os.path.dirname(os.path.abspath(__file__))
40 with open(os.path.join(base_dir, "tensorflow_addons", "version.py")) as fp:
41 # yapf: disable
42 exec(fp.read(), version)
43 # yapf: enable
44
45 REQUIRED_PACKAGES = [
46 'six >= 1.10.0',
47 ]
48
49 project_name = 'tensorflow-addons'
50
51
52 class BinaryDistribution(Distribution):
53 """This class is needed in order to create OS specific wheels."""
54
55 def has_ext_modules(self):
56 return True
57
58
59 setup(
60 name=project_name,
61 version=version['__version__'],
62 description=DOCLINES[0],
63 long_description='\n'.join(DOCLINES[2:]),
64 author='Google Inc.',
65 author_email='[email protected]',
66 packages=find_packages(),
67 install_requires=REQUIRED_PACKAGES,
68 include_package_data=True,
69 zip_safe=False,
70 distclass=BinaryDistribution,
71 classifiers=[
72 'Development Status :: 4 - Beta',
73 'Intended Audience :: Developers',
74 'Intended Audience :: Education',
75 'Intended Audience :: Science/Research',
76 'License :: OSI Approved :: Apache Software License',
77 'Programming Language :: Python :: 2.7',
78 'Programming Language :: Python :: 3.4',
79 'Programming Language :: Python :: 3.5',
80 'Programming Language :: Python :: 3.6',
81 'Programming Language :: Python :: 3.7',
82 'Topic :: Scientific/Engineering :: Mathematics',
83 'Topic :: Software Development :: Libraries :: Python Modules',
84 'Topic :: Software Development :: Libraries',
85 ],
86 license='Apache 2.0',
87 keywords='tensorflow addons machine learning',
88 )
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -17,10 +17,10 @@
TensorFlow Addons is a repository of contributions that conform to
well-established API patterns,but implement new functionality not available in
core TensorFlow.TensorFlow natively supports a large number of operators,
-layers, metrics, losses, and optimizers. However, in a fast movingfield like
+layers, metrics, losses, and optimizers. However, in a fast moving field like
ML, there are many interesting new developments that cannot be integrated into
core TensorFlow (because their broad applicability is not yet clear, or it is
-mostly used by a smallersubset of the community).
+mostly used by a smaller subset of the community).
"""
from __future__ import absolute_import
@@ -28,7 +28,9 @@
from __future__ import print_function
import os
+import sys
+from datetime import datetime
from setuptools import find_packages
from setuptools import setup
from setuptools.dist import Distribution
@@ -46,7 +48,13 @@
'six >= 1.10.0',
]
-project_name = 'tensorflow-addons'
+if '--nightly' in sys.argv:
+ project_name = 'tfa-nightly'
+ nightly_idx = sys.argv.index('--nightly')
+ sys.argv.pop(nightly_idx)
+ version['__version__'] += datetime.strftime(datetime.today(), "%Y%m%d")
+else:
+ project_name = 'tensorflow-addons'
class BinaryDistribution(Distribution):
@@ -78,7 +86,6 @@
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
- 'Programming Language :: Python :: 3.7',
'Topic :: Scientific/Engineering :: Mathematics',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Software Development :: Libraries',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -17,10 +17,10 @@\n TensorFlow Addons is a repository of contributions that conform to\n well-established API patterns,but implement new functionality not available in\n core TensorFlow.TensorFlow natively supports a large number of operators,\n-layers, metrics, losses, and optimizers. However, in a fast movingfield like\n+layers, metrics, losses, and optimizers. However, in a fast moving field like\n ML, there are many interesting new developments that cannot be integrated into\n core TensorFlow (because their broad applicability is not yet clear, or it is\n-mostly used by a smallersubset of the community).\n+mostly used by a smaller subset of the community).\n \"\"\"\n \n from __future__ import absolute_import\n@@ -28,7 +28,9 @@\n from __future__ import print_function\n \n import os\n+import sys\n \n+from datetime import datetime\n from setuptools import find_packages\n from setuptools import setup\n from setuptools.dist import Distribution\n@@ -46,7 +48,13 @@\n 'six >= 1.10.0',\n ]\n \n-project_name = 'tensorflow-addons'\n+if '--nightly' in sys.argv:\n+ project_name = 'tfa-nightly'\n+ nightly_idx = sys.argv.index('--nightly')\n+ sys.argv.pop(nightly_idx)\n+ version['__version__'] += datetime.strftime(datetime.today(), \"%Y%m%d\")\n+else:\n+ project_name = 'tensorflow-addons'\n \n \n class BinaryDistribution(Distribution):\n@@ -78,7 +86,6 @@\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n- 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n", "issue": "Automate Build Process\nCurrently we have no automated process for building Addons across python version and operating systems. Going forward we'll want this process to be automated.. but it may be challenging for us to start builds without access to the Google internal tooling.\r\n\r\nWe could conceivably use Travis... but if we can keep consistent CI that would be ideal.\r\n\r\n\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"TensorFlow Addons \n\nTensorFlow Addons is a repository of contributions that conform to\nwell-established API patterns,but implement new functionality not available in\ncore TensorFlow.TensorFlow natively supports a large number of operators,\nlayers, metrics, losses, and optimizers. 
However, in a fast movingfield like\nML, there are many interesting new developments that cannot be integrated into\ncore TensorFlow (because their broad applicability is not yet clear, or it is\nmostly used by a smallersubset of the community).\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\n\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom setuptools.dist import Distribution\n\nDOCLINES = __doc__.split('\\n')\n\nversion = {}\nbase_dir = os.path.dirname(os.path.abspath(__file__))\nwith open(os.path.join(base_dir, \"tensorflow_addons\", \"version.py\")) as fp:\n # yapf: disable\n exec(fp.read(), version)\n # yapf: enable\n\nREQUIRED_PACKAGES = [\n 'six >= 1.10.0',\n]\n\nproject_name = 'tensorflow-addons'\n\n\nclass BinaryDistribution(Distribution):\n \"\"\"This class is needed in order to create OS specific wheels.\"\"\"\n\n def has_ext_modules(self):\n return True\n\n\nsetup(\n name=project_name,\n version=version['__version__'],\n description=DOCLINES[0],\n long_description='\\n'.join(DOCLINES[2:]),\n author='Google Inc.',\n author_email='[email protected]',\n packages=find_packages(),\n install_requires=REQUIRED_PACKAGES,\n include_package_data=True,\n zip_safe=False,\n distclass=BinaryDistribution,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n license='Apache 2.0',\n keywords='tensorflow addons machine learning',\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"TensorFlow Addons \n\nTensorFlow Addons is a repository of contributions that conform to\nwell-established API patterns,but implement new functionality not available in\ncore TensorFlow.TensorFlow natively supports a large number of operators,\nlayers, metrics, losses, and optimizers. 
However, in a fast moving field like\nML, there are many interesting new developments that cannot be integrated into\ncore TensorFlow (because their broad applicability is not yet clear, or it is\nmostly used by a smaller subset of the community).\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport sys\n\nfrom datetime import datetime\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom setuptools.dist import Distribution\n\nDOCLINES = __doc__.split('\\n')\n\nversion = {}\nbase_dir = os.path.dirname(os.path.abspath(__file__))\nwith open(os.path.join(base_dir, \"tensorflow_addons\", \"version.py\")) as fp:\n # yapf: disable\n exec(fp.read(), version)\n # yapf: enable\n\nREQUIRED_PACKAGES = [\n 'six >= 1.10.0',\n]\n\nif '--nightly' in sys.argv:\n project_name = 'tfa-nightly'\n nightly_idx = sys.argv.index('--nightly')\n sys.argv.pop(nightly_idx)\n version['__version__'] += datetime.strftime(datetime.today(), \"%Y%m%d\")\nelse:\n project_name = 'tensorflow-addons'\n\n\nclass BinaryDistribution(Distribution):\n \"\"\"This class is needed in order to create OS specific wheels.\"\"\"\n\n def has_ext_modules(self):\n return True\n\n\nsetup(\n name=project_name,\n version=version['__version__'],\n description=DOCLINES[0],\n long_description='\\n'.join(DOCLINES[2:]),\n author='Google Inc.',\n author_email='[email protected]',\n packages=find_packages(),\n install_requires=REQUIRED_PACKAGES,\n include_package_data=True,\n zip_safe=False,\n distclass=BinaryDistribution,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n license='Apache 2.0',\n keywords='tensorflow addons machine learning',\n)\n", "path": "setup.py"}]} | 1,169 | 431 |
gh_patches_debug_55042 | rasdani/github-patches | git_diff | pallets__werkzeug-1798 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
New Microsoft Edge User Agent
## Background
Microsoft Edge now based on Chromium and the user agent string is updated.
`Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36 Edg/81.0.416.68`
## Simple Code
```python
@app.route('/browser')
def browser():
from flask import request
ua = request.user_agent
return jsonify({
'browser': ua.browser,
'platform': ua.platform,
'user_agent': ua.string,
'version': ua.version,
})
```
## Expected Result
```json
{
"browser": "edge",
"platform": "windows",
"user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36 Edg/81.0.416.68",
"version": "81.0.416.68"
}
```
| Key | Value |
| --- | --- |
| browser | **edge** |
| platform | windows |
| user_agent | Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36 Edg/81.0.416.68 |
| version | **81.0.416.68** |
## Actual Result
```json
{
"browser": "chrome",
"platform": "windows",
"user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36 Edg/81.0.416.68",
"version": "81.0.4044.129"
}
```
| Key | Value |
| --- | --- |
| browser | **chrome** |
| platform | windows |
| user_agent | Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36 Edg/81.0.416.68 |
| version | **81.0.4044.129** |
## Environment
- Windows 10 Pro 1909
- Python 3.6.6
- Werkzeug 0.16.1
- Flask 1.1.1
### Related Issues
#818, #1556
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/werkzeug/useragents.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 werkzeug.useragents
4 ~~~~~~~~~~~~~~~~~~~
5
6 This module provides a helper to inspect user agent strings. This module
7 is far from complete but should work for most of the currently available
8 browsers.
9
10
11 :copyright: 2007 Pallets
12 :license: BSD-3-Clause
13 """
14 import re
15
16
17 class UserAgentParser(object):
18 """A simple user agent parser. Used by the `UserAgent`."""
19
20 platforms = (
21 (" cros ", "chromeos"),
22 ("iphone|ios", "iphone"),
23 ("ipad", "ipad"),
24 (r"darwin|mac|os\s*x", "macos"),
25 ("win", "windows"),
26 (r"android", "android"),
27 ("netbsd", "netbsd"),
28 ("openbsd", "openbsd"),
29 ("freebsd", "freebsd"),
30 ("dragonfly", "dragonflybsd"),
31 ("(sun|i86)os", "solaris"),
32 (r"x11|lin(\b|ux)?", "linux"),
33 (r"nintendo\s+wii", "wii"),
34 ("irix", "irix"),
35 ("hp-?ux", "hpux"),
36 ("aix", "aix"),
37 ("sco|unix_sv", "sco"),
38 ("bsd", "bsd"),
39 ("amiga", "amiga"),
40 ("blackberry|playbook", "blackberry"),
41 ("symbian", "symbian"),
42 )
43 browsers = (
44 ("googlebot", "google"),
45 ("msnbot", "msn"),
46 ("yahoo", "yahoo"),
47 ("ask jeeves", "ask"),
48 (r"aol|america\s+online\s+browser", "aol"),
49 (r"opera|opr", "opera"),
50 ("edge", "edge"),
51 ("chrome|crios", "chrome"),
52 ("seamonkey", "seamonkey"),
53 ("firefox|firebird|phoenix|iceweasel", "firefox"),
54 ("galeon", "galeon"),
55 ("safari|version", "safari"),
56 ("webkit", "webkit"),
57 ("camino", "camino"),
58 ("konqueror", "konqueror"),
59 ("k-meleon", "kmeleon"),
60 ("netscape", "netscape"),
61 (r"msie|microsoft\s+internet\s+explorer|trident/.+? rv:", "msie"),
62 ("lynx", "lynx"),
63 ("links", "links"),
64 ("Baiduspider", "baidu"),
65 ("bingbot", "bing"),
66 ("mozilla", "mozilla"),
67 )
68
69 _browser_version_re = r"(?:%s)[/\sa-z(]*(\d+[.\da-z]+)?"
70 _language_re = re.compile(
71 r"(?:;\s*|\s+)(\b\w{2}\b(?:-\b\w{2}\b)?)\s*;|"
72 r"(?:\(|\[|;)\s*(\b\w{2}\b(?:-\b\w{2}\b)?)\s*(?:\]|\)|;)"
73 )
74
75 def __init__(self):
76 self.platforms = [(b, re.compile(a, re.I)) for a, b in self.platforms]
77 self.browsers = [
78 (b, re.compile(self._browser_version_re % a, re.I))
79 for a, b in self.browsers
80 ]
81
82 def __call__(self, user_agent):
83 for platform, regex in self.platforms: # noqa: B007
84 match = regex.search(user_agent)
85 if match is not None:
86 break
87 else:
88 platform = None
89 for browser, regex in self.browsers: # noqa: B007
90 match = regex.search(user_agent)
91 if match is not None:
92 version = match.group(1)
93 break
94 else:
95 browser = version = None
96 match = self._language_re.search(user_agent)
97 if match is not None:
98 language = match.group(1) or match.group(2)
99 else:
100 language = None
101 return platform, browser, version, language
102
103
104 class UserAgent(object):
105 """Represents a user agent. Pass it a WSGI environment or a user agent
106 string and you can inspect some of the details from the user agent
107 string via the attributes. The following attributes exist:
108
109 .. attribute:: string
110
111 the raw user agent string
112
113 .. attribute:: platform
114
115 the browser platform. ``None`` if not recognized.
116 The following platforms are currently recognized:
117
118 - `aix`
119 - `amiga`
120 - `android`
121 - `blackberry`
122 - `bsd`
123 - `chromeos`
124 - `dragonflybsd`
125 - `freebsd`
126 - `hpux`
127 - `ipad`
128 - `iphone`
129 - `irix`
130 - `linux`
131 - `macos`
132 - `netbsd`
133 - `openbsd`
134 - `sco`
135 - `solaris`
136 - `symbian`
137 - `wii`
138 - `windows`
139
140 .. attribute:: browser
141
142 the name of the browser. ``None`` if not recognized.
143 The following browsers are currently recognized:
144
145 - `aol` *
146 - `ask` *
147 - `baidu` *
148 - `bing` *
149 - `camino`
150 - `chrome`
151 - `edge`
152 - `firefox`
153 - `galeon`
154 - `google` *
155 - `kmeleon`
156 - `konqueror`
157 - `links`
158 - `lynx`
159 - `mozilla`
160 - `msie`
161 - `msn`
162 - `netscape`
163 - `opera`
164 - `safari`
165 - `seamonkey`
166 - `webkit`
167 - `yahoo` *
168
169 (Browsers marked with a star (``*``) are crawlers.)
170
171 .. attribute:: version
172
173 the version of the browser. ``None`` if not recognized.
174
175 .. attribute:: language
176
177 the language of the browser. ``None`` if not recognized.
178 """
179
180 _parser = UserAgentParser()
181
182 def __init__(self, environ_or_string):
183 if isinstance(environ_or_string, dict):
184 environ_or_string = environ_or_string.get("HTTP_USER_AGENT", "")
185 self.string = environ_or_string
186 self.platform, self.browser, self.version, self.language = self._parser(
187 environ_or_string
188 )
189
190 def to_header(self):
191 return self.string
192
193 def __str__(self):
194 return self.string
195
196 def __nonzero__(self):
197 return bool(self.browser)
198
199 __bool__ = __nonzero__
200
201 def __repr__(self):
202 return "<%s %r/%s>" % (self.__class__.__name__, self.browser, self.version)
203
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/werkzeug/useragents.py b/src/werkzeug/useragents.py
--- a/src/werkzeug/useragents.py
+++ b/src/werkzeug/useragents.py
@@ -47,7 +47,7 @@
("ask jeeves", "ask"),
(r"aol|america\s+online\s+browser", "aol"),
(r"opera|opr", "opera"),
- ("edge", "edge"),
+ ("edge|edg", "edge"),
("chrome|crios", "chrome"),
("seamonkey", "seamonkey"),
("firefox|firebird|phoenix|iceweasel", "firefox"),
| {"golden_diff": "diff --git a/src/werkzeug/useragents.py b/src/werkzeug/useragents.py\n--- a/src/werkzeug/useragents.py\n+++ b/src/werkzeug/useragents.py\n@@ -47,7 +47,7 @@\n (\"ask jeeves\", \"ask\"),\n (r\"aol|america\\s+online\\s+browser\", \"aol\"),\n (r\"opera|opr\", \"opera\"),\n- (\"edge\", \"edge\"),\n+ (\"edge|edg\", \"edge\"),\n (\"chrome|crios\", \"chrome\"),\n (\"seamonkey\", \"seamonkey\"),\n (\"firefox|firebird|phoenix|iceweasel\", \"firefox\"),\n", "issue": "New Microsoft Edge User Agent\n## Background\r\nMicrosoft Edge now based on Chromium and the user agent string is updated.\r\n`Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36 Edg/81.0.416.68`\r\n\r\n## Simple Code\r\n```python\r\[email protected]('/browser')\r\ndef browser():\r\n from flask import request\r\n ua = request.user_agent\r\n return jsonify({\r\n 'browser': ua.browser,\r\n 'platform': ua.platform,\r\n 'user_agent': ua.string,\r\n 'version': ua.version,\r\n })\r\n```\r\n\r\n## Expected Result\r\n```json\r\n{\r\n \"browser\": \"edge\", \r\n \"platform\": \"windows\", \r\n \"user_agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36 Edg/81.0.416.68\", \r\n \"version\": \"81.0.416.68\"\r\n}\r\n```\r\n\r\n| Key | Value |\r\n| --- | --- |\r\n| browser | **edge** |\r\n| platform | windows |\r\n| user_agent | Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36 Edg/81.0.416.68 |\r\n| version | **81.0.416.68** |\r\n\r\n\r\n## Actual Result\r\n```json\r\n{\r\n \"browser\": \"chrome\", \r\n \"platform\": \"windows\", \r\n \"user_agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36 Edg/81.0.416.68\", \r\n \"version\": \"81.0.4044.129\"\r\n}\r\n```\r\n\r\n| Key | Value |\r\n| --- | --- |\r\n| browser | **chrome** |\r\n| platform | windows |\r\n| user_agent | Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36 Edg/81.0.416.68 |\r\n| version | **81.0.4044.129** |\r\n\r\n## Environment\r\n- Windows 10 Pro 1909\r\n- Python 3.6.6\r\n- Werkzeug 0.16.1\r\n- Flask 1.1.1\r\n\r\n### Related Issues\r\n#818, #1556\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n werkzeug.useragents\n ~~~~~~~~~~~~~~~~~~~\n\n This module provides a helper to inspect user agent strings. This module\n is far from complete but should work for most of the currently available\n browsers.\n\n\n :copyright: 2007 Pallets\n :license: BSD-3-Clause\n\"\"\"\nimport re\n\n\nclass UserAgentParser(object):\n \"\"\"A simple user agent parser. 
Used by the `UserAgent`.\"\"\"\n\n platforms = (\n (\" cros \", \"chromeos\"),\n (\"iphone|ios\", \"iphone\"),\n (\"ipad\", \"ipad\"),\n (r\"darwin|mac|os\\s*x\", \"macos\"),\n (\"win\", \"windows\"),\n (r\"android\", \"android\"),\n (\"netbsd\", \"netbsd\"),\n (\"openbsd\", \"openbsd\"),\n (\"freebsd\", \"freebsd\"),\n (\"dragonfly\", \"dragonflybsd\"),\n (\"(sun|i86)os\", \"solaris\"),\n (r\"x11|lin(\\b|ux)?\", \"linux\"),\n (r\"nintendo\\s+wii\", \"wii\"),\n (\"irix\", \"irix\"),\n (\"hp-?ux\", \"hpux\"),\n (\"aix\", \"aix\"),\n (\"sco|unix_sv\", \"sco\"),\n (\"bsd\", \"bsd\"),\n (\"amiga\", \"amiga\"),\n (\"blackberry|playbook\", \"blackberry\"),\n (\"symbian\", \"symbian\"),\n )\n browsers = (\n (\"googlebot\", \"google\"),\n (\"msnbot\", \"msn\"),\n (\"yahoo\", \"yahoo\"),\n (\"ask jeeves\", \"ask\"),\n (r\"aol|america\\s+online\\s+browser\", \"aol\"),\n (r\"opera|opr\", \"opera\"),\n (\"edge\", \"edge\"),\n (\"chrome|crios\", \"chrome\"),\n (\"seamonkey\", \"seamonkey\"),\n (\"firefox|firebird|phoenix|iceweasel\", \"firefox\"),\n (\"galeon\", \"galeon\"),\n (\"safari|version\", \"safari\"),\n (\"webkit\", \"webkit\"),\n (\"camino\", \"camino\"),\n (\"konqueror\", \"konqueror\"),\n (\"k-meleon\", \"kmeleon\"),\n (\"netscape\", \"netscape\"),\n (r\"msie|microsoft\\s+internet\\s+explorer|trident/.+? rv:\", \"msie\"),\n (\"lynx\", \"lynx\"),\n (\"links\", \"links\"),\n (\"Baiduspider\", \"baidu\"),\n (\"bingbot\", \"bing\"),\n (\"mozilla\", \"mozilla\"),\n )\n\n _browser_version_re = r\"(?:%s)[/\\sa-z(]*(\\d+[.\\da-z]+)?\"\n _language_re = re.compile(\n r\"(?:;\\s*|\\s+)(\\b\\w{2}\\b(?:-\\b\\w{2}\\b)?)\\s*;|\"\n r\"(?:\\(|\\[|;)\\s*(\\b\\w{2}\\b(?:-\\b\\w{2}\\b)?)\\s*(?:\\]|\\)|;)\"\n )\n\n def __init__(self):\n self.platforms = [(b, re.compile(a, re.I)) for a, b in self.platforms]\n self.browsers = [\n (b, re.compile(self._browser_version_re % a, re.I))\n for a, b in self.browsers\n ]\n\n def __call__(self, user_agent):\n for platform, regex in self.platforms: # noqa: B007\n match = regex.search(user_agent)\n if match is not None:\n break\n else:\n platform = None\n for browser, regex in self.browsers: # noqa: B007\n match = regex.search(user_agent)\n if match is not None:\n version = match.group(1)\n break\n else:\n browser = version = None\n match = self._language_re.search(user_agent)\n if match is not None:\n language = match.group(1) or match.group(2)\n else:\n language = None\n return platform, browser, version, language\n\n\nclass UserAgent(object):\n \"\"\"Represents a user agent. Pass it a WSGI environment or a user agent\n string and you can inspect some of the details from the user agent\n string via the attributes. The following attributes exist:\n\n .. attribute:: string\n\n the raw user agent string\n\n .. attribute:: platform\n\n the browser platform. ``None`` if not recognized.\n The following platforms are currently recognized:\n\n - `aix`\n - `amiga`\n - `android`\n - `blackberry`\n - `bsd`\n - `chromeos`\n - `dragonflybsd`\n - `freebsd`\n - `hpux`\n - `ipad`\n - `iphone`\n - `irix`\n - `linux`\n - `macos`\n - `netbsd`\n - `openbsd`\n - `sco`\n - `solaris`\n - `symbian`\n - `wii`\n - `windows`\n\n .. attribute:: browser\n\n the name of the browser. 
``None`` if not recognized.\n The following browsers are currently recognized:\n\n - `aol` *\n - `ask` *\n - `baidu` *\n - `bing` *\n - `camino`\n - `chrome`\n - `edge`\n - `firefox`\n - `galeon`\n - `google` *\n - `kmeleon`\n - `konqueror`\n - `links`\n - `lynx`\n - `mozilla`\n - `msie`\n - `msn`\n - `netscape`\n - `opera`\n - `safari`\n - `seamonkey`\n - `webkit`\n - `yahoo` *\n\n (Browsers marked with a star (``*``) are crawlers.)\n\n .. attribute:: version\n\n the version of the browser. ``None`` if not recognized.\n\n .. attribute:: language\n\n the language of the browser. ``None`` if not recognized.\n \"\"\"\n\n _parser = UserAgentParser()\n\n def __init__(self, environ_or_string):\n if isinstance(environ_or_string, dict):\n environ_or_string = environ_or_string.get(\"HTTP_USER_AGENT\", \"\")\n self.string = environ_or_string\n self.platform, self.browser, self.version, self.language = self._parser(\n environ_or_string\n )\n\n def to_header(self):\n return self.string\n\n def __str__(self):\n return self.string\n\n def __nonzero__(self):\n return bool(self.browser)\n\n __bool__ = __nonzero__\n\n def __repr__(self):\n return \"<%s %r/%s>\" % (self.__class__.__name__, self.browser, self.version)\n", "path": "src/werkzeug/useragents.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n werkzeug.useragents\n ~~~~~~~~~~~~~~~~~~~\n\n This module provides a helper to inspect user agent strings. This module\n is far from complete but should work for most of the currently available\n browsers.\n\n\n :copyright: 2007 Pallets\n :license: BSD-3-Clause\n\"\"\"\nimport re\n\n\nclass UserAgentParser(object):\n \"\"\"A simple user agent parser. Used by the `UserAgent`.\"\"\"\n\n platforms = (\n (\" cros \", \"chromeos\"),\n (\"iphone|ios\", \"iphone\"),\n (\"ipad\", \"ipad\"),\n (r\"darwin|mac|os\\s*x\", \"macos\"),\n (\"win\", \"windows\"),\n (r\"android\", \"android\"),\n (\"netbsd\", \"netbsd\"),\n (\"openbsd\", \"openbsd\"),\n (\"freebsd\", \"freebsd\"),\n (\"dragonfly\", \"dragonflybsd\"),\n (\"(sun|i86)os\", \"solaris\"),\n (r\"x11|lin(\\b|ux)?\", \"linux\"),\n (r\"nintendo\\s+wii\", \"wii\"),\n (\"irix\", \"irix\"),\n (\"hp-?ux\", \"hpux\"),\n (\"aix\", \"aix\"),\n (\"sco|unix_sv\", \"sco\"),\n (\"bsd\", \"bsd\"),\n (\"amiga\", \"amiga\"),\n (\"blackberry|playbook\", \"blackberry\"),\n (\"symbian\", \"symbian\"),\n )\n browsers = (\n (\"googlebot\", \"google\"),\n (\"msnbot\", \"msn\"),\n (\"yahoo\", \"yahoo\"),\n (\"ask jeeves\", \"ask\"),\n (r\"aol|america\\s+online\\s+browser\", \"aol\"),\n (r\"opera|opr\", \"opera\"),\n (\"edge|edg\", \"edge\"),\n (\"chrome|crios\", \"chrome\"),\n (\"seamonkey\", \"seamonkey\"),\n (\"firefox|firebird|phoenix|iceweasel\", \"firefox\"),\n (\"galeon\", \"galeon\"),\n (\"safari|version\", \"safari\"),\n (\"webkit\", \"webkit\"),\n (\"camino\", \"camino\"),\n (\"konqueror\", \"konqueror\"),\n (\"k-meleon\", \"kmeleon\"),\n (\"netscape\", \"netscape\"),\n (r\"msie|microsoft\\s+internet\\s+explorer|trident/.+? 
rv:\", \"msie\"),\n (\"lynx\", \"lynx\"),\n (\"links\", \"links\"),\n (\"Baiduspider\", \"baidu\"),\n (\"bingbot\", \"bing\"),\n (\"mozilla\", \"mozilla\"),\n )\n\n _browser_version_re = r\"(?:%s)[/\\sa-z(]*(\\d+[.\\da-z]+)?\"\n _language_re = re.compile(\n r\"(?:;\\s*|\\s+)(\\b\\w{2}\\b(?:-\\b\\w{2}\\b)?)\\s*;|\"\n r\"(?:\\(|\\[|;)\\s*(\\b\\w{2}\\b(?:-\\b\\w{2}\\b)?)\\s*(?:\\]|\\)|;)\"\n )\n\n def __init__(self):\n self.platforms = [(b, re.compile(a, re.I)) for a, b in self.platforms]\n self.browsers = [\n (b, re.compile(self._browser_version_re % a, re.I))\n for a, b in self.browsers\n ]\n\n def __call__(self, user_agent):\n for platform, regex in self.platforms: # noqa: B007\n match = regex.search(user_agent)\n if match is not None:\n break\n else:\n platform = None\n for browser, regex in self.browsers: # noqa: B007\n match = regex.search(user_agent)\n if match is not None:\n version = match.group(1)\n break\n else:\n browser = version = None\n match = self._language_re.search(user_agent)\n if match is not None:\n language = match.group(1) or match.group(2)\n else:\n language = None\n return platform, browser, version, language\n\n\nclass UserAgent(object):\n \"\"\"Represents a user agent. Pass it a WSGI environment or a user agent\n string and you can inspect some of the details from the user agent\n string via the attributes. The following attributes exist:\n\n .. attribute:: string\n\n the raw user agent string\n\n .. attribute:: platform\n\n the browser platform. ``None`` if not recognized.\n The following platforms are currently recognized:\n\n - `aix`\n - `amiga`\n - `android`\n - `blackberry`\n - `bsd`\n - `chromeos`\n - `dragonflybsd`\n - `freebsd`\n - `hpux`\n - `ipad`\n - `iphone`\n - `irix`\n - `linux`\n - `macos`\n - `netbsd`\n - `openbsd`\n - `sco`\n - `solaris`\n - `symbian`\n - `wii`\n - `windows`\n\n .. attribute:: browser\n\n the name of the browser. ``None`` if not recognized.\n The following browsers are currently recognized:\n\n - `aol` *\n - `ask` *\n - `baidu` *\n - `bing` *\n - `camino`\n - `chrome`\n - `edge`\n - `firefox`\n - `galeon`\n - `google` *\n - `kmeleon`\n - `konqueror`\n - `links`\n - `lynx`\n - `mozilla`\n - `msie`\n - `msn`\n - `netscape`\n - `opera`\n - `safari`\n - `seamonkey`\n - `webkit`\n - `yahoo` *\n\n (Browsers marked with a star (``*``) are crawlers.)\n\n .. attribute:: version\n\n the version of the browser. ``None`` if not recognized.\n\n .. attribute:: language\n\n the language of the browser. ``None`` if not recognized.\n \"\"\"\n\n _parser = UserAgentParser()\n\n def __init__(self, environ_or_string):\n if isinstance(environ_or_string, dict):\n environ_or_string = environ_or_string.get(\"HTTP_USER_AGENT\", \"\")\n self.string = environ_or_string\n self.platform, self.browser, self.version, self.language = self._parser(\n environ_or_string\n )\n\n def to_header(self):\n return self.string\n\n def __str__(self):\n return self.string\n\n def __nonzero__(self):\n return bool(self.browser)\n\n __bool__ = __nonzero__\n\n def __repr__(self):\n return \"<%s %r/%s>\" % (self.__class__.__name__, self.browser, self.version)\n", "path": "src/werkzeug/useragents.py"}]} | 3,071 | 151 |
gh_patches_debug_35290 | rasdani/github-patches | git_diff | docarray__docarray-979 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug(v2): relative file paths in url types
Passing relative file paths gives a validation error:
```python
from docarray import Image
url = 'Test/05978.jpg'
img = Image(url=url)
```
```text
Test/05978.jpg
Traceback (most recent call last):
File "/home/johannes/.config/JetBrains/PyCharmCE2022.3/scratches/scratch_116.py", line 12, in <module>
img = Image(url=url)
File "pydantic/main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Image
url
unsupported operand type(s) for +: 'NoneType' and 'str' (type=type_error)
```
```
--- END ISSUE ---
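The `type_error` in the traceback is raised while pydantic rebuilds the URL from its parsed parts: with no scheme present, the inherited URL builder apparently concatenates `None` with a string constant, which is the `NoneType` + `str` failure shown. Once a fix that tolerates a missing scheme is in place (see the patch at the end of this record), both of these should validate — a hypothetical check, not part of the original report:

```python
# Sketch only: a bare relative path and a full URL should both be accepted.
from pydantic import parse_obj_as
from docarray.typing.url.any_url import AnyUrl

print(parse_obj_as(AnyUrl, 'Test/05978.jpg'))       # relative local path, no scheme
print(parse_obj_as(AnyUrl, 'https://example.com'))  # ordinary URL keeps its scheme
```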
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docarray/typing/url/any_url.py`
Content:
```
1 from typing import TYPE_CHECKING, Type, TypeVar
2
3 from pydantic import AnyUrl as BaseAnyUrl
4 from pydantic import errors, parse_obj_as
5
6 from docarray.typing.abstract_type import AbstractType
7
8 if TYPE_CHECKING:
9 from pydantic.networks import Parts
10
11 from docarray.proto import NodeProto
12
13 T = TypeVar('T', bound='AnyUrl')
14
15
16 class AnyUrl(BaseAnyUrl, AbstractType):
17 host_required = (
18 False # turn off host requirement to allow passing of local paths as URL
19 )
20
21 def _to_node_protobuf(self) -> 'NodeProto':
22 """Convert Document into a NodeProto protobuf message. This function should
23 be called when the Document is nested into another Document that need to
24 be converted into a protobuf
25
26 :return: the nested item protobuf message
27 """
28 from docarray.proto import NodeProto
29
30 return NodeProto(any_url=str(self))
31
32 @classmethod
33 def validate_parts(cls, parts: 'Parts', validate_port: bool = True) -> 'Parts':
34 """
35 A method used to validate parts of a URL.
36 Our URLs should be able to function both in local and remote settings.
37 Therefore, we allow missing `scheme`, making it possible to pass a file path.
38 """
39 scheme = parts['scheme']
40 if scheme is None:
41 pass # allow missing scheme, unlike pydantic
42
43 elif cls.allowed_schemes and scheme.lower() not in cls.allowed_schemes:
44 raise errors.UrlSchemePermittedError(set(cls.allowed_schemes))
45
46 if validate_port:
47 cls._validate_port(parts['port'])
48
49 user = parts['user']
50 if cls.user_required and user is None:
51 raise errors.UrlUserInfoError()
52
53 return parts
54
55 @classmethod
56 def from_protobuf(cls: Type[T], pb_msg: 'str') -> T:
57 """
58 read url from a proto msg
59 :param pb_msg:
60 :return: url
61 """
62 return parse_obj_as(cls, pb_msg)
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docarray/typing/url/any_url.py b/docarray/typing/url/any_url.py
--- a/docarray/typing/url/any_url.py
+++ b/docarray/typing/url/any_url.py
@@ -1,4 +1,4 @@
-from typing import TYPE_CHECKING, Type, TypeVar
+from typing import TYPE_CHECKING, Optional, Type, TypeVar
from pydantic import AnyUrl as BaseAnyUrl
from pydantic import errors, parse_obj_as
@@ -34,11 +34,14 @@
"""
A method used to validate parts of a URL.
Our URLs should be able to function both in local and remote settings.
- Therefore, we allow missing `scheme`, making it possible to pass a file path.
+ Therefore, we allow missing `scheme`, making it possible to pass a file
+ path without prefix.
+ If `scheme` is missing, we assume it is a local file path.
"""
scheme = parts['scheme']
if scheme is None:
- pass # allow missing scheme, unlike pydantic
+ # allow missing scheme, unlike pydantic
+ pass
elif cls.allowed_schemes and scheme.lower() not in cls.allowed_schemes:
raise errors.UrlSchemePermittedError(set(cls.allowed_schemes))
@@ -52,6 +55,44 @@
return parts
+ @classmethod
+ def build(
+ cls,
+ *,
+ scheme: str,
+ user: Optional[str] = None,
+ password: Optional[str] = None,
+ host: str,
+ port: Optional[str] = None,
+ path: Optional[str] = None,
+ query: Optional[str] = None,
+ fragment: Optional[str] = None,
+ **_kwargs: str,
+ ) -> str:
+ """
+ Build a URL from its parts.
+ The only difference from the pydantic implementation is that we allow
+ missing `scheme`, making it possible to pass a file path without prefix.
+ """
+
+ # allow missing scheme, unlike pydantic
+ scheme_ = scheme if scheme is not None else ''
+ url = super().build(
+ scheme=scheme_,
+ user=user,
+ password=password,
+ host=host,
+ port=port,
+ path=path,
+ query=query,
+ fragment=fragment,
+ **_kwargs,
+ )
+ if scheme is None and url.startswith('://'):
+ # remove the `://` prefix, since scheme is missing
+ url = url[3:]
+ return url
+
@classmethod
def from_protobuf(cls: Type[T], pb_msg: 'str') -> T:
"""
| {"golden_diff": "diff --git a/docarray/typing/url/any_url.py b/docarray/typing/url/any_url.py\n--- a/docarray/typing/url/any_url.py\n+++ b/docarray/typing/url/any_url.py\n@@ -1,4 +1,4 @@\n-from typing import TYPE_CHECKING, Type, TypeVar\n+from typing import TYPE_CHECKING, Optional, Type, TypeVar\n \n from pydantic import AnyUrl as BaseAnyUrl\n from pydantic import errors, parse_obj_as\n@@ -34,11 +34,14 @@\n \"\"\"\n A method used to validate parts of a URL.\n Our URLs should be able to function both in local and remote settings.\n- Therefore, we allow missing `scheme`, making it possible to pass a file path.\n+ Therefore, we allow missing `scheme`, making it possible to pass a file\n+ path without prefix.\n+ If `scheme` is missing, we assume it is a local file path.\n \"\"\"\n scheme = parts['scheme']\n if scheme is None:\n- pass # allow missing scheme, unlike pydantic\n+ # allow missing scheme, unlike pydantic\n+ pass\n \n elif cls.allowed_schemes and scheme.lower() not in cls.allowed_schemes:\n raise errors.UrlSchemePermittedError(set(cls.allowed_schemes))\n@@ -52,6 +55,44 @@\n \n return parts\n \n+ @classmethod\n+ def build(\n+ cls,\n+ *,\n+ scheme: str,\n+ user: Optional[str] = None,\n+ password: Optional[str] = None,\n+ host: str,\n+ port: Optional[str] = None,\n+ path: Optional[str] = None,\n+ query: Optional[str] = None,\n+ fragment: Optional[str] = None,\n+ **_kwargs: str,\n+ ) -> str:\n+ \"\"\"\n+ Build a URL from its parts.\n+ The only difference from the pydantic implementation is that we allow\n+ missing `scheme`, making it possible to pass a file path without prefix.\n+ \"\"\"\n+\n+ # allow missing scheme, unlike pydantic\n+ scheme_ = scheme if scheme is not None else ''\n+ url = super().build(\n+ scheme=scheme_,\n+ user=user,\n+ password=password,\n+ host=host,\n+ port=port,\n+ path=path,\n+ query=query,\n+ fragment=fragment,\n+ **_kwargs,\n+ )\n+ if scheme is None and url.startswith('://'):\n+ # remove the `://` prefix, since scheme is missing\n+ url = url[3:]\n+ return url\n+\n @classmethod\n def from_protobuf(cls: Type[T], pb_msg: 'str') -> T:\n \"\"\"\n", "issue": "bug(v2): relative file paths in url types\nPassing relative file paths gives a validation error:\n\n```python\nfrom docarray import Image\n\nurl = 'Test/05978.jpg'\nimg = Image(url=url)\n```\n\n```text\nTest/05978.jpg\nTraceback (most recent call last):\n File \"/home/johannes/.config/JetBrains/PyCharmCE2022.3/scratches/scratch_116.py\", line 12, in <module>\n img = Image(url=url)\n File \"pydantic/main.py\", line 342, in pydantic.main.BaseModel.__init__\npydantic.error_wrappers.ValidationError: 1 validation error for Image\nurl\n unsupported operand type(s) for +: 'NoneType' and 'str' (type=type_error)\n```\n\n\n", "before_files": [{"content": "from typing import TYPE_CHECKING, Type, TypeVar\n\nfrom pydantic import AnyUrl as BaseAnyUrl\nfrom pydantic import errors, parse_obj_as\n\nfrom docarray.typing.abstract_type import AbstractType\n\nif TYPE_CHECKING:\n from pydantic.networks import Parts\n\n from docarray.proto import NodeProto\n\nT = TypeVar('T', bound='AnyUrl')\n\n\nclass AnyUrl(BaseAnyUrl, AbstractType):\n host_required = (\n False # turn off host requirement to allow passing of local paths as URL\n )\n\n def _to_node_protobuf(self) -> 'NodeProto':\n \"\"\"Convert Document into a NodeProto protobuf message. 
This function should\n be called when the Document is nested into another Document that need to\n be converted into a protobuf\n\n :return: the nested item protobuf message\n \"\"\"\n from docarray.proto import NodeProto\n\n return NodeProto(any_url=str(self))\n\n @classmethod\n def validate_parts(cls, parts: 'Parts', validate_port: bool = True) -> 'Parts':\n \"\"\"\n A method used to validate parts of a URL.\n Our URLs should be able to function both in local and remote settings.\n Therefore, we allow missing `scheme`, making it possible to pass a file path.\n \"\"\"\n scheme = parts['scheme']\n if scheme is None:\n pass # allow missing scheme, unlike pydantic\n\n elif cls.allowed_schemes and scheme.lower() not in cls.allowed_schemes:\n raise errors.UrlSchemePermittedError(set(cls.allowed_schemes))\n\n if validate_port:\n cls._validate_port(parts['port'])\n\n user = parts['user']\n if cls.user_required and user is None:\n raise errors.UrlUserInfoError()\n\n return parts\n\n @classmethod\n def from_protobuf(cls: Type[T], pb_msg: 'str') -> T:\n \"\"\"\n read url from a proto msg\n :param pb_msg:\n :return: url\n \"\"\"\n return parse_obj_as(cls, pb_msg)\n", "path": "docarray/typing/url/any_url.py"}], "after_files": [{"content": "from typing import TYPE_CHECKING, Optional, Type, TypeVar\n\nfrom pydantic import AnyUrl as BaseAnyUrl\nfrom pydantic import errors, parse_obj_as\n\nfrom docarray.typing.abstract_type import AbstractType\n\nif TYPE_CHECKING:\n from pydantic.networks import Parts\n\n from docarray.proto import NodeProto\n\nT = TypeVar('T', bound='AnyUrl')\n\n\nclass AnyUrl(BaseAnyUrl, AbstractType):\n host_required = (\n False # turn off host requirement to allow passing of local paths as URL\n )\n\n def _to_node_protobuf(self) -> 'NodeProto':\n \"\"\"Convert Document into a NodeProto protobuf message. 
This function should\n be called when the Document is nested into another Document that need to\n be converted into a protobuf\n\n :return: the nested item protobuf message\n \"\"\"\n from docarray.proto import NodeProto\n\n return NodeProto(any_url=str(self))\n\n @classmethod\n def validate_parts(cls, parts: 'Parts', validate_port: bool = True) -> 'Parts':\n \"\"\"\n A method used to validate parts of a URL.\n Our URLs should be able to function both in local and remote settings.\n Therefore, we allow missing `scheme`, making it possible to pass a file\n path without prefix.\n If `scheme` is missing, we assume it is a local file path.\n \"\"\"\n scheme = parts['scheme']\n if scheme is None:\n # allow missing scheme, unlike pydantic\n pass\n\n elif cls.allowed_schemes and scheme.lower() not in cls.allowed_schemes:\n raise errors.UrlSchemePermittedError(set(cls.allowed_schemes))\n\n if validate_port:\n cls._validate_port(parts['port'])\n\n user = parts['user']\n if cls.user_required and user is None:\n raise errors.UrlUserInfoError()\n\n return parts\n\n @classmethod\n def build(\n cls,\n *,\n scheme: str,\n user: Optional[str] = None,\n password: Optional[str] = None,\n host: str,\n port: Optional[str] = None,\n path: Optional[str] = None,\n query: Optional[str] = None,\n fragment: Optional[str] = None,\n **_kwargs: str,\n ) -> str:\n \"\"\"\n Build a URL from its parts.\n The only difference from the pydantic implementation is that we allow\n missing `scheme`, making it possible to pass a file path without prefix.\n \"\"\"\n\n # allow missing scheme, unlike pydantic\n scheme_ = scheme if scheme is not None else ''\n url = super().build(\n scheme=scheme_,\n user=user,\n password=password,\n host=host,\n port=port,\n path=path,\n query=query,\n fragment=fragment,\n **_kwargs,\n )\n if scheme is None and url.startswith('://'):\n # remove the `://` prefix, since scheme is missing\n url = url[3:]\n return url\n\n @classmethod\n def from_protobuf(cls: Type[T], pb_msg: 'str') -> T:\n \"\"\"\n read url from a proto msg\n :param pb_msg:\n :return: url\n \"\"\"\n return parse_obj_as(cls, pb_msg)\n", "path": "docarray/typing/url/any_url.py"}]} | 1,010 | 609 |
gh_patches_debug_8271 | rasdani/github-patches | git_diff | Cloud-CV__EvalAI-1310 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Increase the data upload limit.
## Current Behaviour
By default django supports only 2.5 MB of the data to be uploaded on the web app. Refer [here](https://docs.djangoproject.com/en/1.10/ref/settings/#data-upload-max-memory-size) for more info.
## Effects
Due to the low upload limit the file in challenge creation using zip isn't being uploaded on the app as the size exceeds.
## Expected Behaviour
The upload limit must be increased to 10MB.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `settings/common.py`
Content:
```
1 """
2 Django settings for evalai project.
3
4 Generated by 'django-admin startproject' using Django 1.10.2.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/1.10/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/1.10/ref/settings/
11 """
12
13 import datetime
14 import os
15 import sys
16
17 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
18 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
19 APPS_DIR = os.path.join(BASE_DIR, 'apps')
20
21 sys.path.append(APPS_DIR)
22
23 # Quick-start development settings - unsuitable for production
24 # See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/
25
26 # SECURITY WARNING: keep the secret key used in production secret!
27 SECRET_KEY = os.environ.get('SECRET_KEY', 'random_secret_key')
28
29 # SECURITY WARNING: don't run with debug turned on in production!
30 DEBUG = True
31
32 ALLOWED_HOSTS = []
33
34
35 # Application definition
36
37 DEFAULT_APPS = [
38 'django.contrib.admin',
39 'django.contrib.auth',
40 'django.contrib.contenttypes',
41 'django.contrib.sessions',
42 'django.contrib.messages',
43 'django.contrib.staticfiles',
44 'django.contrib.sites',
45 ]
46
47 OUR_APPS = [
48 'accounts',
49 'analytics',
50 'base',
51 'challenges',
52 'hosts',
53 'jobs',
54 'participants',
55 'web',
56 ]
57
58 THIRD_PARTY_APPS = [
59 'allauth',
60 'allauth.account',
61 'corsheaders',
62 'import_export',
63 'rest_auth',
64 'rest_auth.registration',
65 'rest_framework.authtoken',
66 'rest_framework',
67 'rest_framework_docs',
68 'rest_framework_expiring_authtoken',
69 ]
70
71 INSTALLED_APPS = DEFAULT_APPS + OUR_APPS + THIRD_PARTY_APPS
72
73 MIDDLEWARE = [
74 'corsheaders.middleware.CorsMiddleware',
75 'django.middleware.security.SecurityMiddleware',
76 'django.contrib.sessions.middleware.SessionMiddleware',
77 'django.middleware.common.CommonMiddleware',
78 'django.middleware.csrf.CsrfViewMiddleware',
79 'django.contrib.auth.middleware.AuthenticationMiddleware',
80 'django.contrib.messages.middleware.MessageMiddleware',
81 'django.middleware.clickjacking.XFrameOptionsMiddleware',
82 ]
83
84 ROOT_URLCONF = 'evalai.urls'
85
86
87 TEMPLATES = [
88 {
89 'BACKEND': 'django.template.backends.django.DjangoTemplates',
90 'DIRS': [],
91 'APP_DIRS': True,
92 'OPTIONS': {
93 'context_processors': [
94 'django.template.context_processors.debug',
95 'django.template.context_processors.request',
96 'django.contrib.auth.context_processors.auth',
97 'django.contrib.messages.context_processors.messages',
98 ],
99 },
100 },
101 ]
102
103 WSGI_APPLICATION = 'evalai.wsgi.application'
104
105
106 # Password validation
107 # https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators
108
109 AUTH_PASSWORD_VALIDATORS = [
110 {
111 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', # noqa
112 },
113 {
114 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', # noqa
115 },
116 {
117 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', # noqa
118 },
119 {
120 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', # noqa
121 },
122 ]
123
124
125 # Internationalization
126 # https://docs.djangoproject.com/en/1.10/topics/i18n/
127
128 LANGUAGE_CODE = 'en-us'
129
130 TIME_ZONE = 'UTC'
131
132 USE_I18N = True
133
134 USE_L10N = True
135
136 USE_TZ = True
137
138 # Static files (CSS, JavaScript, Images)
139 # https://docs.djangoproject.com/en/1.10/howto/static-files/
140
141 STATIC_URL = '/static/'
142 STATIC_ROOT = os.path.join(BASE_DIR, 'static')
143 MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
144 MEDIA_URL = "/media/"
145
146 SITE_ID = 1
147
148 REST_FRAMEWORK = {
149 'DEFAULT_PAGINATION_CLASS': (
150 'rest_framework.pagination.LimitOffsetPagination'),
151 'PAGE_SIZE': 10,
152 'DEFAULT_PERMISSION_CLASSES': [
153 'rest_framework.permissions.IsAuthenticatedOrReadOnly'
154 ],
155 'DEFAULT_AUTHENTICATION_CLASSES': [
156 'rest_framework_expiring_authtoken.authentication.ExpiringTokenAuthentication',
157 ],
158 'TEST_REQUEST_DEFAULT_FORMAT': 'json',
159 'DEFAULT_THROTTLE_CLASSES': (
160 'rest_framework.throttling.AnonRateThrottle',
161 'rest_framework.throttling.UserRateThrottle'
162 ),
163 'DEFAULT_THROTTLE_RATES': {
164 'anon': '100/minute',
165 'user': '100/minute'
166 },
167 'DEFAULT_RENDERER_CLASSES': (
168 'rest_framework.renderers.JSONRenderer',
169 )
170 }
171
172 # ALLAUTH SETTINGS
173 ACCOUNT_EMAIL_REQUIRED = True
174 OLD_PASSWORD_FIELD_ENABLED = True
175 ACCOUNT_CONFIRM_EMAIL_ON_GET = True
176 ACCOUNT_EMAIL_CONFIRMATION_ANONYMOUS_REDIRECT_URL = '/api/auth/email-confirmed/'
177 ACCOUNT_EMAIL_CONFIRMATION_AUTHENTICATED_REDIRECT_URL = '/api/auth/email-confirmed/'
178
179 AUTHENTICATION_BACKENDS = (
180 # Needed to login by username in Django admin, regardless of `allauth`
181 'django.contrib.auth.backends.ModelBackend',
182 # `allauth` specific authentication methods, such as login by e-mail
183 'allauth.account.auth_backends.AuthenticationBackend',
184 )
185
186 # CORS Settings
187 CORS_ORIGIN_ALLOW_ALL = True
188
189 # REST Framework Expiring Tokens Configuration
190 EXPIRING_TOKEN_LIFESPAN = datetime.timedelta(days=7)
191
192 # Logging
193 LOGGING = {
194 'version': 1,
195 'disable_existing_loggers': False,
196 'root': {
197 'level': 'INFO',
198 'handlers': ['console'],
199 },
200 'filters': {
201 'require_debug_false': {
202 '()': 'django.utils.log.RequireDebugFalse',
203 },
204 'require_debug_true': {
205 '()': 'django.utils.log.RequireDebugTrue',
206 }
207 },
208 'formatters': {
209 'simple': {
210 'format': '[%(asctime)s] %(levelname)s %(message)s',
211 'datefmt': '%Y-%m-%d %H:%M:%S'
212 },
213 'verbose': {
214 'format': '[%(asctime)s] %(levelname)s %(module)s %(message)s',
215 'datefmt': '%Y-%m-%d %H:%M:%S'
216 }
217 },
218 'handlers': {
219 'console': {
220 'level': 'INFO',
221 'filters': ['require_debug_true'],
222 'class': 'logging.StreamHandler',
223 'formatter': 'simple'
224 },
225 'logfile': {
226 'level': 'DEBUG',
227 'class': 'logging.handlers.RotatingFileHandler',
228 'filename': "/tmp/logfile",
229 'maxBytes': 50000,
230 'backupCount': 10,
231 'formatter': 'verbose'
232 },
233 'mail_admins': {
234 'level': 'ERROR',
235 'class': 'django.utils.log.AdminEmailHandler',
236 'filters': ['require_debug_false'],
237 }
238 },
239 'loggers': {
240 'django': {
241 'handlers': ['console'],
242 'propagate': False,
243 },
244 'django.request': {
245 'handlers': ['mail_admins'],
246 'level': 'ERROR',
247 'propagate': False,
248 },
249 'django.security': {
250 'handlers': ['mail_admins'],
251 'level': 'ERROR',
252 'propagate': False,
253 },
254 'django.db.backends': {
255 'handlers': ['mail_admins'],
256 'level': 'ERROR',
257 'propagate': False,
258 }
259 }
260 }
261
262 CACHES = {
263 'default': {
264 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
265 }
266 }
267
268 RABBITMQ_PARAMETERS = {
269 'HOST': os.environ.get("RABBITMQ_HOST", 'localhost'),
270 'EVALAI_EXCHANGE': {
271 'NAME': 'evalai_submissions',
272 'TYPE': 'topic',
273 },
274 'SUBMISSION_QUEUE': 'submission_task_queue',
275 }
276
277 # To make usermame field read-only, customized serializer is defined.
278 REST_AUTH_SERIALIZERS = {
279 'USER_DETAILS_SERIALIZER': 'accounts.serializers.ProfileSerializer',
280 }
281
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/settings/common.py b/settings/common.py
--- a/settings/common.py
+++ b/settings/common.py
@@ -274,6 +274,11 @@
'SUBMISSION_QUEUE': 'submission_task_queue',
}
+# The maximum size in bytes for request body
+# https://docs.djangoproject.com/en/1.10/ref/settings/#data-upload-max-memory-size
+FILE_UPLOAD_MAX_MEMORY_SIZE = 524288000 # 500 MB
+DATA_UPLOAD_MAX_MEMORY_SIZE = 524288000 # 500 MB
+
# To make usermame field read-only, customized serializer is defined.
REST_AUTH_SERIALIZERS = {
'USER_DETAILS_SERIALIZER': 'accounts.serializers.ProfileSerializer',
| {"golden_diff": "diff --git a/settings/common.py b/settings/common.py\n--- a/settings/common.py\n+++ b/settings/common.py\n@@ -274,6 +274,11 @@\n 'SUBMISSION_QUEUE': 'submission_task_queue',\n }\n \n+# The maximum size in bytes for request body\n+# https://docs.djangoproject.com/en/1.10/ref/settings/#data-upload-max-memory-size\n+FILE_UPLOAD_MAX_MEMORY_SIZE = 524288000 # 500 MB\n+DATA_UPLOAD_MAX_MEMORY_SIZE = 524288000 # 500 MB\n+\n # To make usermame field read-only, customized serializer is defined.\n REST_AUTH_SERIALIZERS = {\n 'USER_DETAILS_SERIALIZER': 'accounts.serializers.ProfileSerializer',\n", "issue": "Increase the data upload limit.\n## Current Behaviour\r\nBy default django supports only 2.5 MB of the data to be uploaded on the web app. Refer [here](https://docs.djangoproject.com/en/1.10/ref/settings/#data-upload-max-memory-size) for more info.\r\n\r\n## Effects\r\nDue to the low upload limit the file in challenge creation using zip isn't being uploaded on the app as the size exceeds.\r\n\r\n## Expected Behaviour\r\nThe upload limit must be increased to 10MB.\n", "before_files": [{"content": "\"\"\"\nDjango settings for evalai project.\n\nGenerated by 'django-admin startproject' using Django 1.10.2.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.10/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.10/ref/settings/\n\"\"\"\n\nimport datetime\nimport os\nimport sys\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nAPPS_DIR = os.path.join(BASE_DIR, 'apps')\n\nsys.path.append(APPS_DIR)\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.environ.get('SECRET_KEY', 'random_secret_key')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = []\n\n\n# Application definition\n\nDEFAULT_APPS = [\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django.contrib.sites',\n]\n\nOUR_APPS = [\n 'accounts',\n 'analytics',\n 'base',\n 'challenges',\n 'hosts',\n 'jobs',\n 'participants',\n 'web',\n]\n\nTHIRD_PARTY_APPS = [\n 'allauth',\n 'allauth.account',\n 'corsheaders',\n 'import_export',\n 'rest_auth',\n 'rest_auth.registration',\n 'rest_framework.authtoken',\n 'rest_framework',\n 'rest_framework_docs',\n 'rest_framework_expiring_authtoken',\n]\n\nINSTALLED_APPS = DEFAULT_APPS + OUR_APPS + THIRD_PARTY_APPS\n\nMIDDLEWARE = [\n 'corsheaders.middleware.CorsMiddleware',\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'evalai.urls'\n\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 
'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'evalai.wsgi.application'\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', # noqa\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.10/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.10/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')\nMEDIA_ROOT = os.path.join(BASE_DIR, 'media')\nMEDIA_URL = \"/media/\"\n\nSITE_ID = 1\n\nREST_FRAMEWORK = {\n 'DEFAULT_PAGINATION_CLASS': (\n 'rest_framework.pagination.LimitOffsetPagination'),\n 'PAGE_SIZE': 10,\n 'DEFAULT_PERMISSION_CLASSES': [\n 'rest_framework.permissions.IsAuthenticatedOrReadOnly'\n ],\n 'DEFAULT_AUTHENTICATION_CLASSES': [\n 'rest_framework_expiring_authtoken.authentication.ExpiringTokenAuthentication',\n ],\n 'TEST_REQUEST_DEFAULT_FORMAT': 'json',\n 'DEFAULT_THROTTLE_CLASSES': (\n 'rest_framework.throttling.AnonRateThrottle',\n 'rest_framework.throttling.UserRateThrottle'\n ),\n 'DEFAULT_THROTTLE_RATES': {\n 'anon': '100/minute',\n 'user': '100/minute'\n },\n 'DEFAULT_RENDERER_CLASSES': (\n 'rest_framework.renderers.JSONRenderer',\n )\n}\n\n# ALLAUTH SETTINGS\nACCOUNT_EMAIL_REQUIRED = True\nOLD_PASSWORD_FIELD_ENABLED = True\nACCOUNT_CONFIRM_EMAIL_ON_GET = True\nACCOUNT_EMAIL_CONFIRMATION_ANONYMOUS_REDIRECT_URL = '/api/auth/email-confirmed/'\nACCOUNT_EMAIL_CONFIRMATION_AUTHENTICATED_REDIRECT_URL = '/api/auth/email-confirmed/'\n\nAUTHENTICATION_BACKENDS = (\n # Needed to login by username in Django admin, regardless of `allauth`\n 'django.contrib.auth.backends.ModelBackend',\n # `allauth` specific authentication methods, such as login by e-mail\n 'allauth.account.auth_backends.AuthenticationBackend',\n)\n\n# CORS Settings\nCORS_ORIGIN_ALLOW_ALL = True\n\n# REST Framework Expiring Tokens Configuration\nEXPIRING_TOKEN_LIFESPAN = datetime.timedelta(days=7)\n\n# Logging\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'root': {\n 'level': 'INFO',\n 'handlers': ['console'],\n },\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse',\n },\n 'require_debug_true': {\n '()': 'django.utils.log.RequireDebugTrue',\n }\n },\n 'formatters': {\n 'simple': {\n 'format': '[%(asctime)s] %(levelname)s %(message)s',\n 'datefmt': '%Y-%m-%d %H:%M:%S'\n },\n 'verbose': {\n 'format': '[%(asctime)s] %(levelname)s %(module)s %(message)s',\n 'datefmt': '%Y-%m-%d %H:%M:%S'\n }\n },\n 'handlers': {\n 'console': {\n 'level': 'INFO',\n 'filters': ['require_debug_true'],\n 'class': 'logging.StreamHandler',\n 'formatter': 'simple'\n },\n 'logfile': {\n 'level': 'DEBUG',\n 'class': 'logging.handlers.RotatingFileHandler',\n 'filename': \"/tmp/logfile\",\n 'maxBytes': 50000,\n 'backupCount': 10,\n 'formatter': 'verbose'\n },\n 'mail_admins': {\n 'level': 'ERROR',\n 'class': 'django.utils.log.AdminEmailHandler',\n 
'filters': ['require_debug_false'],\n }\n },\n 'loggers': {\n 'django': {\n 'handlers': ['console'],\n 'propagate': False,\n },\n 'django.request': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n 'django.security': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n 'django.db.backends': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n }\n }\n}\n\nCACHES = {\n 'default': {\n 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n }\n}\n\nRABBITMQ_PARAMETERS = {\n 'HOST': os.environ.get(\"RABBITMQ_HOST\", 'localhost'),\n 'EVALAI_EXCHANGE': {\n 'NAME': 'evalai_submissions',\n 'TYPE': 'topic',\n },\n 'SUBMISSION_QUEUE': 'submission_task_queue',\n}\n\n# To make usermame field read-only, customized serializer is defined.\nREST_AUTH_SERIALIZERS = {\n 'USER_DETAILS_SERIALIZER': 'accounts.serializers.ProfileSerializer',\n}\n", "path": "settings/common.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for evalai project.\n\nGenerated by 'django-admin startproject' using Django 1.10.2.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.10/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.10/ref/settings/\n\"\"\"\n\nimport datetime\nimport os\nimport sys\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nAPPS_DIR = os.path.join(BASE_DIR, 'apps')\n\nsys.path.append(APPS_DIR)\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.environ.get('SECRET_KEY', 'random_secret_key')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = []\n\n\n# Application definition\n\nDEFAULT_APPS = [\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django.contrib.sites',\n]\n\nOUR_APPS = [\n 'accounts',\n 'analytics',\n 'base',\n 'challenges',\n 'hosts',\n 'jobs',\n 'participants',\n 'web',\n]\n\nTHIRD_PARTY_APPS = [\n 'allauth',\n 'allauth.account',\n 'corsheaders',\n 'import_export',\n 'rest_auth',\n 'rest_auth.registration',\n 'rest_framework.authtoken',\n 'rest_framework',\n 'rest_framework_docs',\n 'rest_framework_expiring_authtoken',\n]\n\nINSTALLED_APPS = DEFAULT_APPS + OUR_APPS + THIRD_PARTY_APPS\n\nMIDDLEWARE = [\n 'corsheaders.middleware.CorsMiddleware',\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'evalai.urls'\n\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 
'evalai.wsgi.application'\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', # noqa\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.10/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.10/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')\nMEDIA_ROOT = os.path.join(BASE_DIR, 'media')\nMEDIA_URL = \"/media/\"\n\nSITE_ID = 1\n\nREST_FRAMEWORK = {\n 'DEFAULT_PAGINATION_CLASS': (\n 'rest_framework.pagination.LimitOffsetPagination'),\n 'PAGE_SIZE': 10,\n 'DEFAULT_PERMISSION_CLASSES': [\n 'rest_framework.permissions.IsAuthenticatedOrReadOnly'\n ],\n 'DEFAULT_AUTHENTICATION_CLASSES': [\n 'rest_framework_expiring_authtoken.authentication.ExpiringTokenAuthentication',\n ],\n 'TEST_REQUEST_DEFAULT_FORMAT': 'json',\n 'DEFAULT_THROTTLE_CLASSES': (\n 'rest_framework.throttling.AnonRateThrottle',\n 'rest_framework.throttling.UserRateThrottle'\n ),\n 'DEFAULT_THROTTLE_RATES': {\n 'anon': '100/minute',\n 'user': '100/minute'\n },\n 'DEFAULT_RENDERER_CLASSES': (\n 'rest_framework.renderers.JSONRenderer',\n )\n}\n\n# ALLAUTH SETTINGS\nACCOUNT_EMAIL_REQUIRED = True\nOLD_PASSWORD_FIELD_ENABLED = True\nACCOUNT_CONFIRM_EMAIL_ON_GET = True\nACCOUNT_EMAIL_CONFIRMATION_ANONYMOUS_REDIRECT_URL = '/api/auth/email-confirmed/'\nACCOUNT_EMAIL_CONFIRMATION_AUTHENTICATED_REDIRECT_URL = '/api/auth/email-confirmed/'\n\nAUTHENTICATION_BACKENDS = (\n # Needed to login by username in Django admin, regardless of `allauth`\n 'django.contrib.auth.backends.ModelBackend',\n # `allauth` specific authentication methods, such as login by e-mail\n 'allauth.account.auth_backends.AuthenticationBackend',\n)\n\n# CORS Settings\nCORS_ORIGIN_ALLOW_ALL = True\n\n# REST Framework Expiring Tokens Configuration\nEXPIRING_TOKEN_LIFESPAN = datetime.timedelta(days=7)\n\n# Logging\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'root': {\n 'level': 'INFO',\n 'handlers': ['console'],\n },\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse',\n },\n 'require_debug_true': {\n '()': 'django.utils.log.RequireDebugTrue',\n }\n },\n 'formatters': {\n 'simple': {\n 'format': '[%(asctime)s] %(levelname)s %(message)s',\n 'datefmt': '%Y-%m-%d %H:%M:%S'\n },\n 'verbose': {\n 'format': '[%(asctime)s] %(levelname)s %(module)s %(message)s',\n 'datefmt': '%Y-%m-%d %H:%M:%S'\n }\n },\n 'handlers': {\n 'console': {\n 'level': 'INFO',\n 'filters': ['require_debug_true'],\n 'class': 'logging.StreamHandler',\n 'formatter': 'simple'\n },\n 'logfile': {\n 'level': 'DEBUG',\n 'class': 'logging.handlers.RotatingFileHandler',\n 'filename': \"/tmp/logfile\",\n 'maxBytes': 50000,\n 'backupCount': 10,\n 'formatter': 'verbose'\n },\n 'mail_admins': {\n 'level': 'ERROR',\n 'class': 'django.utils.log.AdminEmailHandler',\n 'filters': ['require_debug_false'],\n }\n },\n 'loggers': {\n 'django': {\n 'handlers': ['console'],\n 'propagate': False,\n },\n 
'django.request': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n 'django.security': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n 'django.db.backends': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n }\n }\n}\n\nCACHES = {\n 'default': {\n 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n }\n}\n\nRABBITMQ_PARAMETERS = {\n 'HOST': os.environ.get(\"RABBITMQ_HOST\", 'localhost'),\n 'EVALAI_EXCHANGE': {\n 'NAME': 'evalai_submissions',\n 'TYPE': 'topic',\n },\n 'SUBMISSION_QUEUE': 'submission_task_queue',\n}\n\n# The maximum size in bytes for request body\n# https://docs.djangoproject.com/en/1.10/ref/settings/#data-upload-max-memory-size\nFILE_UPLOAD_MAX_MEMORY_SIZE = 524288000 # 500 MB\nDATA_UPLOAD_MAX_MEMORY_SIZE = 524288000 # 500 MB\n\n# To make usermame field read-only, customized serializer is defined.\nREST_AUTH_SERIALIZERS = {\n 'USER_DETAILS_SERIALIZER': 'accounts.serializers.ProfileSerializer',\n}\n", "path": "settings/common.py"}]} | 2,893 | 171 |
gh_patches_debug_19647 | rasdani/github-patches | git_diff | cobbler__cobbler-3581 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Backport] /usr/lib/PXELINUX/linux.c32 does not exist, can't create a symlink to it
### Original feature issue
- Issue: #3574
- PR: #3576
### Target release
- [x] release33
- [ ] release32
- [ ] release30
### Reason
Stabilization for Debian of Cobbler 3.3.x.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cobbler/actions/mkloaders.py`
Content:
```
1 """Cobbler action to create bootable Grub2 images.
2
3 This action calls grub2-mkimage for all bootloader formats configured in
4 Cobbler's settings. See man(1) grub2-mkimage for available formats.
5 """
6 import logging
7 import pathlib
8 import re
9 import subprocess
10 import sys
11 import typing
12
13 from cobbler import utils
14
15
16 # NOTE: does not warrant being a class, but all Cobbler actions use a class's ".run()" as the entrypoint
17 class MkLoaders:
18 """
19 Action to create bootloader images.
20 """
21
22 def __init__(self, api):
23 """
24 MkLoaders constructor.
25
26 :param api: CobblerAPI instance for accessing settings
27 """
28 self.logger = logging.getLogger()
29 self.bootloaders_dir = pathlib.Path(api.settings().bootloaders_dir)
30 # GRUB 2
31 self.grub2_mod_dir = pathlib.Path(api.settings().grub2_mod_dir)
32 self.boot_loaders_formats: typing.Dict = api.settings().bootloaders_formats
33 self.modules: typing.List = api.settings().bootloaders_modules
34 # Syslinux
35 self.syslinux_folder = pathlib.Path(api.settings().syslinux_dir)
36 self.syslinux_memdisk_folder = pathlib.Path(api.settings().syslinux_memdisk_folder)
37 self.syslinux_pxelinux_folder = pathlib.Path(api.settings().syslinux_pxelinux_folder)
38 # Shim
39 self.shim_glob = pathlib.Path(api.settings().bootloaders_shim_folder)
40 self.shim_regex = re.compile(api.settings().bootloaders_shim_file)
41 # iPXE
42 self.ipxe_folder = pathlib.Path(api.settings().bootloaders_ipxe_folder)
43
44 def run(self):
45 """
46 Run GrubImages action. If the files or executables for the bootloader is not available we bail out and skip the
47 creation after it is logged that this is not available.
48 """
49 self.create_directories()
50
51 self.make_shim()
52 self.make_ipxe()
53 self.make_syslinux()
54 self.make_grub()
55
56 def make_shim(self):
57 """
58 Create symlink of the shim bootloader in case it is available on the system.
59 """
60 # Check well-known locations
61 # Absolute paths are not supported BUT we can get around that: https://stackoverflow.com/a/51108375/4730773
62 parts = self.shim_glob.parts
63 start_at = 1 if self.shim_glob.is_absolute() else 0
64 bootloader_path_parts = pathlib.Path(*parts[start_at:])
65 results = sorted(pathlib.Path(self.shim_glob.root).glob(str(bootloader_path_parts)))
66 # If no match, then report and bail out.
67 if len(results) <= 0:
68 self.logger.info('Unable to find the folder which should be scanned for "shim.efi"! Bailing out of linking '
69 'the shim!')
70 return
71 # Now scan the folders with the regex
72 target_shim = None
73 for possible_folder in results:
74 for child in possible_folder.iterdir():
75 if self.shim_regex.search(str(child)):
76 target_shim = child.resolve()
77 break
78 # If no match is found report and return
79 if target_shim is None:
80 self.logger.info('Unable to find "shim.efi" file. Please adjust "bootloaders_shim_file" regex. Bailing out '
81 'of linking the shim!')
82 return
83 # Symlink the absolute target of the match
84 symlink(
85 target_shim,
86 self.bootloaders_dir.joinpath(pathlib.Path("grub/shim.efi")),
87 skip_existing=True
88 )
89
90 def make_ipxe(self):
91 """
92 Create symlink of the iPXE bootloader in case it is available on the system.
93 """
94 if not self.ipxe_folder.exists():
95 self.logger.info('ipxe directory did not exist. Please adjust the "bootloaders_ipxe_folder". Bailing out '
96 'of iPXE setup!')
97 return
98 symlink(
99 self.ipxe_folder.joinpath("undionly.kpxe"),
100 self.bootloaders_dir.joinpath(pathlib.Path("undionly.pxe")),
101 skip_existing=True
102 )
103
104 def make_syslinux(self):
105 """
106 Create symlink of the important syslinux bootloader files in case they are available on the system.
107 """
108 if not utils.command_existing("syslinux"):
109 self.logger.info("syslinux command not available. Bailing out of syslinux setup!")
110 return
111 syslinux_version = get_syslinux_version()
112 # Make modules
113 symlink(
114 self.syslinux_folder.joinpath("menu.c32"),
115 self.bootloaders_dir.joinpath("menu.c32"),
116 skip_existing=True
117 )
118 # According to https://wiki.syslinux.org/wiki/index.php?title=Library_modules,
119 # 'menu.c32' depends on 'libutil.c32'.
120 libutil_c32_path = self.syslinux_folder.joinpath("libutil.c32")
121 if syslinux_version > 4 and libutil_c32_path.exists():
122 symlink(
123 libutil_c32_path,
124 self.bootloaders_dir.joinpath("libutil.c32"),
125 skip_existing=True,
126 )
127 if syslinux_version < 5:
128 # This file is only required for Syslinux 5 and newer.
129 # Source: https://wiki.syslinux.org/wiki/index.php?title=Library_modules
130 self.logger.info('syslinux version 4 detected! Skip making symlink of "ldlinux.c32" file!')
131 else:
132 symlink(
133 self.syslinux_folder.joinpath("ldlinux.c32"),
134 self.bootloaders_dir.joinpath("ldlinux.c32"),
135 skip_existing=True
136 )
137 # Make memdisk
138 symlink(
139 self.syslinux_memdisk_folder.joinpath("memdisk"),
140 self.bootloaders_dir.joinpath("memdisk"),
141 skip_existing=True
142 )
143 # Make pxelinux.0
144 symlink(
145 self.syslinux_pxelinux_folder.joinpath("pxelinux.0"),
146 self.bootloaders_dir.joinpath("pxelinux.0"),
147 skip_existing=True
148 )
149 # Make linux.c32 for syslinux + wimboot
150 libcom32_c32_path = self.syslinux_folder.joinpath("libcom32.c32")
151 if syslinux_version > 4 and libcom32_c32_path.exists():
152 symlink(
153 self.syslinux_pxelinux_folder.joinpath("linux.c32"),
154 self.bootloaders_dir.joinpath("linux.c32"),
155 skip_existing=True,
156 )
157 # Make libcom32.c32
158 # 'linux.c32' depends on 'libcom32.c32'
159 symlink(
160 self.syslinux_pxelinux_folder.joinpath("libcom32.c32"),
161 self.bootloaders_dir.joinpath("libcom32.c32"),
162 skip_existing=True,
163 )
164
165 def make_grub(self):
166 """
167 Create symlink of the GRUB 2 bootloader in case it is available on the system. Additionally build the loaders
168 for other architectures if the modules to do so are available.
169 """
170 if not utils.command_existing("grub2-mkimage"):
171 self.logger.info("grub2-mkimage command not available. Bailing out of GRUB2 generation!")
172 return
173
174 for image_format, options in self.boot_loaders_formats.items():
175 bl_mod_dir = options.get("mod_dir", image_format)
176 mod_dir = self.grub2_mod_dir.joinpath(bl_mod_dir)
177 if not mod_dir.exists():
178 self.logger.info(
179 'GRUB2 modules directory for arch "%s" did no exist. Skipping GRUB2 creation',
180 image_format
181 )
182 continue
183 try:
184 mkimage(
185 image_format,
186 self.bootloaders_dir.joinpath("grub", options["binary_name"]),
187 self.modules + options.get("extra_modules", []),
188 )
189 except subprocess.CalledProcessError:
190 self.logger.info('grub2-mkimage failed for arch "%s"! Maybe you did forget to install the grub modules '
191 'for the architecture?', image_format)
192 utils.log_exc()
193 # don't create module symlinks if grub2-mkimage is unsuccessful
194 continue
195 self.logger.info('Successfully built bootloader for arch "%s"!', image_format)
196
197 # Create a symlink for GRUB 2 modules
198 # assumes a single GRUB can be used to boot all kinds of distros
199 # if this assumption turns out incorrect, individual "grub" subdirectories are needed
200 symlink(
201 mod_dir,
202 self.bootloaders_dir.joinpath("grub", bl_mod_dir),
203 skip_existing=True
204 )
205
206 def create_directories(self):
207 """
208 Create the required directories so that this succeeds. If existing, do nothing. This should create the tree for
209 all supported bootloaders, regardless of the capabilities to symlink/install/build them.
210 """
211 if not self.bootloaders_dir.exists():
212 raise FileNotFoundError("Main bootloader directory not found! Please create it yourself!")
213
214 grub_dir = self.bootloaders_dir.joinpath("grub")
215 if not grub_dir.exists():
216 grub_dir.mkdir(mode=0o644)
217
218
219 # NOTE: move this to cobbler.utils?
220 # cobbler.utils.linkfile does a lot of things, it might be worth it to have a
221 # function just for symbolic links
222 def symlink(target: pathlib.Path, link: pathlib.Path, skip_existing: bool = False):
223 """Create a symlink LINK pointing to TARGET.
224
225 :param target: File/directory that the link will point to. The file/directory must exist.
226 :param link: Filename for the link.
227 :param skip_existing: Controls if existing links are skipped, defaults to False.
228 :raises FileNotFoundError: ``target`` is not an existing file.
229 :raises FileExistsError: ``skip_existing`` is False and ``link`` already exists.
230 """
231
232 if not target.exists():
233 raise FileNotFoundError(
234 f"{target} does not exist, can't create a symlink to it."
235 )
236 try:
237 link.symlink_to(target)
238 except FileExistsError:
239 if not skip_existing:
240 raise
241
242
243 def mkimage(image_format: str, image_filename: pathlib.Path, modules: typing.List):
244 """Create a bootable image of GRUB using grub2-mkimage.
245
246 :param image_format: Format of the image that is being created. See man(1)
247 grub2-mkimage for a list of supported formats.
248 :param image_filename: Location of the image that is being created.
249 :param modules: List of GRUB modules to include into the image
250 :raises subprocess.CalledProcessError: Error raised by ``subprocess.run``.
251 """
252
253 if not image_filename.parent.exists():
254 image_filename.parent.mkdir(parents=True)
255
256 cmd = ["grub2-mkimage"]
257 cmd.extend(("--format", image_format))
258 cmd.extend(("--output", str(image_filename)))
259 cmd.append("--prefix=")
260 cmd.extend(modules)
261
262 # The Exception raised by subprocess already contains everything useful, it's simpler to use that than roll our
263 # own custom exception together with cobbler.utils.subprocess_* functions
264 subprocess.run(cmd, check=True)
265
266
267 def get_syslinux_version() -> int:
268 """
269 This calls syslinux and asks for the version number.
270
271 :return: The major syslinux release number.
272 :raises subprocess.CalledProcessError: Error raised by ``subprocess.run`` in case syslinux does not return zero.
273 """
274 # Example output: "syslinux 4.04 Copyright 1994-2011 H. Peter Anvin et al"
275 cmd = ["syslinux", "-v"]
276 completed_process = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
277 encoding=sys.getdefaultencoding())
278 output = completed_process.stdout.split()
279 return int(float(output[1]))
280
```
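As an aside on `get_syslinux_version()` above: it derives the major release by splitting the `syslinux -v` banner and truncating the float. A tiny standalone sketch of that parsing step, reusing the sample banner quoted in the function's comment (purely illustrative, no Cobbler import required):

```python
# Sample banner copied from the comment inside get_syslinux_version().
sample_output = "syslinux 4.04 Copyright 1994-2011 H. Peter Anvin et al"

# Same parsing as the function: the second whitespace-separated token is "4.04";
# float() keeps it numeric and int() truncates it to the major release number.
major_version = int(float(sample_output.split()[1]))
print(major_version)  # 4
```

This is the value that the `syslinux_version > 4` and `< 5` checks in `make_syslinux()` gate on.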
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cobbler/actions/mkloaders.py b/cobbler/actions/mkloaders.py
--- a/cobbler/actions/mkloaders.py
+++ b/cobbler/actions/mkloaders.py
@@ -150,14 +150,14 @@
libcom32_c32_path = self.syslinux_folder.joinpath("libcom32.c32")
if syslinux_version > 4 and libcom32_c32_path.exists():
symlink(
- self.syslinux_pxelinux_folder.joinpath("linux.c32"),
+ self.syslinux_folder.joinpath("linux.c32"),
self.bootloaders_dir.joinpath("linux.c32"),
skip_existing=True,
)
# Make libcom32.c32
# 'linux.c32' depends on 'libcom32.c32'
symlink(
- self.syslinux_pxelinux_folder.joinpath("libcom32.c32"),
+ self.syslinux_folder.joinpath("libcom32.c32"),
self.bootloaders_dir.joinpath("libcom32.c32"),
skip_existing=True,
)
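For context on the patch: on a typical Debian/Ubuntu install the PXELINUX directory (`/usr/lib/PXELINUX`) usually ships only `pxelinux.0`, while the `.c32` library modules live under the syslinux module directory, so the pre-patch lookup is exactly what triggers the `symlink()` error quoted in the issue title. A minimal standalone sketch of that distinction follows; the two paths below are assumed stock Debian locations used only for illustration, not values taken from the issue, so adjust them to your distro:

```python
from pathlib import Path

# Assumed illustrative defaults for Cobbler's syslinux_dir and syslinux_pxelinux_folder.
syslinux_folder = Path("/usr/lib/syslinux/modules/bios")
pxelinux_folder = Path("/usr/lib/PXELINUX")

for name in ("linux.c32", "libcom32.c32"):
    old_source = pxelinux_folder / name   # pre-patch lookup target, usually absent on Debian
    new_source = syslinux_folder / name   # post-patch lookup target, where the modules are packaged
    print(f"{old_source} exists: {old_source.exists()}")
    print(f"{new_source} exists: {new_source.exists()}")
```

On a host without the syslinux packages both checks simply print `False`; the point is only to contrast which directory the patched `make_syslinux()` consults before calling `symlink()`.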
| {"golden_diff": "diff --git a/cobbler/actions/mkloaders.py b/cobbler/actions/mkloaders.py\n--- a/cobbler/actions/mkloaders.py\n+++ b/cobbler/actions/mkloaders.py\n@@ -150,14 +150,14 @@\n libcom32_c32_path = self.syslinux_folder.joinpath(\"libcom32.c32\")\n if syslinux_version > 4 and libcom32_c32_path.exists():\n symlink(\n- self.syslinux_pxelinux_folder.joinpath(\"linux.c32\"),\n+ self.syslinux_folder.joinpath(\"linux.c32\"),\n self.bootloaders_dir.joinpath(\"linux.c32\"),\n skip_existing=True,\n )\n # Make libcom32.c32\n # 'linux.c32' depends on 'libcom32.c32'\n symlink(\n- self.syslinux_pxelinux_folder.joinpath(\"libcom32.c32\"),\n+ self.syslinux_folder.joinpath(\"libcom32.c32\"),\n self.bootloaders_dir.joinpath(\"libcom32.c32\"),\n skip_existing=True,\n )\n", "issue": "[Backport] /usr/lib/PXELINUX/linux.c32 does not exist, can't create a symlink to it\n### Original feature issue\r\n\r\n- Issue: #3574\r\n- PR: #3576 \r\n\r\n### Target release\r\n\r\n- [x] release33\r\n- [ ] release32\r\n- [ ] release30\r\n\r\n### Reason\r\n\r\nStabilization for Debian of Cobbler 3.3.x.\r\n\n", "before_files": [{"content": "\"\"\"Cobbler action to create bootable Grub2 images.\n\nThis action calls grub2-mkimage for all bootloader formats configured in\nCobbler's settings. See man(1) grub2-mkimage for available formats.\n\"\"\"\nimport logging\nimport pathlib\nimport re\nimport subprocess\nimport sys\nimport typing\n\nfrom cobbler import utils\n\n\n# NOTE: does not warrant being a class, but all Cobbler actions use a class's \".run()\" as the entrypoint\nclass MkLoaders:\n \"\"\"\n Action to create bootloader images.\n \"\"\"\n\n def __init__(self, api):\n \"\"\"\n MkLoaders constructor.\n\n :param api: CobblerAPI instance for accessing settings\n \"\"\"\n self.logger = logging.getLogger()\n self.bootloaders_dir = pathlib.Path(api.settings().bootloaders_dir)\n # GRUB 2\n self.grub2_mod_dir = pathlib.Path(api.settings().grub2_mod_dir)\n self.boot_loaders_formats: typing.Dict = api.settings().bootloaders_formats\n self.modules: typing.List = api.settings().bootloaders_modules\n # Syslinux\n self.syslinux_folder = pathlib.Path(api.settings().syslinux_dir)\n self.syslinux_memdisk_folder = pathlib.Path(api.settings().syslinux_memdisk_folder)\n self.syslinux_pxelinux_folder = pathlib.Path(api.settings().syslinux_pxelinux_folder)\n # Shim\n self.shim_glob = pathlib.Path(api.settings().bootloaders_shim_folder)\n self.shim_regex = re.compile(api.settings().bootloaders_shim_file)\n # iPXE\n self.ipxe_folder = pathlib.Path(api.settings().bootloaders_ipxe_folder)\n\n def run(self):\n \"\"\"\n Run GrubImages action. If the files or executables for the bootloader is not available we bail out and skip the\n creation after it is logged that this is not available.\n \"\"\"\n self.create_directories()\n\n self.make_shim()\n self.make_ipxe()\n self.make_syslinux()\n self.make_grub()\n\n def make_shim(self):\n \"\"\"\n Create symlink of the shim bootloader in case it is available on the system.\n \"\"\"\n # Check well-known locations\n # Absolute paths are not supported BUT we can get around that: https://stackoverflow.com/a/51108375/4730773\n parts = self.shim_glob.parts\n start_at = 1 if self.shim_glob.is_absolute() else 0\n bootloader_path_parts = pathlib.Path(*parts[start_at:])\n results = sorted(pathlib.Path(self.shim_glob.root).glob(str(bootloader_path_parts)))\n # If no match, then report and bail out.\n if len(results) <= 0:\n self.logger.info('Unable to find the folder which should be scanned for \"shim.efi\"! 
Bailing out of linking '\n 'the shim!')\n return\n # Now scan the folders with the regex\n target_shim = None\n for possible_folder in results:\n for child in possible_folder.iterdir():\n if self.shim_regex.search(str(child)):\n target_shim = child.resolve()\n break\n # If no match is found report and return\n if target_shim is None:\n self.logger.info('Unable to find \"shim.efi\" file. Please adjust \"bootloaders_shim_file\" regex. Bailing out '\n 'of linking the shim!')\n return\n # Symlink the absolute target of the match\n symlink(\n target_shim,\n self.bootloaders_dir.joinpath(pathlib.Path(\"grub/shim.efi\")),\n skip_existing=True\n )\n\n def make_ipxe(self):\n \"\"\"\n Create symlink of the iPXE bootloader in case it is available on the system.\n \"\"\"\n if not self.ipxe_folder.exists():\n self.logger.info('ipxe directory did not exist. Please adjust the \"bootloaders_ipxe_folder\". Bailing out '\n 'of iPXE setup!')\n return\n symlink(\n self.ipxe_folder.joinpath(\"undionly.kpxe\"),\n self.bootloaders_dir.joinpath(pathlib.Path(\"undionly.pxe\")),\n skip_existing=True\n )\n\n def make_syslinux(self):\n \"\"\"\n Create symlink of the important syslinux bootloader files in case they are available on the system.\n \"\"\"\n if not utils.command_existing(\"syslinux\"):\n self.logger.info(\"syslinux command not available. Bailing out of syslinux setup!\")\n return\n syslinux_version = get_syslinux_version()\n # Make modules\n symlink(\n self.syslinux_folder.joinpath(\"menu.c32\"),\n self.bootloaders_dir.joinpath(\"menu.c32\"),\n skip_existing=True\n )\n # According to https://wiki.syslinux.org/wiki/index.php?title=Library_modules,\n # 'menu.c32' depends on 'libutil.c32'.\n libutil_c32_path = self.syslinux_folder.joinpath(\"libutil.c32\")\n if syslinux_version > 4 and libutil_c32_path.exists():\n symlink(\n libutil_c32_path,\n self.bootloaders_dir.joinpath(\"libutil.c32\"),\n skip_existing=True,\n )\n if syslinux_version < 5:\n # This file is only required for Syslinux 5 and newer.\n # Source: https://wiki.syslinux.org/wiki/index.php?title=Library_modules\n self.logger.info('syslinux version 4 detected! Skip making symlink of \"ldlinux.c32\" file!')\n else:\n symlink(\n self.syslinux_folder.joinpath(\"ldlinux.c32\"),\n self.bootloaders_dir.joinpath(\"ldlinux.c32\"),\n skip_existing=True\n )\n # Make memdisk\n symlink(\n self.syslinux_memdisk_folder.joinpath(\"memdisk\"),\n self.bootloaders_dir.joinpath(\"memdisk\"),\n skip_existing=True\n )\n # Make pxelinux.0\n symlink(\n self.syslinux_pxelinux_folder.joinpath(\"pxelinux.0\"),\n self.bootloaders_dir.joinpath(\"pxelinux.0\"),\n skip_existing=True\n )\n # Make linux.c32 for syslinux + wimboot\n libcom32_c32_path = self.syslinux_folder.joinpath(\"libcom32.c32\")\n if syslinux_version > 4 and libcom32_c32_path.exists():\n symlink(\n self.syslinux_pxelinux_folder.joinpath(\"linux.c32\"),\n self.bootloaders_dir.joinpath(\"linux.c32\"),\n skip_existing=True,\n )\n # Make libcom32.c32\n # 'linux.c32' depends on 'libcom32.c32'\n symlink(\n self.syslinux_pxelinux_folder.joinpath(\"libcom32.c32\"),\n self.bootloaders_dir.joinpath(\"libcom32.c32\"),\n skip_existing=True,\n )\n\n def make_grub(self):\n \"\"\"\n Create symlink of the GRUB 2 bootloader in case it is available on the system. Additionally build the loaders\n for other architectures if the modules to do so are available.\n \"\"\"\n if not utils.command_existing(\"grub2-mkimage\"):\n self.logger.info(\"grub2-mkimage command not available. 
Bailing out of GRUB2 generation!\")\n return\n\n for image_format, options in self.boot_loaders_formats.items():\n bl_mod_dir = options.get(\"mod_dir\", image_format)\n mod_dir = self.grub2_mod_dir.joinpath(bl_mod_dir)\n if not mod_dir.exists():\n self.logger.info(\n 'GRUB2 modules directory for arch \"%s\" did no exist. Skipping GRUB2 creation',\n image_format\n )\n continue\n try:\n mkimage(\n image_format,\n self.bootloaders_dir.joinpath(\"grub\", options[\"binary_name\"]),\n self.modules + options.get(\"extra_modules\", []),\n )\n except subprocess.CalledProcessError:\n self.logger.info('grub2-mkimage failed for arch \"%s\"! Maybe you did forget to install the grub modules '\n 'for the architecture?', image_format)\n utils.log_exc()\n # don't create module symlinks if grub2-mkimage is unsuccessful\n continue\n self.logger.info('Successfully built bootloader for arch \"%s\"!', image_format)\n\n # Create a symlink for GRUB 2 modules\n # assumes a single GRUB can be used to boot all kinds of distros\n # if this assumption turns out incorrect, individual \"grub\" subdirectories are needed\n symlink(\n mod_dir,\n self.bootloaders_dir.joinpath(\"grub\", bl_mod_dir),\n skip_existing=True\n )\n\n def create_directories(self):\n \"\"\"\n Create the required directories so that this succeeds. If existing, do nothing. This should create the tree for\n all supported bootloaders, regardless of the capabilities to symlink/install/build them.\n \"\"\"\n if not self.bootloaders_dir.exists():\n raise FileNotFoundError(\"Main bootloader directory not found! Please create it yourself!\")\n\n grub_dir = self.bootloaders_dir.joinpath(\"grub\")\n if not grub_dir.exists():\n grub_dir.mkdir(mode=0o644)\n\n\n# NOTE: move this to cobbler.utils?\n# cobbler.utils.linkfile does a lot of things, it might be worth it to have a\n# function just for symbolic links\ndef symlink(target: pathlib.Path, link: pathlib.Path, skip_existing: bool = False):\n \"\"\"Create a symlink LINK pointing to TARGET.\n\n :param target: File/directory that the link will point to. The file/directory must exist.\n :param link: Filename for the link.\n :param skip_existing: Controls if existing links are skipped, defaults to False.\n :raises FileNotFoundError: ``target`` is not an existing file.\n :raises FileExistsError: ``skip_existing`` is False and ``link`` already exists.\n \"\"\"\n\n if not target.exists():\n raise FileNotFoundError(\n f\"{target} does not exist, can't create a symlink to it.\"\n )\n try:\n link.symlink_to(target)\n except FileExistsError:\n if not skip_existing:\n raise\n\n\ndef mkimage(image_format: str, image_filename: pathlib.Path, modules: typing.List):\n \"\"\"Create a bootable image of GRUB using grub2-mkimage.\n\n :param image_format: Format of the image that is being created. 
See man(1)\n grub2-mkimage for a list of supported formats.\n :param image_filename: Location of the image that is being created.\n :param modules: List of GRUB modules to include into the image\n :raises subprocess.CalledProcessError: Error raised by ``subprocess.run``.\n \"\"\"\n\n if not image_filename.parent.exists():\n image_filename.parent.mkdir(parents=True)\n\n cmd = [\"grub2-mkimage\"]\n cmd.extend((\"--format\", image_format))\n cmd.extend((\"--output\", str(image_filename)))\n cmd.append(\"--prefix=\")\n cmd.extend(modules)\n\n # The Exception raised by subprocess already contains everything useful, it's simpler to use that than roll our\n # own custom exception together with cobbler.utils.subprocess_* functions\n subprocess.run(cmd, check=True)\n\n\ndef get_syslinux_version() -> int:\n \"\"\"\n This calls syslinux and asks for the version number.\n\n :return: The major syslinux release number.\n :raises subprocess.CalledProcessError: Error raised by ``subprocess.run`` in case syslinux does not return zero.\n \"\"\"\n # Example output: \"syslinux 4.04 Copyright 1994-2011 H. Peter Anvin et al\"\n cmd = [\"syslinux\", \"-v\"]\n completed_process = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,\n encoding=sys.getdefaultencoding())\n output = completed_process.stdout.split()\n return int(float(output[1]))\n", "path": "cobbler/actions/mkloaders.py"}], "after_files": [{"content": "\"\"\"Cobbler action to create bootable Grub2 images.\n\nThis action calls grub2-mkimage for all bootloader formats configured in\nCobbler's settings. See man(1) grub2-mkimage for available formats.\n\"\"\"\nimport logging\nimport pathlib\nimport re\nimport subprocess\nimport sys\nimport typing\n\nfrom cobbler import utils\n\n\n# NOTE: does not warrant being a class, but all Cobbler actions use a class's \".run()\" as the entrypoint\nclass MkLoaders:\n \"\"\"\n Action to create bootloader images.\n \"\"\"\n\n def __init__(self, api):\n \"\"\"\n MkLoaders constructor.\n\n :param api: CobblerAPI instance for accessing settings\n \"\"\"\n self.logger = logging.getLogger()\n self.bootloaders_dir = pathlib.Path(api.settings().bootloaders_dir)\n # GRUB 2\n self.grub2_mod_dir = pathlib.Path(api.settings().grub2_mod_dir)\n self.boot_loaders_formats: typing.Dict = api.settings().bootloaders_formats\n self.modules: typing.List = api.settings().bootloaders_modules\n # Syslinux\n self.syslinux_folder = pathlib.Path(api.settings().syslinux_dir)\n self.syslinux_memdisk_folder = pathlib.Path(api.settings().syslinux_memdisk_folder)\n self.syslinux_pxelinux_folder = pathlib.Path(api.settings().syslinux_pxelinux_folder)\n # Shim\n self.shim_glob = pathlib.Path(api.settings().bootloaders_shim_folder)\n self.shim_regex = re.compile(api.settings().bootloaders_shim_file)\n # iPXE\n self.ipxe_folder = pathlib.Path(api.settings().bootloaders_ipxe_folder)\n\n def run(self):\n \"\"\"\n Run GrubImages action. 
If the files or executables for the bootloader is not available we bail out and skip the\n creation after it is logged that this is not available.\n \"\"\"\n self.create_directories()\n\n self.make_shim()\n self.make_ipxe()\n self.make_syslinux()\n self.make_grub()\n\n def make_shim(self):\n \"\"\"\n Create symlink of the shim bootloader in case it is available on the system.\n \"\"\"\n # Check well-known locations\n # Absolute paths are not supported BUT we can get around that: https://stackoverflow.com/a/51108375/4730773\n parts = self.shim_glob.parts\n start_at = 1 if self.shim_glob.is_absolute() else 0\n bootloader_path_parts = pathlib.Path(*parts[start_at:])\n results = sorted(pathlib.Path(self.shim_glob.root).glob(str(bootloader_path_parts)))\n # If no match, then report and bail out.\n if len(results) <= 0:\n self.logger.info('Unable to find the folder which should be scanned for \"shim.efi\"! Bailing out of linking '\n 'the shim!')\n return\n # Now scan the folders with the regex\n target_shim = None\n for possible_folder in results:\n for child in possible_folder.iterdir():\n if self.shim_regex.search(str(child)):\n target_shim = child.resolve()\n break\n # If no match is found report and return\n if target_shim is None:\n self.logger.info('Unable to find \"shim.efi\" file. Please adjust \"bootloaders_shim_file\" regex. Bailing out '\n 'of linking the shim!')\n return\n # Symlink the absolute target of the match\n symlink(\n target_shim,\n self.bootloaders_dir.joinpath(pathlib.Path(\"grub/shim.efi\")),\n skip_existing=True\n )\n\n def make_ipxe(self):\n \"\"\"\n Create symlink of the iPXE bootloader in case it is available on the system.\n \"\"\"\n if not self.ipxe_folder.exists():\n self.logger.info('ipxe directory did not exist. Please adjust the \"bootloaders_ipxe_folder\". Bailing out '\n 'of iPXE setup!')\n return\n symlink(\n self.ipxe_folder.joinpath(\"undionly.kpxe\"),\n self.bootloaders_dir.joinpath(pathlib.Path(\"undionly.pxe\")),\n skip_existing=True\n )\n\n def make_syslinux(self):\n \"\"\"\n Create symlink of the important syslinux bootloader files in case they are available on the system.\n \"\"\"\n if not utils.command_existing(\"syslinux\"):\n self.logger.info(\"syslinux command not available. Bailing out of syslinux setup!\")\n return\n syslinux_version = get_syslinux_version()\n # Make modules\n symlink(\n self.syslinux_folder.joinpath(\"menu.c32\"),\n self.bootloaders_dir.joinpath(\"menu.c32\"),\n skip_existing=True\n )\n # According to https://wiki.syslinux.org/wiki/index.php?title=Library_modules,\n # 'menu.c32' depends on 'libutil.c32'.\n libutil_c32_path = self.syslinux_folder.joinpath(\"libutil.c32\")\n if syslinux_version > 4 and libutil_c32_path.exists():\n symlink(\n libutil_c32_path,\n self.bootloaders_dir.joinpath(\"libutil.c32\"),\n skip_existing=True,\n )\n if syslinux_version < 5:\n # This file is only required for Syslinux 5 and newer.\n # Source: https://wiki.syslinux.org/wiki/index.php?title=Library_modules\n self.logger.info('syslinux version 4 detected! 
Skip making symlink of \"ldlinux.c32\" file!')\n else:\n symlink(\n self.syslinux_folder.joinpath(\"ldlinux.c32\"),\n self.bootloaders_dir.joinpath(\"ldlinux.c32\"),\n skip_existing=True\n )\n # Make memdisk\n symlink(\n self.syslinux_memdisk_folder.joinpath(\"memdisk\"),\n self.bootloaders_dir.joinpath(\"memdisk\"),\n skip_existing=True\n )\n # Make pxelinux.0\n symlink(\n self.syslinux_pxelinux_folder.joinpath(\"pxelinux.0\"),\n self.bootloaders_dir.joinpath(\"pxelinux.0\"),\n skip_existing=True\n )\n # Make linux.c32 for syslinux + wimboot\n libcom32_c32_path = self.syslinux_folder.joinpath(\"libcom32.c32\")\n if syslinux_version > 4 and libcom32_c32_path.exists():\n symlink(\n self.syslinux_folder.joinpath(\"linux.c32\"),\n self.bootloaders_dir.joinpath(\"linux.c32\"),\n skip_existing=True,\n )\n # Make libcom32.c32\n # 'linux.c32' depends on 'libcom32.c32'\n symlink(\n self.syslinux_folder.joinpath(\"libcom32.c32\"),\n self.bootloaders_dir.joinpath(\"libcom32.c32\"),\n skip_existing=True,\n )\n\n def make_grub(self):\n \"\"\"\n Create symlink of the GRUB 2 bootloader in case it is available on the system. Additionally build the loaders\n for other architectures if the modules to do so are available.\n \"\"\"\n if not utils.command_existing(\"grub2-mkimage\"):\n self.logger.info(\"grub2-mkimage command not available. Bailing out of GRUB2 generation!\")\n return\n\n for image_format, options in self.boot_loaders_formats.items():\n bl_mod_dir = options.get(\"mod_dir\", image_format)\n mod_dir = self.grub2_mod_dir.joinpath(bl_mod_dir)\n if not mod_dir.exists():\n self.logger.info(\n 'GRUB2 modules directory for arch \"%s\" did no exist. Skipping GRUB2 creation',\n image_format\n )\n continue\n try:\n mkimage(\n image_format,\n self.bootloaders_dir.joinpath(\"grub\", options[\"binary_name\"]),\n self.modules + options.get(\"extra_modules\", []),\n )\n except subprocess.CalledProcessError:\n self.logger.info('grub2-mkimage failed for arch \"%s\"! Maybe you did forget to install the grub modules '\n 'for the architecture?', image_format)\n utils.log_exc()\n # don't create module symlinks if grub2-mkimage is unsuccessful\n continue\n self.logger.info('Successfully built bootloader for arch \"%s\"!', image_format)\n\n # Create a symlink for GRUB 2 modules\n # assumes a single GRUB can be used to boot all kinds of distros\n # if this assumption turns out incorrect, individual \"grub\" subdirectories are needed\n symlink(\n mod_dir,\n self.bootloaders_dir.joinpath(\"grub\", bl_mod_dir),\n skip_existing=True\n )\n\n def create_directories(self):\n \"\"\"\n Create the required directories so that this succeeds. If existing, do nothing. This should create the tree for\n all supported bootloaders, regardless of the capabilities to symlink/install/build them.\n \"\"\"\n if not self.bootloaders_dir.exists():\n raise FileNotFoundError(\"Main bootloader directory not found! Please create it yourself!\")\n\n grub_dir = self.bootloaders_dir.joinpath(\"grub\")\n if not grub_dir.exists():\n grub_dir.mkdir(mode=0o644)\n\n\n# NOTE: move this to cobbler.utils?\n# cobbler.utils.linkfile does a lot of things, it might be worth it to have a\n# function just for symbolic links\ndef symlink(target: pathlib.Path, link: pathlib.Path, skip_existing: bool = False):\n \"\"\"Create a symlink LINK pointing to TARGET.\n\n :param target: File/directory that the link will point to. 
The file/directory must exist.\n :param link: Filename for the link.\n :param skip_existing: Controls if existing links are skipped, defaults to False.\n :raises FileNotFoundError: ``target`` is not an existing file.\n :raises FileExistsError: ``skip_existing`` is False and ``link`` already exists.\n \"\"\"\n\n if not target.exists():\n raise FileNotFoundError(\n f\"{target} does not exist, can't create a symlink to it.\"\n )\n try:\n link.symlink_to(target)\n except FileExistsError:\n if not skip_existing:\n raise\n\n\ndef mkimage(image_format: str, image_filename: pathlib.Path, modules: typing.List):\n \"\"\"Create a bootable image of GRUB using grub2-mkimage.\n\n :param image_format: Format of the image that is being created. See man(1)\n grub2-mkimage for a list of supported formats.\n :param image_filename: Location of the image that is being created.\n :param modules: List of GRUB modules to include into the image\n :raises subprocess.CalledProcessError: Error raised by ``subprocess.run``.\n \"\"\"\n\n if not image_filename.parent.exists():\n image_filename.parent.mkdir(parents=True)\n\n cmd = [\"grub2-mkimage\"]\n cmd.extend((\"--format\", image_format))\n cmd.extend((\"--output\", str(image_filename)))\n cmd.append(\"--prefix=\")\n cmd.extend(modules)\n\n # The Exception raised by subprocess already contains everything useful, it's simpler to use that than roll our\n # own custom exception together with cobbler.utils.subprocess_* functions\n subprocess.run(cmd, check=True)\n\n\ndef get_syslinux_version() -> int:\n \"\"\"\n This calls syslinux and asks for the version number.\n\n :return: The major syslinux release number.\n :raises subprocess.CalledProcessError: Error raised by ``subprocess.run`` in case syslinux does not return zero.\n \"\"\"\n # Example output: \"syslinux 4.04 Copyright 1994-2011 H. Peter Anvin et al\"\n cmd = [\"syslinux\", \"-v\"]\n completed_process = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,\n encoding=sys.getdefaultencoding())\n output = completed_process.stdout.split()\n return int(float(output[1]))\n", "path": "cobbler/actions/mkloaders.py"}]} | 3,693 | 257 |
gh_patches_debug_6924 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-55 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Other] Allow access to webhooks for readthedocs
I recently set up the [readthedocs site](https://paperless-ngx.readthedocs.io/en/latest/). Unfortunately, I don't have access to the webhooks settings of the project.
I requested access to it from the project owner. Once that access is granted, the docs will update automatically.
Also we should change some items in https://github.com/paperless-ngx/paperless-ngx/blob/master/docs/conf.py
(namely `project = u'Paperless-ng'` and `copyright = u'2021, Daniel Quinn, Jonas Winkler'`)
Should we just add `paperless-ngx team` to it?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 import sphinx_rtd_theme
2
3
4 __version__ = None
5 exec(open("../src/paperless/version.py").read())
6
7
8 extensions = [
9 'sphinx.ext.autodoc',
10 'sphinx.ext.intersphinx',
11 'sphinx.ext.todo',
12 'sphinx.ext.imgmath',
13 'sphinx.ext.viewcode',
14 'sphinx_rtd_theme',
15 ]
16
17 # Add any paths that contain templates here, relative to this directory.
18 # templates_path = ['_templates']
19
20 # The suffix of source filenames.
21 source_suffix = '.rst'
22
23 # The encoding of source files.
24 #source_encoding = 'utf-8-sig'
25
26 # The master toctree document.
27 master_doc = 'index'
28
29 # General information about the project.
30 project = u'Paperless-ng'
31 copyright = u'2021, Daniel Quinn, Jonas Winkler'
32
33 # The version info for the project you're documenting, acts as replacement for
34 # |version| and |release|, also used in various other places throughout the
35 # built documents.
36 #
37
38 #
39 # If the build process ever explodes here, it's because you've set the version
40 # number in paperless.version to a tuple with 3 numbers in it.
41 #
42
43 # The short X.Y version.
44 version = ".".join([str(_) for _ in __version__[:2]])
45 # The full version, including alpha/beta/rc tags.
46 release = ".".join([str(_) for _ in __version__[:3]])
47
48 # The language for content autogenerated by Sphinx. Refer to documentation
49 # for a list of supported languages.
50 #language = None
51
52 # There are two options for replacing |today|: either, you set today to some
53 # non-false value, then it is used:
54 #today = ''
55 # Else, today_fmt is used as the format for a strftime call.
56 #today_fmt = '%B %d, %Y'
57
58 # List of patterns, relative to source directory, that match files and
59 # directories to ignore when looking for source files.
60 exclude_patterns = ['_build']
61
62 # The reST default role (used for this markup: `text`) to use for all
63 # documents.
64 #default_role = None
65
66 # If true, '()' will be appended to :func: etc. cross-reference text.
67 #add_function_parentheses = True
68
69 # If true, the current module name will be prepended to all description
70 # unit titles (such as .. function::).
71 #add_module_names = True
72
73 # If true, sectionauthor and moduleauthor directives will be shown in the
74 # output. They are ignored by default.
75 #show_authors = False
76
77 # The name of the Pygments (syntax highlighting) style to use.
78 pygments_style = 'sphinx'
79
80 # A list of ignored prefixes for module index sorting.
81 #modindex_common_prefix = []
82
83 # If true, keep warnings as "system message" paragraphs in the built documents.
84 #keep_warnings = False
85
86
87 # -- Options for HTML output ----------------------------------------------
88
89 # The theme to use for HTML and HTML Help pages. See the documentation for
90 # a list of builtin themes.
91 html_theme = 'sphinx_rtd_theme'
92
93 # Theme options are theme-specific and customize the look and feel of a theme
94 # further. For a list of options available for each theme, see the
95 # documentation.
96 #html_theme_options = {}
97
98 # Add any paths that contain custom themes here, relative to this directory.
99 html_theme_path = []
100
101 # The name for this set of Sphinx documents. If None, it defaults to
102 # "<project> v<release> documentation".
103 #html_title = None
104
105 # A shorter title for the navigation bar. Default is the same as html_title.
106 #html_short_title = None
107
108 # The name of an image file (relative to this directory) to place at the top
109 # of the sidebar.
110 #html_logo = None
111
112 # The name of an image file (within the static path) to use as favicon of the
113 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
114 # pixels large.
115 #html_favicon = None
116
117 # Add any paths that contain custom static files (such as style sheets) here,
118 # relative to this directory. They are copied after the builtin static files,
119 # so a file named "default.css" will overwrite the builtin "default.css".
120 html_static_path = ['_static']
121
122 # Add any extra paths that contain custom files (such as robots.txt or
123 # .htaccess) here, relative to this directory. These files are copied
124 # directly to the root of the documentation.
125 #html_extra_path = []
126
127 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
128 # using the given strftime format.
129 #html_last_updated_fmt = '%b %d, %Y'
130
131 # If true, SmartyPants will be used to convert quotes and dashes to
132 # typographically correct entities.
133 #html_use_smartypants = True
134
135 # Custom sidebar templates, maps document names to template names.
136 #html_sidebars = {}
137
138 # Additional templates that should be rendered to pages, maps page names to
139 # template names.
140 #html_additional_pages = {}
141
142 # If false, no module index is generated.
143 #html_domain_indices = True
144
145 # If false, no index is generated.
146 #html_use_index = True
147
148 # If true, the index is split into individual pages for each letter.
149 #html_split_index = False
150
151 # If true, links to the reST sources are added to the pages.
152 #html_show_sourcelink = True
153
154 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
155 #html_show_sphinx = True
156
157 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
158 #html_show_copyright = True
159
160 # If true, an OpenSearch description file will be output, and all pages will
161 # contain a <link> tag referring to it. The value of this option must be the
162 # base URL from which the finished HTML is served.
163 #html_use_opensearch = ''
164
165 # This is the file name suffix for HTML files (e.g. ".xhtml").
166 #html_file_suffix = None
167
168 # Output file base name for HTML help builder.
169 htmlhelp_basename = 'paperless'
170
171 # -- Options for LaTeX output ---------------------------------------------
172
173 latex_elements = {
174 # The paper size ('letterpaper' or 'a4paper').
175 #'papersize': 'letterpaper',
176
177 # The font size ('10pt', '11pt' or '12pt').
178 #'pointsize': '10pt',
179
180 # Additional stuff for the LaTeX preamble.
181 #'preamble': '',
182 }
183
184 # Grouping the document tree into LaTeX files. List of tuples
185 # (source start file, target name, title,
186 # author, documentclass [howto, manual, or own class]).
187 latex_documents = [
188 ('index', 'paperless.tex', u'Paperless Documentation',
189 u'Daniel Quinn', 'manual'),
190 ]
191
192 # The name of an image file (relative to this directory) to place at the top of
193 # the title page.
194 #latex_logo = None
195
196 # For "manual" documents, if this is true, then toplevel headings are parts,
197 # not chapters.
198 #latex_use_parts = False
199
200 # If true, show page references after internal links.
201 #latex_show_pagerefs = False
202
203 # If true, show URL addresses after external links.
204 #latex_show_urls = False
205
206 # Documents to append as an appendix to all manuals.
207 #latex_appendices = []
208
209 # If false, no module index is generated.
210 #latex_domain_indices = True
211
212
213 # -- Options for manual page output ---------------------------------------
214
215 # One entry per manual page. List of tuples
216 # (source start file, name, description, authors, manual section).
217 man_pages = [
218 ('index', 'paperless', u'Paperless Documentation',
219 [u'Daniel Quinn'], 1)
220 ]
221
222 # If true, show URL addresses after external links.
223 #man_show_urls = False
224
225
226 # -- Options for Texinfo output -------------------------------------------
227
228 # Grouping the document tree into Texinfo files. List of tuples
229 # (source start file, target name, title, author,
230 # dir menu entry, description, category)
231 texinfo_documents = [
232 ('index', 'Paperless', u'Paperless Documentation',
233 u'Daniel Quinn', 'paperless', 'Scan, index, and archive all of your paper documents.',
234 'Miscellaneous'),
235 ]
236
237 # Documents to append as an appendix to all manuals.
238 #texinfo_appendices = []
239
240 # If false, no module index is generated.
241 #texinfo_domain_indices = True
242
243 # How to display URL addresses: 'footnote', 'no', or 'inline'.
244 #texinfo_show_urls = 'footnote'
245
246 # If true, do not generate a @detailmenu in the "Top" node's menu.
247 #texinfo_no_detailmenu = False
248
249
250 # -- Options for Epub output ----------------------------------------------
251
252 # Bibliographic Dublin Core info.
253 epub_title = u'Paperless'
254 epub_author = u'Daniel Quinn'
255 epub_publisher = u'Daniel Quinn'
256 epub_copyright = u'2015, Daniel Quinn'
257
258 # The basename for the epub file. It defaults to the project name.
259 #epub_basename = u'Paperless'
260
261 # The HTML theme for the epub output. Since the default themes are not optimized
262 # for small screen space, using the same theme for HTML and epub output is
263 # usually not wise. This defaults to 'epub', a theme designed to save visual
264 # space.
265 #epub_theme = 'epub'
266
267 # The language of the text. It defaults to the language option
268 # or en if the language is not set.
269 #epub_language = ''
270
271 # The scheme of the identifier. Typical schemes are ISBN or URL.
272 #epub_scheme = ''
273
274 # The unique identifier of the text. This can be a ISBN number
275 # or the project homepage.
276 #epub_identifier = ''
277
278 # A unique identification for the text.
279 #epub_uid = ''
280
281 # A tuple containing the cover image and cover page html template filenames.
282 #epub_cover = ()
283
284 # A sequence of (type, uri, title) tuples for the guide element of content.opf.
285 #epub_guide = ()
286
287 # HTML files that should be inserted before the pages created by sphinx.
288 # The format is a list of tuples containing the path and title.
289 #epub_pre_files = []
290
291 # HTML files shat should be inserted after the pages created by sphinx.
292 # The format is a list of tuples containing the path and title.
293 #epub_post_files = []
294
295 # A list of files that should not be packed into the epub file.
296 epub_exclude_files = ['search.html']
297
298 # The depth of the table of contents in toc.ncx.
299 #epub_tocdepth = 3
300
301 # Allow duplicate toc entries.
302 #epub_tocdup = True
303
304 # Choose between 'default' and 'includehidden'.
305 #epub_tocscope = 'default'
306
307 # Fix unsupported image types using the PIL.
308 #epub_fix_images = False
309
310 # Scale large images.
311 #epub_max_image_width = 0
312
313 # How to display URL addresses: 'footnote', 'no', or 'inline'.
314 #epub_show_urls = 'inline'
315
316 # If false, no index is generated.
317 #epub_use_index = True
318
319
320 # Example configuration for intersphinx: refer to the Python standard library.
321 intersphinx_mapping = {'http://docs.python.org/': None}
322
```
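A side note on the `version`/`release` lines in the conf.py above: they expect `src/paperless/version.py` to define `__version__` as a tuple of numbers and simply join slices of it. A short standalone illustration with an assumed example tuple (the real value is exec'd from `version.py` and is not shown here):

```python
# Assumed example value; the actual tuple comes from src/paperless/version.py.
__version__ = (1, 6, 0)

# Same expressions as in docs/conf.py:
version = ".".join([str(_) for _ in __version__[:2]])   # short X.Y  -> "1.6"
release = ".".join([str(_) for _ in __version__[:3]])   # full X.Y.Z -> "1.6.0"
print(version, release)
```

Neither of these lines is touched by the metadata patch below, which only updates `project` and `copyright`.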
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -27,8 +27,8 @@
master_doc = 'index'
# General information about the project.
-project = u'Paperless-ng'
-copyright = u'2021, Daniel Quinn, Jonas Winkler'
+project = u'Paperless-ngx'
+copyright = u'2015-2022, Daniel Quinn, Jonas Winkler, and the paperless-ngx team'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -27,8 +27,8 @@\n master_doc = 'index'\n \n # General information about the project.\n-project = u'Paperless-ng'\n-copyright = u'2021, Daniel Quinn, Jonas Winkler'\n+project = u'Paperless-ngx'\n+copyright = u'2015-2022, Daniel Quinn, Jonas Winkler, and the paperless-ngx team'\n \n # The version info for the project you're documenting, acts as replacement for\n # |version| and |release|, also used in various other places throughout the\n", "issue": "[Other] Allow access to webhooks for readthedocs\nI recently set up the [readthedocs site](https://paperless-ngx.readthedocs.io/en/latest/). Unfortunately, I don't have access to the webhooks settings of the project. \r\nI requested access to it from the project owner. When granted, the docs will automatically update.\r\nAlso we should change some items in https://github.com/paperless-ngx/paperless-ngx/blob/master/docs/conf.py \r\n(namely `project = u'Paperless-ng'` and `copyright = u'2021, Daniel Quinn, Jonas Winkler'`)\r\n\r\nShould we just add `paperless-ngx team` to it?\n", "before_files": [{"content": "import sphinx_rtd_theme\n\n\n__version__ = None\nexec(open(\"../src/paperless/version.py\").read())\n\n\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.imgmath',\n 'sphinx.ext.viewcode',\n 'sphinx_rtd_theme',\n]\n\n# Add any paths that contain templates here, relative to this directory.\n# templates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Paperless-ng'\ncopyright = u'2021, Daniel Quinn, Jonas Winkler'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n\n#\n# If the build process ever explodes here, it's because you've set the version\n# number in paperless.version to a tuple with 3 numbers in it.\n#\n\n# The short X.Y version.\nversion = \".\".join([str(_) for _ in __version__[:2]])\n# The full version, including alpha/beta/rc tags.\nrelease = \".\".join([str(_) for _ in __version__[:3]])\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. 
They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. 
\".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'paperless'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'paperless.tex', u'Paperless Documentation',\n u'Daniel Quinn', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'paperless', u'Paperless Documentation',\n [u'Daniel Quinn'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Paperless', u'Paperless Documentation',\n u'Daniel Quinn', 'paperless', 'Scan, index, and archive all of your paper documents.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# -- Options for Epub output ----------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = u'Paperless'\nepub_author = u'Daniel Quinn'\nepub_publisher = u'Daniel Quinn'\nepub_copyright = u'2015, Daniel Quinn'\n\n# The basename for the epub file. It defaults to the project name.\n#epub_basename = u'Paperless'\n\n# The HTML theme for the epub output. Since the default themes are not optimized\n# for small screen space, using the same theme for HTML and epub output is\n# usually not wise. This defaults to 'epub', a theme designed to save visual\n# space.\n#epub_theme = 'epub'\n\n# The language of the text. It defaults to the language option\n# or en if the language is not set.\n#epub_language = ''\n\n# The scheme of the identifier. Typical schemes are ISBN or URL.\n#epub_scheme = ''\n\n# The unique identifier of the text. 
This can be a ISBN number\n# or the project homepage.\n#epub_identifier = ''\n\n# A unique identification for the text.\n#epub_uid = ''\n\n# A tuple containing the cover image and cover page html template filenames.\n#epub_cover = ()\n\n# A sequence of (type, uri, title) tuples for the guide element of content.opf.\n#epub_guide = ()\n\n# HTML files that should be inserted before the pages created by sphinx.\n# The format is a list of tuples containing the path and title.\n#epub_pre_files = []\n\n# HTML files shat should be inserted after the pages created by sphinx.\n# The format is a list of tuples containing the path and title.\n#epub_post_files = []\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\n# The depth of the table of contents in toc.ncx.\n#epub_tocdepth = 3\n\n# Allow duplicate toc entries.\n#epub_tocdup = True\n\n# Choose between 'default' and 'includehidden'.\n#epub_tocscope = 'default'\n\n# Fix unsupported image types using the PIL.\n#epub_fix_images = False\n\n# Scale large images.\n#epub_max_image_width = 0\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#epub_show_urls = 'inline'\n\n# If false, no index is generated.\n#epub_use_index = True\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'http://docs.python.org/': None}\n", "path": "docs/conf.py"}], "after_files": [{"content": "import sphinx_rtd_theme\n\n\n__version__ = None\nexec(open(\"../src/paperless/version.py\").read())\n\n\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.imgmath',\n 'sphinx.ext.viewcode',\n 'sphinx_rtd_theme',\n]\n\n# Add any paths that contain templates here, relative to this directory.\n# templates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Paperless-ngx'\ncopyright = u'2015-2022, Daniel Quinn, Jonas Winkler, and the paperless-ngx team'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n\n#\n# If the build process ever explodes here, it's because you've set the version\n# number in paperless.version to a tuple with 3 numbers in it.\n#\n\n# The short X.Y version.\nversion = \".\".join([str(_) for _ in __version__[:2]])\n# The full version, including alpha/beta/rc tags.\nrelease = \".\".join([str(_) for _ in __version__[:3]])\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. 
function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. 
\".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'paperless'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'paperless.tex', u'Paperless Documentation',\n u'Daniel Quinn', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'paperless', u'Paperless Documentation',\n [u'Daniel Quinn'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Paperless', u'Paperless Documentation',\n u'Daniel Quinn', 'paperless', 'Scan, index, and archive all of your paper documents.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# -- Options for Epub output ----------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = u'Paperless'\nepub_author = u'Daniel Quinn'\nepub_publisher = u'Daniel Quinn'\nepub_copyright = u'2015, Daniel Quinn'\n\n# The basename for the epub file. It defaults to the project name.\n#epub_basename = u'Paperless'\n\n# The HTML theme for the epub output. Since the default themes are not optimized\n# for small screen space, using the same theme for HTML and epub output is\n# usually not wise. This defaults to 'epub', a theme designed to save visual\n# space.\n#epub_theme = 'epub'\n\n# The language of the text. It defaults to the language option\n# or en if the language is not set.\n#epub_language = ''\n\n# The scheme of the identifier. Typical schemes are ISBN or URL.\n#epub_scheme = ''\n\n# The unique identifier of the text. 
This can be a ISBN number\n# or the project homepage.\n#epub_identifier = ''\n\n# A unique identification for the text.\n#epub_uid = ''\n\n# A tuple containing the cover image and cover page html template filenames.\n#epub_cover = ()\n\n# A sequence of (type, uri, title) tuples for the guide element of content.opf.\n#epub_guide = ()\n\n# HTML files that should be inserted before the pages created by sphinx.\n# The format is a list of tuples containing the path and title.\n#epub_pre_files = []\n\n# HTML files shat should be inserted after the pages created by sphinx.\n# The format is a list of tuples containing the path and title.\n#epub_post_files = []\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\n# The depth of the table of contents in toc.ncx.\n#epub_tocdepth = 3\n\n# Allow duplicate toc entries.\n#epub_tocdup = True\n\n# Choose between 'default' and 'includehidden'.\n#epub_tocscope = 'default'\n\n# Fix unsupported image types using the PIL.\n#epub_fix_images = False\n\n# Scale large images.\n#epub_max_image_width = 0\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#epub_show_urls = 'inline'\n\n# If false, no index is generated.\n#epub_use_index = True\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'http://docs.python.org/': None}\n", "path": "docs/conf.py"}]} | 3,768 | 152 |
gh_patches_debug_22838 | rasdani/github-patches | git_diff | kartoza__prj.app-485 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Certificate logos need to be rendered larger
Can you make the project and certifying organization logos bigger in the layout please? Maybe 2x vertical and horizontal?
--- END ISSUE ---
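Editorial note (not part of the original record): the logos in question are drawn onto the PDF with ReportLab's `canvas.drawImage`, so "2x vertical and horizontal" amounts to passing doubled `width`/`height` values and shifting the right-hand anchor so the larger organisation logo stays on the page — which is exactly what the repository patch at the end of this record does inside `certificate_pdf_view`. The snippet below is a minimal, self-contained sketch of that layout change only; the PNG paths and output filename are hypothetical placeholders, and it assumes ReportLab is installed and the two image files exist.

```python
# Illustrative sketch of the doubled logo layout -- not the project's actual view code.
# "project_logo.png" and "organisation_logo.png" are assumed placeholder files.
from reportlab.lib.pagesizes import A4, landscape
from reportlab.pdfgen import canvas

page = canvas.Canvas("certificate_sketch.pdf", pagesize=landscape(A4))
width, height = A4  # the view unpacks portrait A4 and uses `height` as the landscape page's horizontal extent

logo_size = 100                       # was 50x50; the request is 2x in both directions
margin_right = height - 50            # right-hand margin on the landscape page
max_left = margin_right - logo_size   # shift the right logo further left to absorb the extra width

# Project logo, top-left; lowered from y=500 to y=450 so the taller image still fits.
page.drawImage("project_logo.png", 50, 450, width=logo_size, height=logo_size,
               preserveAspectRatio=True, mask="auto")
# Certifying organisation logo, top-right.
page.drawImage("organisation_logo.png", max_left, 450, width=logo_size, height=logo_size,
               preserveAspectRatio=True, anchor="c", mask="auto")

page.showPage()
page.save()
```

The design point is simply that enlarging the images without also widening the right-hand offset (`max_left`) would push the organisation logo off the page edge, which is why the patch adjusts both together.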
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django_project/certification/views/certificate.py`
Content:
```
1 # coding=utf-8
2 from django.http import Http404, HttpResponse
3 from django.views.generic import CreateView, DetailView
4 from django.core.urlresolvers import reverse
5 from braces.views import LoginRequiredMixin
6 from reportlab.pdfgen import canvas
7 from reportlab.lib.pagesizes import A4, landscape
8 from reportlab.lib.utils import ImageReader
9 from ..models import Certificate, Course, Attendee
10 from ..forms import CertificateForm
11 from base.models.project import Project
12
13
14 class CertificateMixin(object):
15 """Mixin class to provide standard settings for Certificate."""
16
17 model = Certificate
18 form_class = CertificateForm
19
20
21 class CertificateCreateView(
22 LoginRequiredMixin, CertificateMixin, CreateView):
23 """Create view for Certificate."""
24
25 context_object_name = 'certificate'
26 template_name = 'certificate/create.html'
27
28 def get_success_url(self):
29 """Define the redirect URL.
30
31 After successful creation of the object, the User will be redirected
32 to the Course detail page.
33
34 :returns: URL
35 :rtype: HttpResponse
36 """
37
38 return reverse('course-detail', kwargs={
39 'project_slug': self.project_slug,
40 'organisation_slug': self.organisation_slug,
41 'slug': self.course_slug
42 })
43
44 def get_context_data(self, **kwargs):
45 """Get the context data which is passed to a template.
46
47 :param kwargs: Any arguments to pass to the superclass.
48 :type kwargs: dict
49
50 :returns: Context data which will be passed to the template.
51 :rtype: dict
52 """
53
54 context = super(
55 CertificateCreateView, self).get_context_data(**kwargs)
56 context['course'] = Course.objects.get(slug=self.course_slug)
57 context['attendee'] = Attendee.objects.get(pk=self.pk)
58 return context
59
60 def get_form_kwargs(self):
61 """Get keyword arguments from form.
62
63 :returns keyword argument from the form
64 :rtype: dict
65 """
66
67 kwargs = super(CertificateCreateView, self).get_form_kwargs()
68 self.project_slug = self.kwargs.get('project_slug', None)
69 self.organisation_slug = self.kwargs.get('organisation_slug', None)
70 self.course_slug = self.kwargs.get('course_slug', None)
71 self.pk = self.kwargs.get('pk', None)
72 self.course = Course.objects.get(slug=self.course_slug)
73 self.attendee = Attendee.objects.get(pk=self.pk)
74 kwargs.update({
75 'user': self.request.user,
76 'course': self.course,
77 'attendee': self.attendee,
78 })
79 return kwargs
80
81
82 class CertificateDetailView(DetailView):
83 """Detail view for Certificate."""
84
85 model = Certificate
86 context_object_name = 'certificate'
87 template_name = 'certificate/detail.html'
88
89 def get_context_data(self, **kwargs):
90 """Get the context data which is passed to a template.
91
92 :param kwargs: Any arguments to pass to the superclass.
93 :type kwargs: dict
94
95 :returns: Context data which will be passed to the template.
96 :rtype: dict
97 """
98
99 self.certificateID = self.kwargs.get('id', None)
100 self.project_slug = self.kwargs.get('project_slug', None)
101 context = super(
102 CertificateDetailView, self).get_context_data(**kwargs)
103 issued_id = \
104 Certificate.objects.all().values_list('certificateID', flat=True)
105 if self.certificateID in issued_id:
106 context['certificate'] = \
107 Certificate.objects.get(certificateID=self.certificateID)
108 context['project_slug'] = self.project_slug
109 return context
110
111 def get_queryset(self):
112 """Get the queryset for this view.
113
114 :returns: Queryset which is all certificate in the
115 corresponding organisation.
116 :rtype: QuerySet
117 """
118
119 qs = Certificate.objects.all()
120 return qs
121
122 def get_object(self, queryset=None):
123 """Get the object for this view.
124
125 :param queryset: A query set
126 :type queryset: QuerySet
127
128 :returns: Queryset which is filtered to only show a certificate
129 depends on the input certificate ID.
130 :rtype: QuerySet
131 :raises: Http404
132 """
133
134 if queryset is None:
135 queryset = self.get_queryset()
136 certificateID = self.kwargs.get('id', None)
137 if certificateID:
138 try:
139 obj = queryset.get(certificateID=certificateID)
140 return obj
141 except Certificate.DoesNotExist:
142 return None
143 else:
144 raise Http404('Sorry! Certificate by this ID is not exist.')
145
146
147 def certificate_pdf_view(request, **kwargs):
148
149 project_slug = kwargs.pop('project_slug')
150 course_slug = kwargs.pop('course_slug')
151 pk = kwargs.pop('pk')
152 project = Project.objects.get(slug=project_slug)
153 course = Course.objects.get(slug=course_slug)
154 attendee = Attendee.objects.get(pk=pk)
155 certificate = Certificate.objects.get(course=course, attendee=attendee)
156 current_site = request.META['HTTP_HOST']
157
158 # Create the HttpResponse object with the appropriate PDF headers.
159 response = HttpResponse(content_type='application/pdf')
160 response['Content-Disposition'] = 'filename="certificate.pdf"'
161
162 # Create the PDF object, using the response object as its "file."
163 page = canvas.Canvas(response, pagesize=landscape(A4))
164 width, height = A4
165 center = height * 0.5
166
167 if project.image_file:
168 project_logo = ImageReader(project.image_file)
169 else:
170 project_logo = None
171
172 if course.certifying_organisation.logo:
173 organisation_logo = ImageReader(course.certifying_organisation.logo)
174 else:
175 organisation_logo = None
176
177 if project.signature:
178 project_owner_signature = ImageReader(project.signature)
179 else:
180 project_owner_signature = None
181
182 if course.course_convener.signature:
183 convener_signature = ImageReader(course.course_convener.signature)
184 else:
185 convener_signature = None
186
187 if course.template_certificate:
188 background = ImageReader(course.template_certificate)
189 else:
190 background = None
191
192 # Certificate margin.
193 margin_right = height - 50
194 margin_left = 50
195 margin_bottom = 50
196 max_left = margin_right - 50
197
198 # Draw things on the PDF. Here's where the PDF generation happens.
199 # See the ReportLab documentation for the full list of functionality.
200 if background is not None:
201 page.drawImage(
202 background, 0, 0, height=width, width=height,
203 preserveAspectRatio=True, mask='auto')
204 page.setFillColorRGB(0.1, 0.1, 0.1)
205 page.setFont('Times-Roman', 18)
206 # page.drawString(margin_left, 480, project.name)
207 # page.drawRightString(
208 # (margin_right), 480, course.certifying_organisation.name)
209
210 if project_logo is not None:
211 page.drawImage(
212 project_logo, 50, 500, width=50, height=50,
213 preserveAspectRatio=True, mask='auto')
214
215 if organisation_logo is not None:
216 page.drawImage(
217 organisation_logo, max_left, 500, height=50, width=50,
218 preserveAspectRatio=True, anchor='c', mask='auto')
219
220 page.setFont('Times-Bold', 26)
221 page.drawCentredString(center, 480, 'Certificate of Completion')
222 page.drawCentredString(
223 center, 400, '%s %s' % (attendee.firstname, attendee.surname))
224 page.setFont('Times-Roman', 16)
225 page.drawCentredString(
226 center, 360, 'Has attended and completed the course:')
227 page.setFont('Times-Bold', 20)
228 page.drawCentredString(center, 300, course.course_type.name)
229 page.setFont('Times-Roman', 16)
230 page.drawCentredString(
231 center, 270,
232 'From %s %s %s to %s %s %s'
233 % (course.start_date.day, course.start_date.strftime('%B'),
234 course.start_date.year, course.end_date.day,
235 course.end_date.strftime('%B'), course.end_date.year))
236 page.setFillColorRGB(0.1, 0.1, 0.1)
237 page.drawCentredString(
238 center, 220, 'Convened by %s %s at %s' % (
239 course.course_convener.user.first_name,
240 course.course_convener.user.last_name, course.training_center))
241
242 if project_owner_signature is not None:
243 page.drawImage(
244 project_owner_signature,
245 (margin_left + 100), (margin_bottom + 70), width=100, height=70,
246 preserveAspectRatio=True, anchor='s', mask='auto')
247
248 if convener_signature is not None:
249 page.drawImage(
250 convener_signature, (margin_right - 200), (margin_bottom + 70),
251 width=100, height=70, preserveAspectRatio=True, anchor='s',
252 mask='auto')
253
254 page.setFont('Times-Italic', 12)
255 page.drawCentredString(
256 (margin_left + 150), (margin_bottom + 60),
257 '%s %s' % (project.owner.first_name, project.owner.last_name))
258 page.drawCentredString(
259 (margin_right - 150), (margin_bottom + 60),
260 '%s %s' % (
261 course.course_convener.user.first_name,
262 course.course_convener.user.last_name))
263 page.line(
264 (margin_left + 70), (margin_bottom + 55),
265 (margin_left + 230), (margin_bottom + 55))
266 page.line(
267 (margin_right - 70), (margin_bottom + 55),
268 (margin_right - 230), (margin_bottom + 55))
269 page.setFont('Times-Roman', 13)
270 page.drawCentredString(
271 (margin_left + 150), (margin_bottom + 40), 'Project Owner')
272 page.drawCentredString(
273 (margin_right - 150), (margin_bottom + 40), 'Convener')
274
275 # Footnotes.
276 page.setFont('Times-Roman', 14)
277 page.drawString(
278 margin_left, margin_bottom - 10, 'ID: %s' % certificate.certificateID)
279 page.setFont('Times-Roman', 8)
280 page.drawString(
281 margin_left, (margin_bottom - 20),
282 'You can verify this certificate by visiting '
283 'http://%s/en/%s/certificate/%s/.'
284 % (current_site, project.slug, certificate.certificateID))
285
286 # Close the PDF object cleanly.
287 page.showPage()
288 page.save()
289 return response
290
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django_project/certification/views/certificate.py b/django_project/certification/views/certificate.py
--- a/django_project/certification/views/certificate.py
+++ b/django_project/certification/views/certificate.py
@@ -193,7 +193,7 @@
margin_right = height - 50
margin_left = 50
margin_bottom = 50
- max_left = margin_right - 50
+ max_left = margin_right - 100
# Draw things on the PDF. Here's where the PDF generation happens.
# See the ReportLab documentation for the full list of functionality.
@@ -209,12 +209,12 @@
if project_logo is not None:
page.drawImage(
- project_logo, 50, 500, width=50, height=50,
+ project_logo, 50, 450, width=100, height=100,
preserveAspectRatio=True, mask='auto')
if organisation_logo is not None:
page.drawImage(
- organisation_logo, max_left, 500, height=50, width=50,
+ organisation_logo, max_left, 450, height=100, width=100,
preserveAspectRatio=True, anchor='c', mask='auto')
page.setFont('Times-Bold', 26)
| {"golden_diff": "diff --git a/django_project/certification/views/certificate.py b/django_project/certification/views/certificate.py\n--- a/django_project/certification/views/certificate.py\n+++ b/django_project/certification/views/certificate.py\n@@ -193,7 +193,7 @@\n margin_right = height - 50\n margin_left = 50\n margin_bottom = 50\n- max_left = margin_right - 50\n+ max_left = margin_right - 100\n \n # Draw things on the PDF. Here's where the PDF generation happens.\n # See the ReportLab documentation for the full list of functionality.\n@@ -209,12 +209,12 @@\n \n if project_logo is not None:\n page.drawImage(\n- project_logo, 50, 500, width=50, height=50,\n+ project_logo, 50, 450, width=100, height=100,\n preserveAspectRatio=True, mask='auto')\n \n if organisation_logo is not None:\n page.drawImage(\n- organisation_logo, max_left, 500, height=50, width=50,\n+ organisation_logo, max_left, 450, height=100, width=100,\n preserveAspectRatio=True, anchor='c', mask='auto')\n \n page.setFont('Times-Bold', 26)\n", "issue": "Certificate logos need to be rendered larger\n<img width=\"1651\" alt=\"screen shot 2017-07-31 at 9 02 22 am\" src=\"https://user-images.githubusercontent.com/178003/28766753-3d9d6c24-75d1-11e7-8222-5c26ac826c7a.png\">\r\n\r\n\r\nCan you make the project and certifying organization logos bigger in the layout please? Maybe 2x vertical and horizontal?\n", "before_files": [{"content": "# coding=utf-8\nfrom django.http import Http404, HttpResponse\nfrom django.views.generic import CreateView, DetailView\nfrom django.core.urlresolvers import reverse\nfrom braces.views import LoginRequiredMixin\nfrom reportlab.pdfgen import canvas\nfrom reportlab.lib.pagesizes import A4, landscape\nfrom reportlab.lib.utils import ImageReader\nfrom ..models import Certificate, Course, Attendee\nfrom ..forms import CertificateForm\nfrom base.models.project import Project\n\n\nclass CertificateMixin(object):\n \"\"\"Mixin class to provide standard settings for Certificate.\"\"\"\n\n model = Certificate\n form_class = CertificateForm\n\n\nclass CertificateCreateView(\n LoginRequiredMixin, CertificateMixin, CreateView):\n \"\"\"Create view for Certificate.\"\"\"\n\n context_object_name = 'certificate'\n template_name = 'certificate/create.html'\n\n def get_success_url(self):\n \"\"\"Define the redirect URL.\n\n After successful creation of the object, the User will be redirected\n to the Course detail page.\n\n :returns: URL\n :rtype: HttpResponse\n \"\"\"\n\n return reverse('course-detail', kwargs={\n 'project_slug': self.project_slug,\n 'organisation_slug': self.organisation_slug,\n 'slug': self.course_slug\n })\n\n def get_context_data(self, **kwargs):\n \"\"\"Get the context data which is passed to a template.\n\n :param kwargs: Any arguments to pass to the superclass.\n :type kwargs: dict\n\n :returns: Context data which will be passed to the template.\n :rtype: dict\n \"\"\"\n\n context = super(\n CertificateCreateView, self).get_context_data(**kwargs)\n context['course'] = Course.objects.get(slug=self.course_slug)\n context['attendee'] = Attendee.objects.get(pk=self.pk)\n return context\n\n def get_form_kwargs(self):\n \"\"\"Get keyword arguments from form.\n\n :returns keyword argument from the form\n :rtype: dict\n \"\"\"\n\n kwargs = super(CertificateCreateView, self).get_form_kwargs()\n self.project_slug = self.kwargs.get('project_slug', None)\n self.organisation_slug = self.kwargs.get('organisation_slug', None)\n self.course_slug = self.kwargs.get('course_slug', None)\n self.pk = 
self.kwargs.get('pk', None)\n self.course = Course.objects.get(slug=self.course_slug)\n self.attendee = Attendee.objects.get(pk=self.pk)\n kwargs.update({\n 'user': self.request.user,\n 'course': self.course,\n 'attendee': self.attendee,\n })\n return kwargs\n\n\nclass CertificateDetailView(DetailView):\n \"\"\"Detail view for Certificate.\"\"\"\n\n model = Certificate\n context_object_name = 'certificate'\n template_name = 'certificate/detail.html'\n\n def get_context_data(self, **kwargs):\n \"\"\"Get the context data which is passed to a template.\n\n :param kwargs: Any arguments to pass to the superclass.\n :type kwargs: dict\n\n :returns: Context data which will be passed to the template.\n :rtype: dict\n \"\"\"\n\n self.certificateID = self.kwargs.get('id', None)\n self.project_slug = self.kwargs.get('project_slug', None)\n context = super(\n CertificateDetailView, self).get_context_data(**kwargs)\n issued_id = \\\n Certificate.objects.all().values_list('certificateID', flat=True)\n if self.certificateID in issued_id:\n context['certificate'] = \\\n Certificate.objects.get(certificateID=self.certificateID)\n context['project_slug'] = self.project_slug\n return context\n\n def get_queryset(self):\n \"\"\"Get the queryset for this view.\n\n :returns: Queryset which is all certificate in the\n corresponding organisation.\n :rtype: QuerySet\n \"\"\"\n\n qs = Certificate.objects.all()\n return qs\n\n def get_object(self, queryset=None):\n \"\"\"Get the object for this view.\n\n :param queryset: A query set\n :type queryset: QuerySet\n\n :returns: Queryset which is filtered to only show a certificate\n depends on the input certificate ID.\n :rtype: QuerySet\n :raises: Http404\n \"\"\"\n\n if queryset is None:\n queryset = self.get_queryset()\n certificateID = self.kwargs.get('id', None)\n if certificateID:\n try:\n obj = queryset.get(certificateID=certificateID)\n return obj\n except Certificate.DoesNotExist:\n return None\n else:\n raise Http404('Sorry! 
Certificate by this ID is not exist.')\n\n\ndef certificate_pdf_view(request, **kwargs):\n\n project_slug = kwargs.pop('project_slug')\n course_slug = kwargs.pop('course_slug')\n pk = kwargs.pop('pk')\n project = Project.objects.get(slug=project_slug)\n course = Course.objects.get(slug=course_slug)\n attendee = Attendee.objects.get(pk=pk)\n certificate = Certificate.objects.get(course=course, attendee=attendee)\n current_site = request.META['HTTP_HOST']\n\n # Create the HttpResponse object with the appropriate PDF headers.\n response = HttpResponse(content_type='application/pdf')\n response['Content-Disposition'] = 'filename=\"certificate.pdf\"'\n\n # Create the PDF object, using the response object as its \"file.\"\n page = canvas.Canvas(response, pagesize=landscape(A4))\n width, height = A4\n center = height * 0.5\n\n if project.image_file:\n project_logo = ImageReader(project.image_file)\n else:\n project_logo = None\n\n if course.certifying_organisation.logo:\n organisation_logo = ImageReader(course.certifying_organisation.logo)\n else:\n organisation_logo = None\n\n if project.signature:\n project_owner_signature = ImageReader(project.signature)\n else:\n project_owner_signature = None\n\n if course.course_convener.signature:\n convener_signature = ImageReader(course.course_convener.signature)\n else:\n convener_signature = None\n\n if course.template_certificate:\n background = ImageReader(course.template_certificate)\n else:\n background = None\n\n # Certificate margin.\n margin_right = height - 50\n margin_left = 50\n margin_bottom = 50\n max_left = margin_right - 50\n\n # Draw things on the PDF. Here's where the PDF generation happens.\n # See the ReportLab documentation for the full list of functionality.\n if background is not None:\n page.drawImage(\n background, 0, 0, height=width, width=height,\n preserveAspectRatio=True, mask='auto')\n page.setFillColorRGB(0.1, 0.1, 0.1)\n page.setFont('Times-Roman', 18)\n # page.drawString(margin_left, 480, project.name)\n # page.drawRightString(\n # (margin_right), 480, course.certifying_organisation.name)\n\n if project_logo is not None:\n page.drawImage(\n project_logo, 50, 500, width=50, height=50,\n preserveAspectRatio=True, mask='auto')\n\n if organisation_logo is not None:\n page.drawImage(\n organisation_logo, max_left, 500, height=50, width=50,\n preserveAspectRatio=True, anchor='c', mask='auto')\n\n page.setFont('Times-Bold', 26)\n page.drawCentredString(center, 480, 'Certificate of Completion')\n page.drawCentredString(\n center, 400, '%s %s' % (attendee.firstname, attendee.surname))\n page.setFont('Times-Roman', 16)\n page.drawCentredString(\n center, 360, 'Has attended and completed the course:')\n page.setFont('Times-Bold', 20)\n page.drawCentredString(center, 300, course.course_type.name)\n page.setFont('Times-Roman', 16)\n page.drawCentredString(\n center, 270,\n 'From %s %s %s to %s %s %s'\n % (course.start_date.day, course.start_date.strftime('%B'),\n course.start_date.year, course.end_date.day,\n course.end_date.strftime('%B'), course.end_date.year))\n page.setFillColorRGB(0.1, 0.1, 0.1)\n page.drawCentredString(\n center, 220, 'Convened by %s %s at %s' % (\n course.course_convener.user.first_name,\n course.course_convener.user.last_name, course.training_center))\n\n if project_owner_signature is not None:\n page.drawImage(\n project_owner_signature,\n (margin_left + 100), (margin_bottom + 70), width=100, height=70,\n preserveAspectRatio=True, anchor='s', mask='auto')\n\n if convener_signature is not None:\n 
page.drawImage(\n convener_signature, (margin_right - 200), (margin_bottom + 70),\n width=100, height=70, preserveAspectRatio=True, anchor='s',\n mask='auto')\n\n page.setFont('Times-Italic', 12)\n page.drawCentredString(\n (margin_left + 150), (margin_bottom + 60),\n '%s %s' % (project.owner.first_name, project.owner.last_name))\n page.drawCentredString(\n (margin_right - 150), (margin_bottom + 60),\n '%s %s' % (\n course.course_convener.user.first_name,\n course.course_convener.user.last_name))\n page.line(\n (margin_left + 70), (margin_bottom + 55),\n (margin_left + 230), (margin_bottom + 55))\n page.line(\n (margin_right - 70), (margin_bottom + 55),\n (margin_right - 230), (margin_bottom + 55))\n page.setFont('Times-Roman', 13)\n page.drawCentredString(\n (margin_left + 150), (margin_bottom + 40), 'Project Owner')\n page.drawCentredString(\n (margin_right - 150), (margin_bottom + 40), 'Convener')\n\n # Footnotes.\n page.setFont('Times-Roman', 14)\n page.drawString(\n margin_left, margin_bottom - 10, 'ID: %s' % certificate.certificateID)\n page.setFont('Times-Roman', 8)\n page.drawString(\n margin_left, (margin_bottom - 20),\n 'You can verify this certificate by visiting '\n 'http://%s/en/%s/certificate/%s/.'\n % (current_site, project.slug, certificate.certificateID))\n\n # Close the PDF object cleanly.\n page.showPage()\n page.save()\n return response\n", "path": "django_project/certification/views/certificate.py"}], "after_files": [{"content": "# coding=utf-8\nfrom django.http import Http404, HttpResponse\nfrom django.views.generic import CreateView, DetailView\nfrom django.core.urlresolvers import reverse\nfrom braces.views import LoginRequiredMixin\nfrom reportlab.pdfgen import canvas\nfrom reportlab.lib.pagesizes import A4, landscape\nfrom reportlab.lib.utils import ImageReader\nfrom ..models import Certificate, Course, Attendee\nfrom ..forms import CertificateForm\nfrom base.models.project import Project\n\n\nclass CertificateMixin(object):\n \"\"\"Mixin class to provide standard settings for Certificate.\"\"\"\n\n model = Certificate\n form_class = CertificateForm\n\n\nclass CertificateCreateView(\n LoginRequiredMixin, CertificateMixin, CreateView):\n \"\"\"Create view for Certificate.\"\"\"\n\n context_object_name = 'certificate'\n template_name = 'certificate/create.html'\n\n def get_success_url(self):\n \"\"\"Define the redirect URL.\n\n After successful creation of the object, the User will be redirected\n to the Course detail page.\n\n :returns: URL\n :rtype: HttpResponse\n \"\"\"\n\n return reverse('course-detail', kwargs={\n 'project_slug': self.project_slug,\n 'organisation_slug': self.organisation_slug,\n 'slug': self.course_slug\n })\n\n def get_context_data(self, **kwargs):\n \"\"\"Get the context data which is passed to a template.\n\n :param kwargs: Any arguments to pass to the superclass.\n :type kwargs: dict\n\n :returns: Context data which will be passed to the template.\n :rtype: dict\n \"\"\"\n\n context = super(\n CertificateCreateView, self).get_context_data(**kwargs)\n context['course'] = Course.objects.get(slug=self.course_slug)\n context['attendee'] = Attendee.objects.get(pk=self.pk)\n return context\n\n def get_form_kwargs(self):\n \"\"\"Get keyword arguments from form.\n\n :returns keyword argument from the form\n :rtype: dict\n \"\"\"\n\n kwargs = super(CertificateCreateView, self).get_form_kwargs()\n self.project_slug = self.kwargs.get('project_slug', None)\n self.organisation_slug = self.kwargs.get('organisation_slug', None)\n self.course_slug = 
self.kwargs.get('course_slug', None)\n self.pk = self.kwargs.get('pk', None)\n self.course = Course.objects.get(slug=self.course_slug)\n self.attendee = Attendee.objects.get(pk=self.pk)\n kwargs.update({\n 'user': self.request.user,\n 'course': self.course,\n 'attendee': self.attendee,\n })\n return kwargs\n\n\nclass CertificateDetailView(DetailView):\n \"\"\"Detail view for Certificate.\"\"\"\n\n model = Certificate\n context_object_name = 'certificate'\n template_name = 'certificate/detail.html'\n\n def get_context_data(self, **kwargs):\n \"\"\"Get the context data which is passed to a template.\n\n :param kwargs: Any arguments to pass to the superclass.\n :type kwargs: dict\n\n :returns: Context data which will be passed to the template.\n :rtype: dict\n \"\"\"\n\n self.certificateID = self.kwargs.get('id', None)\n self.project_slug = self.kwargs.get('project_slug', None)\n context = super(\n CertificateDetailView, self).get_context_data(**kwargs)\n issued_id = \\\n Certificate.objects.all().values_list('certificateID', flat=True)\n if self.certificateID in issued_id:\n context['certificate'] = \\\n Certificate.objects.get(certificateID=self.certificateID)\n context['project_slug'] = self.project_slug\n return context\n\n def get_queryset(self):\n \"\"\"Get the queryset for this view.\n\n :returns: Queryset which is all certificate in the\n corresponding organisation.\n :rtype: QuerySet\n \"\"\"\n\n qs = Certificate.objects.all()\n return qs\n\n def get_object(self, queryset=None):\n \"\"\"Get the object for this view.\n\n :param queryset: A query set\n :type queryset: QuerySet\n\n :returns: Queryset which is filtered to only show a certificate\n depends on the input certificate ID.\n :rtype: QuerySet\n :raises: Http404\n \"\"\"\n\n if queryset is None:\n queryset = self.get_queryset()\n certificateID = self.kwargs.get('id', None)\n if certificateID:\n try:\n obj = queryset.get(certificateID=certificateID)\n return obj\n except Certificate.DoesNotExist:\n return None\n else:\n raise Http404('Sorry! 
Certificate by this ID is not exist.')\n\n\ndef certificate_pdf_view(request, **kwargs):\n\n project_slug = kwargs.pop('project_slug')\n course_slug = kwargs.pop('course_slug')\n pk = kwargs.pop('pk')\n project = Project.objects.get(slug=project_slug)\n course = Course.objects.get(slug=course_slug)\n attendee = Attendee.objects.get(pk=pk)\n certificate = Certificate.objects.get(course=course, attendee=attendee)\n current_site = request.META['HTTP_HOST']\n\n # Create the HttpResponse object with the appropriate PDF headers.\n response = HttpResponse(content_type='application/pdf')\n response['Content-Disposition'] = 'filename=\"certificate.pdf\"'\n\n # Create the PDF object, using the response object as its \"file.\"\n page = canvas.Canvas(response, pagesize=landscape(A4))\n width, height = A4\n center = height * 0.5\n\n if project.image_file:\n project_logo = ImageReader(project.image_file)\n else:\n project_logo = None\n\n if course.certifying_organisation.logo:\n organisation_logo = ImageReader(course.certifying_organisation.logo)\n else:\n organisation_logo = None\n\n if project.signature:\n project_owner_signature = ImageReader(project.signature)\n else:\n project_owner_signature = None\n\n if course.course_convener.signature:\n convener_signature = ImageReader(course.course_convener.signature)\n else:\n convener_signature = None\n\n if course.template_certificate:\n background = ImageReader(course.template_certificate)\n else:\n background = None\n\n # Certificate margin.\n margin_right = height - 50\n margin_left = 50\n margin_bottom = 50\n max_left = margin_right - 100\n\n # Draw things on the PDF. Here's where the PDF generation happens.\n # See the ReportLab documentation for the full list of functionality.\n if background is not None:\n page.drawImage(\n background, 0, 0, height=width, width=height,\n preserveAspectRatio=True, mask='auto')\n page.setFillColorRGB(0.1, 0.1, 0.1)\n page.setFont('Times-Roman', 18)\n # page.drawString(margin_left, 480, project.name)\n # page.drawRightString(\n # (margin_right), 480, course.certifying_organisation.name)\n\n if project_logo is not None:\n page.drawImage(\n project_logo, 50, 450, width=100, height=100,\n preserveAspectRatio=True, mask='auto')\n\n if organisation_logo is not None:\n page.drawImage(\n organisation_logo, max_left, 450, height=100, width=100,\n preserveAspectRatio=True, anchor='c', mask='auto')\n\n page.setFont('Times-Bold', 26)\n page.drawCentredString(center, 480, 'Certificate of Completion')\n page.drawCentredString(\n center, 400, '%s %s' % (attendee.firstname, attendee.surname))\n page.setFont('Times-Roman', 16)\n page.drawCentredString(\n center, 360, 'Has attended and completed the course:')\n page.setFont('Times-Bold', 20)\n page.drawCentredString(center, 300, course.course_type.name)\n page.setFont('Times-Roman', 16)\n page.drawCentredString(\n center, 270,\n 'From %s %s %s to %s %s %s'\n % (course.start_date.day, course.start_date.strftime('%B'),\n course.start_date.year, course.end_date.day,\n course.end_date.strftime('%B'), course.end_date.year))\n page.setFillColorRGB(0.1, 0.1, 0.1)\n page.drawCentredString(\n center, 220, 'Convened by %s %s at %s' % (\n course.course_convener.user.first_name,\n course.course_convener.user.last_name, course.training_center))\n\n if project_owner_signature is not None:\n page.drawImage(\n project_owner_signature,\n (margin_left + 100), (margin_bottom + 70), width=100, height=70,\n preserveAspectRatio=True, anchor='s', mask='auto')\n\n if convener_signature is not None:\n 
page.drawImage(\n convener_signature, (margin_right - 200), (margin_bottom + 70),\n width=100, height=70, preserveAspectRatio=True, anchor='s',\n mask='auto')\n\n page.setFont('Times-Italic', 12)\n page.drawCentredString(\n (margin_left + 150), (margin_bottom + 60),\n '%s %s' % (project.owner.first_name, project.owner.last_name))\n page.drawCentredString(\n (margin_right - 150), (margin_bottom + 60),\n '%s %s' % (\n course.course_convener.user.first_name,\n course.course_convener.user.last_name))\n page.line(\n (margin_left + 70), (margin_bottom + 55),\n (margin_left + 230), (margin_bottom + 55))\n page.line(\n (margin_right - 70), (margin_bottom + 55),\n (margin_right - 230), (margin_bottom + 55))\n page.setFont('Times-Roman', 13)\n page.drawCentredString(\n (margin_left + 150), (margin_bottom + 40), 'Project Owner')\n page.drawCentredString(\n (margin_right - 150), (margin_bottom + 40), 'Convener')\n\n # Footnotes.\n page.setFont('Times-Roman', 14)\n page.drawString(\n margin_left, margin_bottom - 10, 'ID: %s' % certificate.certificateID)\n page.setFont('Times-Roman', 8)\n page.drawString(\n margin_left, (margin_bottom - 20),\n 'You can verify this certificate by visiting '\n 'http://%s/en/%s/certificate/%s/.'\n % (current_site, project.slug, certificate.certificateID))\n\n # Close the PDF object cleanly.\n page.showPage()\n page.save()\n return response\n", "path": "django_project/certification/views/certificate.py"}]} | 3,519 | 321 |
gh_patches_debug_6359 | rasdani/github-patches | git_diff | pantsbuild__pants-15405 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add `update_env` to `process_execution::local`.
Replaces any `{chroot}` placeholders in the requested process env with the sandbox workdir.
This reflects the behavior for interactive processes executed with the `run` goal.
--- END ISSUE ---
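Editorial note (not part of the original record): the requested behavior is plain string substitution over the process environment — before spawning, every literal `{chroot}` token in an env value is replaced with the absolute path of the sandbox working directory, mirroring what the `run` goal already does for interactive processes. The real change belongs in the Rust `process_execution::local` module, which is not among the files shown below; the Python sketch here is only a conceptual illustration of that substitution, and the helper name and example env entries are hypothetical.

```python
# Conceptual sketch only -- the actual implementation lives in Rust (process_execution::local).
from __future__ import annotations

from typing import Mapping


def update_env(env: Mapping[str, str], workdir: str) -> dict[str, str]:
    """Return a copy of `env` with every `{chroot}` placeholder replaced by the sandbox workdir."""
    return {name: value.replace("{chroot}", workdir) for name, value in env.items()}


# Example: an env entry pointing a tool at a file materialized inside the sandbox.
env = {"CONFIG_PATH": "{chroot}/configs/tool.yaml", "PATH": "/usr/bin"}
print(update_env(env, "/tmp/pants-sandbox-abc123"))
# -> {'CONFIG_PATH': '/tmp/pants-sandbox-abc123/configs/tool.yaml', 'PATH': '/usr/bin'}
```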
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/core/goals/publish.py`
Content:
```
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3 """Goal for publishing packaged targets to any repository or registry etc.
4
5 Plugins implement the publish protocol that provides this goal with the processes to run in order to
6 publish the artifacts.
7
8 The publish protocol consists of defining two union members and one rule, returning the processes to
9 run. See the doc for the corresponding classses in this module for details on the classes to define.
10
11 Example rule:
12
13 @rule
14 async def publish_example(request: PublishToMyRepoRequest, ...) -> PublishProcesses:
15 # Create `InteractiveProcess` instances as required by the `request`.
16 return PublishProcesses(...)
17 """
18
19
20 from __future__ import annotations
21
22 import collections
23 import json
24 import logging
25 from abc import ABCMeta
26 from dataclasses import asdict, dataclass, field, is_dataclass, replace
27 from itertools import chain
28 from typing import ClassVar, Generic, Type, TypeVar
29
30 from typing_extensions import final
31
32 from pants.core.goals.package import BuiltPackage, PackageFieldSet
33 from pants.engine.addresses import Address
34 from pants.engine.collection import Collection
35 from pants.engine.console import Console
36 from pants.engine.goal import Goal, GoalSubsystem
37 from pants.engine.process import InteractiveProcess, InteractiveProcessResult
38 from pants.engine.rules import Effect, Get, MultiGet, collect_rules, goal_rule, rule
39 from pants.engine.target import (
40 FieldSet,
41 ImmutableValue,
42 NoApplicableTargetsBehavior,
43 TargetRootsToFieldSets,
44 TargetRootsToFieldSetsRequest,
45 )
46 from pants.engine.unions import UnionMembership, UnionRule, union
47 from pants.option.option_types import StrOption
48 from pants.util.frozendict import FrozenDict
49
50 logger = logging.getLogger(__name__)
51
52
53 _F = TypeVar("_F", bound=FieldSet)
54
55
56 class PublishOutputData(FrozenDict[str, ImmutableValue]):
57 pass
58
59
60 @union
61 @dataclass(frozen=True)
62 class PublishRequest(Generic[_F]):
63 """Implement a union member subclass of this union class along with a PublishFieldSet subclass
64 that appoints that member subclass in order to receive publish requests for targets compatible
65 with the field set.
66
67 The `packages` hold all artifacts produced for a given target to be published.
68
69 Example:
70
71 PublishToMyRepoRequest(PublishRequest):
72 pass
73
74 PublishToMyRepoFieldSet(PublishFieldSet):
75 publish_request_type = PublishToMyRepoRequest
76
77 # Standard FieldSet semantics from here on:
78 required_fields = (MyRepositories,)
79 ...
80 """
81
82 field_set: _F
83 packages: tuple[BuiltPackage, ...]
84
85
86 _T = TypeVar("_T", bound=PublishRequest)
87
88
89 @union
90 @dataclass(frozen=True)
91 class PublishFieldSet(Generic[_T], FieldSet, metaclass=ABCMeta):
92 """FieldSet for PublishRequest.
93
94 Union members may list any fields required to fulfill the instantiation of the
95 `PublishProcesses` result of the publish rule.
96 """
97
98 # Subclasses must provide this, to a union member (subclass) of `PublishRequest`.
99 publish_request_type: ClassVar[Type[_T]]
100
101 @final
102 def _request(self, packages: tuple[BuiltPackage, ...]) -> _T:
103 """Internal helper for the core publish goal."""
104 return self.publish_request_type(field_set=self, packages=packages)
105
106 @final
107 @classmethod
108 def rules(cls) -> tuple[UnionRule, ...]:
109 """Helper method for registering the union members."""
110 return (
111 UnionRule(PublishFieldSet, cls),
112 UnionRule(PublishRequest, cls.publish_request_type),
113 )
114
115 def get_output_data(self) -> PublishOutputData:
116 return PublishOutputData({"target": self.address})
117
118
119 @dataclass(frozen=True)
120 class PublishPackages:
121 """Processes to run in order to publish the named artifacts.
122
123 The `names` should list all artifacts being published by the `process` command.
124
125 The `process` may be `None`, indicating that it will not be published. This will be logged as
126 `skipped`. If the process returns a non zero exit code, it will be logged as `failed`.
127
128 The `description` may be a reason explaining why the publish was skipped, or identifying which
129 repository the artifacts are published to.
130 """
131
132 names: tuple[str, ...]
133 process: InteractiveProcess | None = None
134 description: str | None = None
135 data: PublishOutputData = field(default_factory=PublishOutputData)
136
137 def get_output_data(self, **extra_data) -> PublishOutputData:
138 return PublishOutputData(
139 {
140 "names": self.names,
141 **self.data,
142 **extra_data,
143 }
144 )
145
146
147 class PublishProcesses(Collection[PublishPackages]):
148 """Collection of what processes to run for all built packages.
149
150 This is returned from implementing rules in response to a PublishRequest.
151
152 Depending on the capabilities of the publishing tool, the work may be partitioned based on
153 number of artifacts and/or repositories to publish to.
154 """
155
156
157 @dataclass(frozen=True)
158 class PublishProcessesRequest:
159 """Internal request taking all field sets for a target and turning it into a `PublishProcesses`
160 collection (via registered publish plugins)."""
161
162 package_field_sets: tuple[PackageFieldSet, ...]
163 publish_field_sets: tuple[PublishFieldSet, ...]
164
165
166 class PublishSubsystem(GoalSubsystem):
167 name = "publish"
168 help = "Publish deliverables (assets, distributions, images, etc)."
169
170 @classmethod
171 def activated(cls, union_membership: UnionMembership) -> bool:
172 return PackageFieldSet in union_membership and PublishFieldSet in union_membership
173
174 output = StrOption(
175 "--output",
176 default=None,
177 help="Filename for JSON structured publish information.",
178 )
179
180
181 class Publish(Goal):
182 subsystem_cls = PublishSubsystem
183
184
185 @goal_rule
186 async def run_publish(console: Console, publish: PublishSubsystem) -> Publish:
187 target_roots_to_package_field_sets, target_roots_to_publish_field_sets = await MultiGet(
188 Get(
189 TargetRootsToFieldSets,
190 TargetRootsToFieldSetsRequest(
191 PackageFieldSet,
192 goal_description="",
193 # Don't warn/error here because it's already covered by `PublishFieldSet`.
194 no_applicable_targets_behavior=NoApplicableTargetsBehavior.ignore,
195 ),
196 ),
197 Get(
198 TargetRootsToFieldSets,
199 TargetRootsToFieldSetsRequest(
200 PublishFieldSet,
201 goal_description="the `publish` goal",
202 no_applicable_targets_behavior=NoApplicableTargetsBehavior.warn,
203 ),
204 ),
205 )
206
207 # Only keep field sets that both package something, and have something to publish.
208 targets = set(target_roots_to_package_field_sets.targets).intersection(
209 set(target_roots_to_publish_field_sets.targets)
210 )
211
212 if not targets:
213 return Publish(exit_code=0)
214
215 # Build all packages and request the processes to run for each field set.
216 processes = await MultiGet(
217 Get(
218 PublishProcesses,
219 PublishProcessesRequest(
220 target_roots_to_package_field_sets.mapping[tgt],
221 target_roots_to_publish_field_sets.mapping[tgt],
222 ),
223 )
224 for tgt in targets
225 )
226
227 # Run all processes interactively.
228 exit_code: int = 0
229 outputs: list[PublishOutputData] = []
230 results: list[str] = []
231
232 for pub in chain.from_iterable(processes):
233 if not pub.process:
234 sigil = console.sigil_skipped()
235 status = "skipped"
236 if pub.description:
237 status += f" {pub.description}"
238 for name in pub.names:
239 results.append(f"{sigil} {name} {status}.")
240 outputs.append(pub.get_output_data(published=False, status=status))
241 continue
242
243 res = await Effect(InteractiveProcessResult, InteractiveProcess, pub.process)
244 if res.exit_code == 0:
245 sigil = console.sigil_succeeded()
246 status = "published"
247 prep = "to"
248 else:
249 sigil = console.sigil_failed()
250 status = "failed"
251 prep = "for"
252 exit_code = res.exit_code
253
254 if pub.description:
255 status += f" {prep} {pub.description}"
256
257 for name in pub.names:
258 results.append(f"{sigil} {name} {status}.")
259
260 outputs.append(
261 pub.get_output_data(
262 exit_code=res.exit_code,
263 published=res.exit_code == 0,
264 status=status,
265 )
266 )
267
268 console.print_stderr("")
269 if not results:
270 sigil = console.sigil_skipped()
271 console.print_stderr(f"{sigil} Nothing published.")
272
273 # We collect all results to the end, so all output from the interactive processes are done,
274 # before printing the results.
275 for line in results:
276 console.print_stderr(line)
277
278 # Log structured output
279 output_data = json.dumps(outputs, cls=_PublishJsonEncoder, indent=2, sort_keys=True)
280 logger.debug(f"Publish result data:\n{output_data}")
281 if publish.output:
282 with open(publish.output, mode="w") as fd:
283 fd.write(output_data)
284
285 return Publish(exit_code)
286
287
288 class _PublishJsonEncoder(json.JSONEncoder):
289 safe_to_str_types = (Address,)
290
291 def default(self, o):
292 """Return a serializable object for o."""
293 if is_dataclass(o):
294 return asdict(o)
295 if isinstance(o, collections.abc.Mapping):
296 return dict(o)
297 if isinstance(o, collections.abc.Sequence):
298 return list(o)
299 try:
300 return super().default(o)
301 except TypeError:
302 return str(o)
303
304
305 @rule
306 async def package_for_publish(request: PublishProcessesRequest) -> PublishProcesses:
307 packages = await MultiGet(
308 Get(BuiltPackage, PackageFieldSet, field_set) for field_set in request.package_field_sets
309 )
310
311 for pkg in packages:
312 for artifact in pkg.artifacts:
313 if artifact.relpath:
314 logger.info(f"Packaged {artifact.relpath}")
315 elif artifact.extra_log_lines:
316 logger.info(str(artifact.extra_log_lines[0]))
317
318 publish = await MultiGet(
319 Get(
320 PublishProcesses,
321 PublishRequest,
322 field_set._request(packages),
323 )
324 for field_set in request.publish_field_sets
325 )
326
327 # Flatten and dress each publish processes collection with data about its origin.
328 publish_processes = [
329 replace(
330 publish_process,
331 data=PublishOutputData({**publish_process.data, **field_set.get_output_data()}),
332 )
333 for processes, field_set in zip(publish, request.publish_field_sets)
334 for publish_process in processes
335 ]
336
337 return PublishProcesses(publish_processes)
338
339
340 def rules():
341 return collect_rules()
342
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/python/pants/core/goals/publish.py b/src/python/pants/core/goals/publish.py
--- a/src/python/pants/core/goals/publish.py
+++ b/src/python/pants/core/goals/publish.py
@@ -240,6 +240,7 @@
outputs.append(pub.get_output_data(published=False, status=status))
continue
+ logger.debug(f"Execute {pub.process}")
res = await Effect(InteractiveProcessResult, InteractiveProcess, pub.process)
if res.exit_code == 0:
sigil = console.sigil_succeeded()
| {"golden_diff": "diff --git a/src/python/pants/core/goals/publish.py b/src/python/pants/core/goals/publish.py\n--- a/src/python/pants/core/goals/publish.py\n+++ b/src/python/pants/core/goals/publish.py\n@@ -240,6 +240,7 @@\n outputs.append(pub.get_output_data(published=False, status=status))\n continue\n \n+ logger.debug(f\"Execute {pub.process}\")\n res = await Effect(InteractiveProcessResult, InteractiveProcess, pub.process)\n if res.exit_code == 0:\n sigil = console.sigil_succeeded()\n", "issue": "Add `update_env` to `process_execution::local`.\nReplaces any `{chroot}` placeholders in the requested process env with the sandbox workdir.\r\n\r\nThis reflects the behavior for interactive processes executed with the `run` goal.\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\"\"\"Goal for publishing packaged targets to any repository or registry etc.\n\nPlugins implement the publish protocol that provides this goal with the processes to run in order to\npublish the artifacts.\n\nThe publish protocol consists of defining two union members and one rule, returning the processes to\nrun. See the doc for the corresponding classses in this module for details on the classes to define.\n\nExample rule:\n\n @rule\n async def publish_example(request: PublishToMyRepoRequest, ...) -> PublishProcesses:\n # Create `InteractiveProcess` instances as required by the `request`.\n return PublishProcesses(...)\n\"\"\"\n\n\nfrom __future__ import annotations\n\nimport collections\nimport json\nimport logging\nfrom abc import ABCMeta\nfrom dataclasses import asdict, dataclass, field, is_dataclass, replace\nfrom itertools import chain\nfrom typing import ClassVar, Generic, Type, TypeVar\n\nfrom typing_extensions import final\n\nfrom pants.core.goals.package import BuiltPackage, PackageFieldSet\nfrom pants.engine.addresses import Address\nfrom pants.engine.collection import Collection\nfrom pants.engine.console import Console\nfrom pants.engine.goal import Goal, GoalSubsystem\nfrom pants.engine.process import InteractiveProcess, InteractiveProcessResult\nfrom pants.engine.rules import Effect, Get, MultiGet, collect_rules, goal_rule, rule\nfrom pants.engine.target import (\n FieldSet,\n ImmutableValue,\n NoApplicableTargetsBehavior,\n TargetRootsToFieldSets,\n TargetRootsToFieldSetsRequest,\n)\nfrom pants.engine.unions import UnionMembership, UnionRule, union\nfrom pants.option.option_types import StrOption\nfrom pants.util.frozendict import FrozenDict\n\nlogger = logging.getLogger(__name__)\n\n\n_F = TypeVar(\"_F\", bound=FieldSet)\n\n\nclass PublishOutputData(FrozenDict[str, ImmutableValue]):\n pass\n\n\n@union\n@dataclass(frozen=True)\nclass PublishRequest(Generic[_F]):\n \"\"\"Implement a union member subclass of this union class along with a PublishFieldSet subclass\n that appoints that member subclass in order to receive publish requests for targets compatible\n with the field set.\n\n The `packages` hold all artifacts produced for a given target to be published.\n\n Example:\n\n PublishToMyRepoRequest(PublishRequest):\n pass\n\n PublishToMyRepoFieldSet(PublishFieldSet):\n publish_request_type = PublishToMyRepoRequest\n\n # Standard FieldSet semantics from here on:\n required_fields = (MyRepositories,)\n ...\n \"\"\"\n\n field_set: _F\n packages: tuple[BuiltPackage, ...]\n\n\n_T = TypeVar(\"_T\", bound=PublishRequest)\n\n\n@union\n@dataclass(frozen=True)\nclass PublishFieldSet(Generic[_T], 
FieldSet, metaclass=ABCMeta):\n \"\"\"FieldSet for PublishRequest.\n\n Union members may list any fields required to fulfill the instantiation of the\n `PublishProcesses` result of the publish rule.\n \"\"\"\n\n # Subclasses must provide this, to a union member (subclass) of `PublishRequest`.\n publish_request_type: ClassVar[Type[_T]]\n\n @final\n def _request(self, packages: tuple[BuiltPackage, ...]) -> _T:\n \"\"\"Internal helper for the core publish goal.\"\"\"\n return self.publish_request_type(field_set=self, packages=packages)\n\n @final\n @classmethod\n def rules(cls) -> tuple[UnionRule, ...]:\n \"\"\"Helper method for registering the union members.\"\"\"\n return (\n UnionRule(PublishFieldSet, cls),\n UnionRule(PublishRequest, cls.publish_request_type),\n )\n\n def get_output_data(self) -> PublishOutputData:\n return PublishOutputData({\"target\": self.address})\n\n\n@dataclass(frozen=True)\nclass PublishPackages:\n \"\"\"Processes to run in order to publish the named artifacts.\n\n The `names` should list all artifacts being published by the `process` command.\n\n The `process` may be `None`, indicating that it will not be published. This will be logged as\n `skipped`. If the process returns a non zero exit code, it will be logged as `failed`.\n\n The `description` may be a reason explaining why the publish was skipped, or identifying which\n repository the artifacts are published to.\n \"\"\"\n\n names: tuple[str, ...]\n process: InteractiveProcess | None = None\n description: str | None = None\n data: PublishOutputData = field(default_factory=PublishOutputData)\n\n def get_output_data(self, **extra_data) -> PublishOutputData:\n return PublishOutputData(\n {\n \"names\": self.names,\n **self.data,\n **extra_data,\n }\n )\n\n\nclass PublishProcesses(Collection[PublishPackages]):\n \"\"\"Collection of what processes to run for all built packages.\n\n This is returned from implementing rules in response to a PublishRequest.\n\n Depending on the capabilities of the publishing tool, the work may be partitioned based on\n number of artifacts and/or repositories to publish to.\n \"\"\"\n\n\n@dataclass(frozen=True)\nclass PublishProcessesRequest:\n \"\"\"Internal request taking all field sets for a target and turning it into a `PublishProcesses`\n collection (via registered publish plugins).\"\"\"\n\n package_field_sets: tuple[PackageFieldSet, ...]\n publish_field_sets: tuple[PublishFieldSet, ...]\n\n\nclass PublishSubsystem(GoalSubsystem):\n name = \"publish\"\n help = \"Publish deliverables (assets, distributions, images, etc).\"\n\n @classmethod\n def activated(cls, union_membership: UnionMembership) -> bool:\n return PackageFieldSet in union_membership and PublishFieldSet in union_membership\n\n output = StrOption(\n \"--output\",\n default=None,\n help=\"Filename for JSON structured publish information.\",\n )\n\n\nclass Publish(Goal):\n subsystem_cls = PublishSubsystem\n\n\n@goal_rule\nasync def run_publish(console: Console, publish: PublishSubsystem) -> Publish:\n target_roots_to_package_field_sets, target_roots_to_publish_field_sets = await MultiGet(\n Get(\n TargetRootsToFieldSets,\n TargetRootsToFieldSetsRequest(\n PackageFieldSet,\n goal_description=\"\",\n # Don't warn/error here because it's already covered by `PublishFieldSet`.\n no_applicable_targets_behavior=NoApplicableTargetsBehavior.ignore,\n ),\n ),\n Get(\n TargetRootsToFieldSets,\n TargetRootsToFieldSetsRequest(\n PublishFieldSet,\n goal_description=\"the `publish` goal\",\n 
no_applicable_targets_behavior=NoApplicableTargetsBehavior.warn,\n ),\n ),\n )\n\n # Only keep field sets that both package something, and have something to publish.\n targets = set(target_roots_to_package_field_sets.targets).intersection(\n set(target_roots_to_publish_field_sets.targets)\n )\n\n if not targets:\n return Publish(exit_code=0)\n\n # Build all packages and request the processes to run for each field set.\n processes = await MultiGet(\n Get(\n PublishProcesses,\n PublishProcessesRequest(\n target_roots_to_package_field_sets.mapping[tgt],\n target_roots_to_publish_field_sets.mapping[tgt],\n ),\n )\n for tgt in targets\n )\n\n # Run all processes interactively.\n exit_code: int = 0\n outputs: list[PublishOutputData] = []\n results: list[str] = []\n\n for pub in chain.from_iterable(processes):\n if not pub.process:\n sigil = console.sigil_skipped()\n status = \"skipped\"\n if pub.description:\n status += f\" {pub.description}\"\n for name in pub.names:\n results.append(f\"{sigil} {name} {status}.\")\n outputs.append(pub.get_output_data(published=False, status=status))\n continue\n\n res = await Effect(InteractiveProcessResult, InteractiveProcess, pub.process)\n if res.exit_code == 0:\n sigil = console.sigil_succeeded()\n status = \"published\"\n prep = \"to\"\n else:\n sigil = console.sigil_failed()\n status = \"failed\"\n prep = \"for\"\n exit_code = res.exit_code\n\n if pub.description:\n status += f\" {prep} {pub.description}\"\n\n for name in pub.names:\n results.append(f\"{sigil} {name} {status}.\")\n\n outputs.append(\n pub.get_output_data(\n exit_code=res.exit_code,\n published=res.exit_code == 0,\n status=status,\n )\n )\n\n console.print_stderr(\"\")\n if not results:\n sigil = console.sigil_skipped()\n console.print_stderr(f\"{sigil} Nothing published.\")\n\n # We collect all results to the end, so all output from the interactive processes are done,\n # before printing the results.\n for line in results:\n console.print_stderr(line)\n\n # Log structured output\n output_data = json.dumps(outputs, cls=_PublishJsonEncoder, indent=2, sort_keys=True)\n logger.debug(f\"Publish result data:\\n{output_data}\")\n if publish.output:\n with open(publish.output, mode=\"w\") as fd:\n fd.write(output_data)\n\n return Publish(exit_code)\n\n\nclass _PublishJsonEncoder(json.JSONEncoder):\n safe_to_str_types = (Address,)\n\n def default(self, o):\n \"\"\"Return a serializable object for o.\"\"\"\n if is_dataclass(o):\n return asdict(o)\n if isinstance(o, collections.abc.Mapping):\n return dict(o)\n if isinstance(o, collections.abc.Sequence):\n return list(o)\n try:\n return super().default(o)\n except TypeError:\n return str(o)\n\n\n@rule\nasync def package_for_publish(request: PublishProcessesRequest) -> PublishProcesses:\n packages = await MultiGet(\n Get(BuiltPackage, PackageFieldSet, field_set) for field_set in request.package_field_sets\n )\n\n for pkg in packages:\n for artifact in pkg.artifacts:\n if artifact.relpath:\n logger.info(f\"Packaged {artifact.relpath}\")\n elif artifact.extra_log_lines:\n logger.info(str(artifact.extra_log_lines[0]))\n\n publish = await MultiGet(\n Get(\n PublishProcesses,\n PublishRequest,\n field_set._request(packages),\n )\n for field_set in request.publish_field_sets\n )\n\n # Flatten and dress each publish processes collection with data about its origin.\n publish_processes = [\n replace(\n publish_process,\n data=PublishOutputData({**publish_process.data, **field_set.get_output_data()}),\n )\n for processes, field_set in zip(publish, 
request.publish_field_sets)\n for publish_process in processes\n ]\n\n return PublishProcesses(publish_processes)\n\n\ndef rules():\n return collect_rules()\n", "path": "src/python/pants/core/goals/publish.py"}], "after_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\"\"\"Goal for publishing packaged targets to any repository or registry etc.\n\nPlugins implement the publish protocol that provides this goal with the processes to run in order to\npublish the artifacts.\n\nThe publish protocol consists of defining two union members and one rule, returning the processes to\nrun. See the doc for the corresponding classses in this module for details on the classes to define.\n\nExample rule:\n\n @rule\n async def publish_example(request: PublishToMyRepoRequest, ...) -> PublishProcesses:\n # Create `InteractiveProcess` instances as required by the `request`.\n return PublishProcesses(...)\n\"\"\"\n\n\nfrom __future__ import annotations\n\nimport collections\nimport json\nimport logging\nfrom abc import ABCMeta\nfrom dataclasses import asdict, dataclass, field, is_dataclass, replace\nfrom itertools import chain\nfrom typing import ClassVar, Generic, Type, TypeVar\n\nfrom typing_extensions import final\n\nfrom pants.core.goals.package import BuiltPackage, PackageFieldSet\nfrom pants.engine.addresses import Address\nfrom pants.engine.collection import Collection\nfrom pants.engine.console import Console\nfrom pants.engine.goal import Goal, GoalSubsystem\nfrom pants.engine.process import InteractiveProcess, InteractiveProcessResult\nfrom pants.engine.rules import Effect, Get, MultiGet, collect_rules, goal_rule, rule\nfrom pants.engine.target import (\n FieldSet,\n ImmutableValue,\n NoApplicableTargetsBehavior,\n TargetRootsToFieldSets,\n TargetRootsToFieldSetsRequest,\n)\nfrom pants.engine.unions import UnionMembership, UnionRule, union\nfrom pants.option.option_types import StrOption\nfrom pants.util.frozendict import FrozenDict\n\nlogger = logging.getLogger(__name__)\n\n\n_F = TypeVar(\"_F\", bound=FieldSet)\n\n\nclass PublishOutputData(FrozenDict[str, ImmutableValue]):\n pass\n\n\n@union\n@dataclass(frozen=True)\nclass PublishRequest(Generic[_F]):\n \"\"\"Implement a union member subclass of this union class along with a PublishFieldSet subclass\n that appoints that member subclass in order to receive publish requests for targets compatible\n with the field set.\n\n The `packages` hold all artifacts produced for a given target to be published.\n\n Example:\n\n PublishToMyRepoRequest(PublishRequest):\n pass\n\n PublishToMyRepoFieldSet(PublishFieldSet):\n publish_request_type = PublishToMyRepoRequest\n\n # Standard FieldSet semantics from here on:\n required_fields = (MyRepositories,)\n ...\n \"\"\"\n\n field_set: _F\n packages: tuple[BuiltPackage, ...]\n\n\n_T = TypeVar(\"_T\", bound=PublishRequest)\n\n\n@union\n@dataclass(frozen=True)\nclass PublishFieldSet(Generic[_T], FieldSet, metaclass=ABCMeta):\n \"\"\"FieldSet for PublishRequest.\n\n Union members may list any fields required to fulfill the instantiation of the\n `PublishProcesses` result of the publish rule.\n \"\"\"\n\n # Subclasses must provide this, to a union member (subclass) of `PublishRequest`.\n publish_request_type: ClassVar[Type[_T]]\n\n @final\n def _request(self, packages: tuple[BuiltPackage, ...]) -> _T:\n \"\"\"Internal helper for the core publish goal.\"\"\"\n return self.publish_request_type(field_set=self, 
packages=packages)\n\n @final\n @classmethod\n def rules(cls) -> tuple[UnionRule, ...]:\n \"\"\"Helper method for registering the union members.\"\"\"\n return (\n UnionRule(PublishFieldSet, cls),\n UnionRule(PublishRequest, cls.publish_request_type),\n )\n\n def get_output_data(self) -> PublishOutputData:\n return PublishOutputData({\"target\": self.address})\n\n\n@dataclass(frozen=True)\nclass PublishPackages:\n \"\"\"Processes to run in order to publish the named artifacts.\n\n The `names` should list all artifacts being published by the `process` command.\n\n The `process` may be `None`, indicating that it will not be published. This will be logged as\n `skipped`. If the process returns a non zero exit code, it will be logged as `failed`.\n\n The `description` may be a reason explaining why the publish was skipped, or identifying which\n repository the artifacts are published to.\n \"\"\"\n\n names: tuple[str, ...]\n process: InteractiveProcess | None = None\n description: str | None = None\n data: PublishOutputData = field(default_factory=PublishOutputData)\n\n def get_output_data(self, **extra_data) -> PublishOutputData:\n return PublishOutputData(\n {\n \"names\": self.names,\n **self.data,\n **extra_data,\n }\n )\n\n\nclass PublishProcesses(Collection[PublishPackages]):\n \"\"\"Collection of what processes to run for all built packages.\n\n This is returned from implementing rules in response to a PublishRequest.\n\n Depending on the capabilities of the publishing tool, the work may be partitioned based on\n number of artifacts and/or repositories to publish to.\n \"\"\"\n\n\n@dataclass(frozen=True)\nclass PublishProcessesRequest:\n \"\"\"Internal request taking all field sets for a target and turning it into a `PublishProcesses`\n collection (via registered publish plugins).\"\"\"\n\n package_field_sets: tuple[PackageFieldSet, ...]\n publish_field_sets: tuple[PublishFieldSet, ...]\n\n\nclass PublishSubsystem(GoalSubsystem):\n name = \"publish\"\n help = \"Publish deliverables (assets, distributions, images, etc).\"\n\n @classmethod\n def activated(cls, union_membership: UnionMembership) -> bool:\n return PackageFieldSet in union_membership and PublishFieldSet in union_membership\n\n output = StrOption(\n \"--output\",\n default=None,\n help=\"Filename for JSON structured publish information.\",\n )\n\n\nclass Publish(Goal):\n subsystem_cls = PublishSubsystem\n\n\n@goal_rule\nasync def run_publish(console: Console, publish: PublishSubsystem) -> Publish:\n target_roots_to_package_field_sets, target_roots_to_publish_field_sets = await MultiGet(\n Get(\n TargetRootsToFieldSets,\n TargetRootsToFieldSetsRequest(\n PackageFieldSet,\n goal_description=\"\",\n # Don't warn/error here because it's already covered by `PublishFieldSet`.\n no_applicable_targets_behavior=NoApplicableTargetsBehavior.ignore,\n ),\n ),\n Get(\n TargetRootsToFieldSets,\n TargetRootsToFieldSetsRequest(\n PublishFieldSet,\n goal_description=\"the `publish` goal\",\n no_applicable_targets_behavior=NoApplicableTargetsBehavior.warn,\n ),\n ),\n )\n\n # Only keep field sets that both package something, and have something to publish.\n targets = set(target_roots_to_package_field_sets.targets).intersection(\n set(target_roots_to_publish_field_sets.targets)\n )\n\n if not targets:\n return Publish(exit_code=0)\n\n # Build all packages and request the processes to run for each field set.\n processes = await MultiGet(\n Get(\n PublishProcesses,\n PublishProcessesRequest(\n target_roots_to_package_field_sets.mapping[tgt],\n 
target_roots_to_publish_field_sets.mapping[tgt],\n ),\n )\n for tgt in targets\n )\n\n # Run all processes interactively.\n exit_code: int = 0\n outputs: list[PublishOutputData] = []\n results: list[str] = []\n\n for pub in chain.from_iterable(processes):\n if not pub.process:\n sigil = console.sigil_skipped()\n status = \"skipped\"\n if pub.description:\n status += f\" {pub.description}\"\n for name in pub.names:\n results.append(f\"{sigil} {name} {status}.\")\n outputs.append(pub.get_output_data(published=False, status=status))\n continue\n\n logger.debug(f\"Execute {pub.process}\")\n res = await Effect(InteractiveProcessResult, InteractiveProcess, pub.process)\n if res.exit_code == 0:\n sigil = console.sigil_succeeded()\n status = \"published\"\n prep = \"to\"\n else:\n sigil = console.sigil_failed()\n status = \"failed\"\n prep = \"for\"\n exit_code = res.exit_code\n\n if pub.description:\n status += f\" {prep} {pub.description}\"\n\n for name in pub.names:\n results.append(f\"{sigil} {name} {status}.\")\n\n outputs.append(\n pub.get_output_data(\n exit_code=res.exit_code,\n published=res.exit_code == 0,\n status=status,\n )\n )\n\n console.print_stderr(\"\")\n if not results:\n sigil = console.sigil_skipped()\n console.print_stderr(f\"{sigil} Nothing published.\")\n\n # We collect all results to the end, so all output from the interactive processes are done,\n # before printing the results.\n for line in results:\n console.print_stderr(line)\n\n # Log structured output\n output_data = json.dumps(outputs, cls=_PublishJsonEncoder, indent=2, sort_keys=True)\n logger.debug(f\"Publish result data:\\n{output_data}\")\n if publish.output:\n with open(publish.output, mode=\"w\") as fd:\n fd.write(output_data)\n\n return Publish(exit_code)\n\n\nclass _PublishJsonEncoder(json.JSONEncoder):\n safe_to_str_types = (Address,)\n\n def default(self, o):\n \"\"\"Return a serializable object for o.\"\"\"\n if is_dataclass(o):\n return asdict(o)\n if isinstance(o, collections.abc.Mapping):\n return dict(o)\n if isinstance(o, collections.abc.Sequence):\n return list(o)\n try:\n return super().default(o)\n except TypeError:\n return str(o)\n\n\n@rule\nasync def package_for_publish(request: PublishProcessesRequest) -> PublishProcesses:\n packages = await MultiGet(\n Get(BuiltPackage, PackageFieldSet, field_set) for field_set in request.package_field_sets\n )\n\n for pkg in packages:\n for artifact in pkg.artifacts:\n if artifact.relpath:\n logger.info(f\"Packaged {artifact.relpath}\")\n elif artifact.extra_log_lines:\n logger.info(str(artifact.extra_log_lines[0]))\n\n publish = await MultiGet(\n Get(\n PublishProcesses,\n PublishRequest,\n field_set._request(packages),\n )\n for field_set in request.publish_field_sets\n )\n\n # Flatten and dress each publish processes collection with data about its origin.\n publish_processes = [\n replace(\n publish_process,\n data=PublishOutputData({**publish_process.data, **field_set.get_output_data()}),\n )\n for processes, field_set in zip(publish, request.publish_field_sets)\n for publish_process in processes\n ]\n\n return PublishProcesses(publish_processes)\n\n\ndef rules():\n return collect_rules()\n", "path": "src/python/pants/core/goals/publish.py"}]} | 3,580 | 129 |
gh_patches_debug_10956 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-413 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
E2532 State Machine Definition key (OutputPath) for State of Type (Task) is not valid
cfn-lint version: 0.7.3
I am getting the above error when trying to lint a CF template containing a step function. The step function code works fine in the AWS console, though.
"CreatePublishedRequest": {
"Type": "Task",
"Resource": "{$createPublishedRequest}",
"ResultPath":"$.publishedRequest",
"OutputPath":"$.publishedRequest",
"Next": "PutRequest"
},
"PutRequest": {
"Type": "Task",
"Resource": "{$updateKey}",
"ResultPath":"$.response",
"Next": "Take Down Mock"
},
When I change "PutRequest" to use InputPath instead, I get the same error, only reported for InputPath rather than OutputPath.
--- END ISSUE ---
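As background for the fix below, the Amazon States Language treats `InputPath` and `OutputPath` as common fields that any state type (including `Task`) may carry, which is why the reporter's definition is accepted by the Step Functions console. The following standalone sketch is illustrative only — it is not part of the issue report and does not import cfn-lint; it simply mirrors the rule's key check, with the common-field list taken from the AWS documentation and the Task state taken from the issue.

```python
# Minimal sketch of the key check performed by rule E2532, with InputPath/OutputPath
# counted as common fields per the Amazon States Language docs (an assumption grounded
# in the AWS spec, not taken from cfn-lint's current code).
ASL_COMMON_FIELDS = {"Type", "Next", "End", "Comment", "InputPath", "OutputPath"}
TASK_FIELDS = {"Resource", "ResultPath", "Retry", "Catch", "TimeoutSeconds", "HeartbeatSeconds"}

state = {
    "Type": "Task",
    "Resource": "{$createPublishedRequest}",
    "ResultPath": "$.publishedRequest",
    "OutputPath": "$.publishedRequest",
    "Next": "PutRequest",
}

# Keys that the rule would flag as invalid for a Task state.
unknown_keys = set(state) - (ASL_COMMON_FIELDS | TASK_FIELDS)
print(unknown_keys)  # prints set(): every key is valid once OutputPath counts as a common field
```

With `OutputPath` (and `InputPath`) missing from the common-field list, the same check reports them as invalid, which is exactly the error in the issue.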
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/stepfunctions/StateMachine.py`
Content:
```
1 """
2 Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import json
18 import six
19 from cfnlint import CloudFormationLintRule
20 from cfnlint import RuleMatch
21
22
23 class StateMachine(CloudFormationLintRule):
24 """Check State Machine Definition"""
25 id = 'E2532'
26 shortdesc = 'Check State Machine Definition for proper syntax'
27 description = 'Check the State Machine String Definition to make sure its JSON. ' \
28 'Validate basic syntax of the file to determine validity.'
29 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-stepfunctions-statemachine.html'
30 tags = ['resources', 'stepfunctions']
31
32 def __init__(self):
33 """Init"""
34 self.resource_property_types.append('AWS::StepFunctions::StateMachine')
35
36 def _check_state_json(self, def_json, state_name, path):
37 """Check State JSON Definition"""
38 matches = []
39
40 common_state_keys = [
41 'Next',
42 'End',
43 'Type',
44 'Comment',
45 'Input',
46 'Ouptut',
47 ]
48 common_state_required_keys = [
49 'Type',
50 ]
51 state_key_types = {
52 'Pass': ['Result', 'ResultPath'],
53 'Task': ['Resource', 'ResultPath', 'Retry', 'Catch', 'TimeoutSeconds', 'HeartbeatSeconds'],
54 'Choice': ['Choices', 'Default'],
55 'Wait': ['Seconds', 'Timestamp', 'SecondsPath', 'TimestampPath'],
56 'Succeed': [],
57 'Fail': ['Cause', 'Error'],
58 'Parallel': ['Branches', 'ResultPath', 'Retry', 'Catch']
59 }
60 state_required_types = {
61 'Pass': [],
62 'Task': ['Resource'],
63 'Choice': ['Choices'],
64 'Wait': [],
65 'Succeed': [],
66 'Fail': [],
67 'Parallel': ['Branches']
68 }
69
70 for req_key in common_state_required_keys:
71 if req_key not in def_json:
72 message = 'State Machine Definition required key (%s) for State (%s) is missing' % (req_key, state_name)
73 matches.append(RuleMatch(path, message))
74 return matches
75
76 state_type = def_json.get('Type')
77
78 if state_type in state_key_types:
79 for state_key, _ in def_json.items():
80 if state_key not in common_state_keys + state_key_types.get(state_type, []):
81 message = 'State Machine Definition key (%s) for State (%s) of Type (%s) is not valid' % (state_key, state_name, state_type)
82 matches.append(RuleMatch(path, message))
83 for req_key in common_state_required_keys + state_required_types.get(state_type, []):
84 if req_key not in def_json:
85 message = 'State Machine Definition required key (%s) for State (%s) of Type (%s) is missing' % (req_key, state_name, state_type)
86 matches.append(RuleMatch(path, message))
87 return matches
88 else:
89 message = 'State Machine Definition Type (%s) is not valid' % (state_type)
90 matches.append(RuleMatch(path, message))
91
92 return matches
93
94 def _check_definition_json(self, def_json, path):
95 """Check JSON Definition"""
96 matches = []
97
98 top_level_keys = [
99 'Comment',
100 'StartAt',
101 'TimeoutSeconds',
102 'Version',
103 'States'
104 ]
105 top_level_required_keys = [
106 'StartAt',
107 'States'
108 ]
109 for top_key, _ in def_json.items():
110 if top_key not in top_level_keys:
111 message = 'State Machine Definition key (%s) is not valid' % top_key
112 matches.append(RuleMatch(path, message))
113
114 for req_key in top_level_required_keys:
115 if req_key not in def_json:
116 message = 'State Machine Definition required key (%s) is missing' % req_key
117 matches.append(RuleMatch(path, message))
118
119 for state_name, state_value in def_json.get('States', {}).items():
120 matches.extend(self._check_state_json(state_value, state_name, path))
121 return matches
122
123 def check_value(self, value, path):
124 """Check Definition Value"""
125 matches = []
126 try:
127 def_json = json.loads(value)
128 # pylint: disable=W0703
129 except Exception as err:
130 message = 'State Machine Definition needs to be formatted as JSON. Error %s' % err
131 matches.append(RuleMatch(path, message))
132 return matches
133
134 matches.extend(self._check_definition_json(def_json, path))
135 return matches
136
137 def check_sub(self, value, path):
138 """Check Sub Object"""
139 matches = []
140 if isinstance(value, list):
141 matches.extend(self.check_value(value[0], path))
142 elif isinstance(value, six.string_types):
143 matches.extend(self.check_value(value, path))
144
145 return matches
146
147 def match_resource_properties(self, properties, _, path, cfn):
148 """Check CloudFormation Properties"""
149 matches = []
150
151 matches.extend(
152 cfn.check_value(
153 obj=properties, key='DefinitionString',
154 path=path[:],
155 check_value=self.check_value,
156 check_sub=self.check_sub
157 ))
158
159 return matches
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/stepfunctions/StateMachine.py b/src/cfnlint/rules/resources/stepfunctions/StateMachine.py
--- a/src/cfnlint/rules/resources/stepfunctions/StateMachine.py
+++ b/src/cfnlint/rules/resources/stepfunctions/StateMachine.py
@@ -37,13 +37,14 @@
"""Check State JSON Definition"""
matches = []
+ # https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-common-fields.html
common_state_keys = [
'Next',
'End',
'Type',
'Comment',
- 'Input',
- 'Ouptut',
+ 'InputPath',
+ 'OutputPath',
]
common_state_required_keys = [
'Type',
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/stepfunctions/StateMachine.py b/src/cfnlint/rules/resources/stepfunctions/StateMachine.py\n--- a/src/cfnlint/rules/resources/stepfunctions/StateMachine.py\n+++ b/src/cfnlint/rules/resources/stepfunctions/StateMachine.py\n@@ -37,13 +37,14 @@\n \"\"\"Check State JSON Definition\"\"\"\n matches = []\n \n+ # https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-common-fields.html\n common_state_keys = [\n 'Next',\n 'End',\n 'Type',\n 'Comment',\n- 'Input',\n- 'Ouptut',\n+ 'InputPath',\n+ 'OutputPath',\n ]\n common_state_required_keys = [\n 'Type',\n", "issue": "E2532 State Machine Definition key (OutputPath) for State of Type (Task) is not valid\ncfn-lint version: 0.7.3\r\n\r\nI am getting the above error when trying to lint a CF template containing a step function. The step function code is working fine in AWS console though. \r\n\r\n\"CreatePublishedRequest\": {\r\n \"Type\": \"Task\",\r\n \"Resource\": \"{$createPublishedRequest}\",\r\n \"ResultPath\":\"$.publishedRequest\",\r\n \"OutputPath\":\"$.publishedRequest\",\r\n \"Next\": \"PutRequest\"\r\n },\r\n\"PutRequest\": {\r\n \"Type\": \"Task\",\r\n \"Resource\": \"{$updateKey}\",\r\n \"ResultPath\":\"$.response\",\r\n \"Next\": \"Take Down Mock\"\r\n },\r\n\r\nWhen trying to change to using InputPath in \"PutRequest\" instead I am getting the same error, but for InputPath instead. \r\n\n", "before_files": [{"content": "\"\"\"\n Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport json\nimport six\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\n\nclass StateMachine(CloudFormationLintRule):\n \"\"\"Check State Machine Definition\"\"\"\n id = 'E2532'\n shortdesc = 'Check State Machine Definition for proper syntax'\n description = 'Check the State Machine String Definition to make sure its JSON. 
' \\\n 'Validate basic syntax of the file to determine validity.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-stepfunctions-statemachine.html'\n tags = ['resources', 'stepfunctions']\n\n def __init__(self):\n \"\"\"Init\"\"\"\n self.resource_property_types.append('AWS::StepFunctions::StateMachine')\n\n def _check_state_json(self, def_json, state_name, path):\n \"\"\"Check State JSON Definition\"\"\"\n matches = []\n\n common_state_keys = [\n 'Next',\n 'End',\n 'Type',\n 'Comment',\n 'Input',\n 'Ouptut',\n ]\n common_state_required_keys = [\n 'Type',\n ]\n state_key_types = {\n 'Pass': ['Result', 'ResultPath'],\n 'Task': ['Resource', 'ResultPath', 'Retry', 'Catch', 'TimeoutSeconds', 'HeartbeatSeconds'],\n 'Choice': ['Choices', 'Default'],\n 'Wait': ['Seconds', 'Timestamp', 'SecondsPath', 'TimestampPath'],\n 'Succeed': [],\n 'Fail': ['Cause', 'Error'],\n 'Parallel': ['Branches', 'ResultPath', 'Retry', 'Catch']\n }\n state_required_types = {\n 'Pass': [],\n 'Task': ['Resource'],\n 'Choice': ['Choices'],\n 'Wait': [],\n 'Succeed': [],\n 'Fail': [],\n 'Parallel': ['Branches']\n }\n\n for req_key in common_state_required_keys:\n if req_key not in def_json:\n message = 'State Machine Definition required key (%s) for State (%s) is missing' % (req_key, state_name)\n matches.append(RuleMatch(path, message))\n return matches\n\n state_type = def_json.get('Type')\n\n if state_type in state_key_types:\n for state_key, _ in def_json.items():\n if state_key not in common_state_keys + state_key_types.get(state_type, []):\n message = 'State Machine Definition key (%s) for State (%s) of Type (%s) is not valid' % (state_key, state_name, state_type)\n matches.append(RuleMatch(path, message))\n for req_key in common_state_required_keys + state_required_types.get(state_type, []):\n if req_key not in def_json:\n message = 'State Machine Definition required key (%s) for State (%s) of Type (%s) is missing' % (req_key, state_name, state_type)\n matches.append(RuleMatch(path, message))\n return matches\n else:\n message = 'State Machine Definition Type (%s) is not valid' % (state_type)\n matches.append(RuleMatch(path, message))\n\n return matches\n\n def _check_definition_json(self, def_json, path):\n \"\"\"Check JSON Definition\"\"\"\n matches = []\n\n top_level_keys = [\n 'Comment',\n 'StartAt',\n 'TimeoutSeconds',\n 'Version',\n 'States'\n ]\n top_level_required_keys = [\n 'StartAt',\n 'States'\n ]\n for top_key, _ in def_json.items():\n if top_key not in top_level_keys:\n message = 'State Machine Definition key (%s) is not valid' % top_key\n matches.append(RuleMatch(path, message))\n\n for req_key in top_level_required_keys:\n if req_key not in def_json:\n message = 'State Machine Definition required key (%s) is missing' % req_key\n matches.append(RuleMatch(path, message))\n\n for state_name, state_value in def_json.get('States', {}).items():\n matches.extend(self._check_state_json(state_value, state_name, path))\n return matches\n\n def check_value(self, value, path):\n \"\"\"Check Definition Value\"\"\"\n matches = []\n try:\n def_json = json.loads(value)\n # pylint: disable=W0703\n except Exception as err:\n message = 'State Machine Definition needs to be formatted as JSON. 
Error %s' % err\n matches.append(RuleMatch(path, message))\n return matches\n\n matches.extend(self._check_definition_json(def_json, path))\n return matches\n\n def check_sub(self, value, path):\n \"\"\"Check Sub Object\"\"\"\n matches = []\n if isinstance(value, list):\n matches.extend(self.check_value(value[0], path))\n elif isinstance(value, six.string_types):\n matches.extend(self.check_value(value, path))\n\n return matches\n\n def match_resource_properties(self, properties, _, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n matches.extend(\n cfn.check_value(\n obj=properties, key='DefinitionString',\n path=path[:],\n check_value=self.check_value,\n check_sub=self.check_sub\n ))\n\n return matches\n", "path": "src/cfnlint/rules/resources/stepfunctions/StateMachine.py"}], "after_files": [{"content": "\"\"\"\n Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport json\nimport six\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\n\nclass StateMachine(CloudFormationLintRule):\n \"\"\"Check State Machine Definition\"\"\"\n id = 'E2532'\n shortdesc = 'Check State Machine Definition for proper syntax'\n description = 'Check the State Machine String Definition to make sure its JSON. 
' \\\n 'Validate basic syntax of the file to determine validity.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-stepfunctions-statemachine.html'\n tags = ['resources', 'stepfunctions']\n\n def __init__(self):\n \"\"\"Init\"\"\"\n self.resource_property_types.append('AWS::StepFunctions::StateMachine')\n\n def _check_state_json(self, def_json, state_name, path):\n \"\"\"Check State JSON Definition\"\"\"\n matches = []\n\n # https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-common-fields.html\n common_state_keys = [\n 'Next',\n 'End',\n 'Type',\n 'Comment',\n 'InputPath',\n 'OutputPath',\n ]\n common_state_required_keys = [\n 'Type',\n ]\n state_key_types = {\n 'Pass': ['Result', 'ResultPath'],\n 'Task': ['Resource', 'ResultPath', 'Retry', 'Catch', 'TimeoutSeconds', 'HeartbeatSeconds'],\n 'Choice': ['Choices', 'Default'],\n 'Wait': ['Seconds', 'Timestamp', 'SecondsPath', 'TimestampPath'],\n 'Succeed': [],\n 'Fail': ['Cause', 'Error'],\n 'Parallel': ['Branches', 'ResultPath', 'Retry', 'Catch']\n }\n state_required_types = {\n 'Pass': [],\n 'Task': ['Resource'],\n 'Choice': ['Choices'],\n 'Wait': [],\n 'Succeed': [],\n 'Fail': [],\n 'Parallel': ['Branches']\n }\n\n for req_key in common_state_required_keys:\n if req_key not in def_json:\n message = 'State Machine Definition required key (%s) for State (%s) is missing' % (req_key, state_name)\n matches.append(RuleMatch(path, message))\n return matches\n\n state_type = def_json.get('Type')\n\n if state_type in state_key_types:\n for state_key, _ in def_json.items():\n if state_key not in common_state_keys + state_key_types.get(state_type, []):\n message = 'State Machine Definition key (%s) for State (%s) of Type (%s) is not valid' % (state_key, state_name, state_type)\n matches.append(RuleMatch(path, message))\n for req_key in common_state_required_keys + state_required_types.get(state_type, []):\n if req_key not in def_json:\n message = 'State Machine Definition required key (%s) for State (%s) of Type (%s) is missing' % (req_key, state_name, state_type)\n matches.append(RuleMatch(path, message))\n return matches\n else:\n message = 'State Machine Definition Type (%s) is not valid' % (state_type)\n matches.append(RuleMatch(path, message))\n\n return matches\n\n def _check_definition_json(self, def_json, path):\n \"\"\"Check JSON Definition\"\"\"\n matches = []\n\n top_level_keys = [\n 'Comment',\n 'StartAt',\n 'TimeoutSeconds',\n 'Version',\n 'States'\n ]\n top_level_required_keys = [\n 'StartAt',\n 'States'\n ]\n for top_key, _ in def_json.items():\n if top_key not in top_level_keys:\n message = 'State Machine Definition key (%s) is not valid' % top_key\n matches.append(RuleMatch(path, message))\n\n for req_key in top_level_required_keys:\n if req_key not in def_json:\n message = 'State Machine Definition required key (%s) is missing' % req_key\n matches.append(RuleMatch(path, message))\n\n for state_name, state_value in def_json.get('States', {}).items():\n matches.extend(self._check_state_json(state_value, state_name, path))\n return matches\n\n def check_value(self, value, path):\n \"\"\"Check Definition Value\"\"\"\n matches = []\n try:\n def_json = json.loads(value)\n # pylint: disable=W0703\n except Exception as err:\n message = 'State Machine Definition needs to be formatted as JSON. 
Error %s' % err\n matches.append(RuleMatch(path, message))\n return matches\n\n matches.extend(self._check_definition_json(def_json, path))\n return matches\n\n def check_sub(self, value, path):\n \"\"\"Check Sub Object\"\"\"\n matches = []\n if isinstance(value, list):\n matches.extend(self.check_value(value[0], path))\n elif isinstance(value, six.string_types):\n matches.extend(self.check_value(value, path))\n\n return matches\n\n def match_resource_properties(self, properties, _, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n matches.extend(\n cfn.check_value(\n obj=properties, key='DefinitionString',\n path=path[:],\n check_value=self.check_value,\n check_sub=self.check_sub\n ))\n\n return matches\n", "path": "src/cfnlint/rules/resources/stepfunctions/StateMachine.py"}]} | 2,151 | 168 |
gh_patches_debug_26248 | rasdani/github-patches | git_diff | statsmodels__statsmodels-4007 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
seasonal_decompose() With Known Freq but Without DatetimeIndex()
In issue #3225, someone wrote "In 0.8, you should be able to specify freq as keyword argument to override the index." But I still seem to be having the same problem as the questioner there with my data.
Here is a simple reproducible code example of my issue:
```
import pandas as pd
import statsmodels.api as sm
dta = pd.Series([x%3 for x in range(100)])
decomposed = sm.tsa.seasonal_decompose(dta, freq=3)
```
> AttributeError: 'RangeIndex' object has no attribute 'inferred_freq'
```
import statsmodels
print(statsmodels.__version__)
```
> 0.8.0
Should it be possible to use StatsModels seasonal_decompose() with a given frequency but without a DatetimeIndex? My real data has half-hour resolution, but I don't know which dates/times it corresponds to, so there is no clean way to map it to a datetime index.
I've also posted this question on [StackOverflow](http://stackoverflow.com/questions/42425774/using-statsmodels-seasonal-decompose-without-datetimeindex-but-with-known-freq) and will add an answer there if I get one here. Thanks everyone for your time.
--- END ISSUE ---
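One workaround for exactly this situation, before the fix below, is to attach an arbitrary DatetimeIndex of the known resolution, since only the spacing matters for the decomposition. The sketch below is illustrative only: it uses the 0.8-era `freq` keyword from the issue (newer statsmodels releases call this argument `period`), and the start date is made up.

```python
# Workaround sketch for statsmodels 0.8.0: give the half-hourly data a synthetic
# DatetimeIndex so seasonal_decompose has an index with an inferrable frequency.
import pandas as pd
import statsmodels.api as sm

values = [x % 3 for x in range(100)]  # stand-in for the real half-hourly series
index = pd.date_range("2000-01-01", periods=len(values), freq="30min")  # arbitrary start date
dta = pd.Series(values, index=index)

decomposed = sm.tsa.seasonal_decompose(dta, freq=3)
print(decomposed.seasonal.head())
```

After the patch below, the synthetic index is unnecessary and `sm.tsa.seasonal_decompose(pd.Series(values), freq=3)` works directly.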
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `statsmodels/tsa/seasonal.py`
Content:
```
1 """
2 Seasonal Decomposition by Moving Averages
3 """
4 from statsmodels.compat.python import lmap, range, iteritems
5 import numpy as np
6 from pandas.core.nanops import nanmean as pd_nanmean
7 from .filters._utils import _maybe_get_pandas_wrapper_freq
8 from .filters.filtertools import convolution_filter
9 from statsmodels.tsa.tsatools import freq_to_period
10
11
12 def seasonal_mean(x, freq):
13 """
14 Return means for each period in x. freq is an int that gives the
15 number of periods per cycle. E.g., 12 for monthly. NaNs are ignored
16 in the mean.
17 """
18 return np.array([pd_nanmean(x[i::freq]) for i in range(freq)])
19
20
21 def seasonal_decompose(x, model="additive", filt=None, freq=None, two_sided=True):
22 """
23 Seasonal decomposition using moving averages
24
25 Parameters
26 ----------
27 x : array-like
28 Time series
29 model : str {"additive", "multiplicative"}
30 Type of seasonal component. Abbreviations are accepted.
31 filt : array-like
32 The filter coefficients for filtering out the seasonal component.
33 The concrete moving average method used in filtering is determined by two_sided.
34 freq : int, optional
35 Frequency of the series. Must be used if x is not a pandas object.
36 Overrides default periodicity of x if x is a pandas
37 object with a timeseries index.
38 two_sided : bool
39 The moving average method used in filtering.
40 If True (default), a centered moving average is computed using the filt.
41 If False, the filter coefficients are for past values only.
42
43 Returns
44 -------
45 results : obj
46 A object with seasonal, trend, and resid attributes.
47
48 Notes
49 -----
50 This is a naive decomposition. More sophisticated methods should
51 be preferred.
52
53 The additive model is Y[t] = T[t] + S[t] + e[t]
54
55 The multiplicative model is Y[t] = T[t] * S[t] * e[t]
56
57 The seasonal component is first removed by applying a convolution
58 filter to the data. The average of this smoothed series for each
59 period is the returned seasonal component.
60
61 See Also
62 --------
63 statsmodels.tsa.filters.bk_filter.bkfilter
64 statsmodels.tsa.filters.cf_filter.xffilter
65 statsmodels.tsa.filters.hp_filter.hpfilter
66 statsmodels.tsa.filters.convolution_filter
67 """
68 _pandas_wrapper, pfreq = _maybe_get_pandas_wrapper_freq(x)
69 x = np.asanyarray(x).squeeze()
70 nobs = len(x)
71
72 if not np.all(np.isfinite(x)):
73 raise ValueError("This function does not handle missing values")
74 if model.startswith('m'):
75 if np.any(x <= 0):
76 raise ValueError("Multiplicative seasonality is not appropriate "
77 "for zero and negative values")
78
79 if freq is None:
80 if pfreq is not None:
81 pfreq = freq_to_period(pfreq)
82 freq = pfreq
83 else:
84 raise ValueError("You must specify a freq or x must be a "
85 "pandas object with a timeseries index with"
86 "a freq not set to None")
87
88 if filt is None:
89 if freq % 2 == 0: # split weights at ends
90 filt = np.array([.5] + [1] * (freq - 1) + [.5]) / freq
91 else:
92 filt = np.repeat(1./freq, freq)
93
94 nsides = int(two_sided) + 1
95 trend = convolution_filter(x, filt, nsides)
96
97 # nan pad for conformability - convolve doesn't do it
98 if model.startswith('m'):
99 detrended = x / trend
100 else:
101 detrended = x - trend
102
103 period_averages = seasonal_mean(detrended, freq)
104
105 if model.startswith('m'):
106 period_averages /= np.mean(period_averages)
107 else:
108 period_averages -= np.mean(period_averages)
109
110 seasonal = np.tile(period_averages, nobs // freq + 1)[:nobs]
111
112 if model.startswith('m'):
113 resid = x / seasonal / trend
114 else:
115 resid = detrended - seasonal
116
117 results = lmap(_pandas_wrapper, [seasonal, trend, resid, x])
118 return DecomposeResult(seasonal=results[0], trend=results[1],
119 resid=results[2], observed=results[3])
120
121
122 class DecomposeResult(object):
123 def __init__(self, **kwargs):
124 for key, value in iteritems(kwargs):
125 setattr(self, key, value)
126 self.nobs = len(self.observed)
127
128 def plot(self):
129 from statsmodels.graphics.utils import _import_mpl
130 plt = _import_mpl()
131 fig, axes = plt.subplots(4, 1, sharex=True)
132 if hasattr(self.observed, 'plot'): # got pandas use it
133 self.observed.plot(ax=axes[0], legend=False)
134 axes[0].set_ylabel('Observed')
135 self.trend.plot(ax=axes[1], legend=False)
136 axes[1].set_ylabel('Trend')
137 self.seasonal.plot(ax=axes[2], legend=False)
138 axes[2].set_ylabel('Seasonal')
139 self.resid.plot(ax=axes[3], legend=False)
140 axes[3].set_ylabel('Residual')
141 else:
142 axes[0].plot(self.observed)
143 axes[0].set_ylabel('Observed')
144 axes[1].plot(self.trend)
145 axes[1].set_ylabel('Trend')
146 axes[2].plot(self.seasonal)
147 axes[2].set_ylabel('Seasonal')
148 axes[3].plot(self.resid)
149 axes[3].set_ylabel('Residual')
150 axes[3].set_xlabel('Time')
151 axes[3].set_xlim(0, self.nobs)
152
153 fig.tight_layout()
154 return fig
155
156
157 if __name__ == "__main__":
158 x = np.array([-50, 175, 149, 214, 247, 237, 225, 329, 729, 809,
159 530, 489, 540, 457, 195, 176, 337, 239, 128, 102,
160 232, 429, 3, 98, 43, -141, -77, -13, 125, 361, -45, 184])
161 results = seasonal_decompose(x, freq=4)
162
163 from pandas import DataFrame, DatetimeIndex
164 data = DataFrame(x, DatetimeIndex(start='1/1/1951',
165 periods=len(x),
166 freq='Q'))
167
168 res = seasonal_decompose(data)
169
170
```
Path: `statsmodels/tsa/filters/_utils.py`
Content:
```
1 from functools import wraps
2
3 from statsmodels.tools.data import _is_using_pandas
4 from statsmodels.tsa.tsatools import freq_to_period
5
6
7 def _get_pandas_wrapper(X, trim_head=None, trim_tail=None, names=None):
8 index = X.index
9 #TODO: allow use index labels
10 if trim_head is None and trim_tail is None:
11 index = index
12 elif trim_tail is None:
13 index = index[trim_head:]
14 elif trim_head is None:
15 index = index[:-trim_tail]
16 else:
17 index = index[trim_head:-trim_tail]
18 if hasattr(X, "columns"):
19 if names is None:
20 names = X.columns
21 return lambda x : X.__class__(x, index=index, columns=names)
22 else:
23 if names is None:
24 names = X.name
25 return lambda x : X.__class__(x, index=index, name=names)
26
27
28 def _maybe_get_pandas_wrapper(X, trim_head=None, trim_tail=None):
29 """
30 If using pandas returns a function to wrap the results, e.g., wrapper(X)
31 trim is an integer for the symmetric truncation of the series in some
32 filters.
33 otherwise returns None
34 """
35 if _is_using_pandas(X, None):
36 return _get_pandas_wrapper(X, trim_head, trim_tail)
37 else:
38 return
39
40
41 def _maybe_get_pandas_wrapper_freq(X, trim=None):
42 if _is_using_pandas(X, None):
43 index = X.index
44 func = _get_pandas_wrapper(X, trim)
45 freq = index.inferred_freq
46 return func, freq
47 else:
48 return lambda x : x, None
49
50
51 def pandas_wrapper(func, trim_head=None, trim_tail=None, names=None, *args,
52 **kwargs):
53 @wraps(func)
54 def new_func(X, *args, **kwargs):
55 # quick pass-through for do nothing case
56 if not _is_using_pandas(X, None):
57 return func(X, *args, **kwargs)
58
59 wrapper_func = _get_pandas_wrapper(X, trim_head, trim_tail,
60 names)
61 ret = func(X, *args, **kwargs)
62 ret = wrapper_func(ret)
63 return ret
64
65 return new_func
66
67
68 def pandas_wrapper_bunch(func, trim_head=None, trim_tail=None,
69 names=None, *args, **kwargs):
70 @wraps(func)
71 def new_func(X, *args, **kwargs):
72 # quick pass-through for do nothing case
73 if not _is_using_pandas(X, None):
74 return func(X, *args, **kwargs)
75
76 wrapper_func = _get_pandas_wrapper(X, trim_head, trim_tail,
77 names)
78 ret = func(X, *args, **kwargs)
79 ret = wrapper_func(ret)
80 return ret
81
82 return new_func
83
84
85 def pandas_wrapper_predict(func, trim_head=None, trim_tail=None,
86 columns=None, *args, **kwargs):
87 pass
88
89
90 def pandas_wrapper_freq(func, trim_head=None, trim_tail=None,
91 freq_kw='freq', columns=None, *args, **kwargs):
92 """
93 Return a new function that catches the incoming X, checks if it's pandas,
94 calls the functions as is. Then wraps the results in the incoming index.
95
96 Deals with frequencies. Expects that the function returns a tuple,
97 a Bunch object, or a pandas-object.
98 """
99
100 @wraps(func)
101 def new_func(X, *args, **kwargs):
102 # quick pass-through for do nothing case
103 if not _is_using_pandas(X, None):
104 return func(X, *args, **kwargs)
105
106 wrapper_func = _get_pandas_wrapper(X, trim_head, trim_tail,
107 columns)
108 index = X.index
109 freq = index.inferred_freq
110 kwargs.update({freq_kw : freq_to_period(freq)})
111 ret = func(X, *args, **kwargs)
112 ret = wrapper_func(ret)
113 return ret
114
115 return new_func
116
117
118 def dummy_func(X):
119 return X
120
121 def dummy_func_array(X):
122 return X.values
123
124 def dummy_func_pandas_columns(X):
125 return X.values
126
127
128 def dummy_func_pandas_series(X):
129 return X['A']
130
131 import pandas as pd
132 import numpy as np
133
134
135 def test_pandas_freq_decorator():
136 X = pd.util.testing.makeDataFrame()
137 # in X, get a function back that returns an X with the same columns
138 func = pandas_wrapper(dummy_func)
139
140 np.testing.assert_equal(func(X.values), X)
141
142 func = pandas_wrapper(dummy_func_array)
143 pd.util.testing.assert_frame_equal(func(X), X)
144
145 expected = X.rename(columns=dict(zip('ABCD', 'EFGH')))
146 func = pandas_wrapper(dummy_func_array, names=list('EFGH'))
147 pd.util.testing.assert_frame_equal(func(X), expected)
148
149
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/statsmodels/tsa/filters/_utils.py b/statsmodels/tsa/filters/_utils.py
--- a/statsmodels/tsa/filters/_utils.py
+++ b/statsmodels/tsa/filters/_utils.py
@@ -35,7 +35,7 @@
if _is_using_pandas(X, None):
return _get_pandas_wrapper(X, trim_head, trim_tail)
else:
- return
+ return lambda x : x
def _maybe_get_pandas_wrapper_freq(X, trim=None):
diff --git a/statsmodels/tsa/seasonal.py b/statsmodels/tsa/seasonal.py
--- a/statsmodels/tsa/seasonal.py
+++ b/statsmodels/tsa/seasonal.py
@@ -4,7 +4,8 @@
from statsmodels.compat.python import lmap, range, iteritems
import numpy as np
from pandas.core.nanops import nanmean as pd_nanmean
-from .filters._utils import _maybe_get_pandas_wrapper_freq
+from .filters._utils import (_maybe_get_pandas_wrapper_freq,
+ _maybe_get_pandas_wrapper)
from .filters.filtertools import convolution_filter
from statsmodels.tsa.tsatools import freq_to_period
@@ -65,7 +66,11 @@
statsmodels.tsa.filters.hp_filter.hpfilter
statsmodels.tsa.filters.convolution_filter
"""
- _pandas_wrapper, pfreq = _maybe_get_pandas_wrapper_freq(x)
+ if freq is None:
+ _pandas_wrapper, pfreq = _maybe_get_pandas_wrapper_freq(x)
+ else:
+ _pandas_wrapper = _maybe_get_pandas_wrapper(x)
+ pfreq = None
x = np.asanyarray(x).squeeze()
nobs = len(x)
| {"golden_diff": "diff --git a/statsmodels/tsa/filters/_utils.py b/statsmodels/tsa/filters/_utils.py\n--- a/statsmodels/tsa/filters/_utils.py\n+++ b/statsmodels/tsa/filters/_utils.py\n@@ -35,7 +35,7 @@\n if _is_using_pandas(X, None):\n return _get_pandas_wrapper(X, trim_head, trim_tail)\n else:\n- return\n+ return lambda x : x\n \n \n def _maybe_get_pandas_wrapper_freq(X, trim=None):\ndiff --git a/statsmodels/tsa/seasonal.py b/statsmodels/tsa/seasonal.py\n--- a/statsmodels/tsa/seasonal.py\n+++ b/statsmodels/tsa/seasonal.py\n@@ -4,7 +4,8 @@\n from statsmodels.compat.python import lmap, range, iteritems\n import numpy as np\n from pandas.core.nanops import nanmean as pd_nanmean\n-from .filters._utils import _maybe_get_pandas_wrapper_freq\n+from .filters._utils import (_maybe_get_pandas_wrapper_freq,\n+ _maybe_get_pandas_wrapper)\n from .filters.filtertools import convolution_filter\n from statsmodels.tsa.tsatools import freq_to_period\n \n@@ -65,7 +66,11 @@\n statsmodels.tsa.filters.hp_filter.hpfilter\n statsmodels.tsa.filters.convolution_filter\n \"\"\"\n- _pandas_wrapper, pfreq = _maybe_get_pandas_wrapper_freq(x)\n+ if freq is None:\n+ _pandas_wrapper, pfreq = _maybe_get_pandas_wrapper_freq(x)\n+ else:\n+ _pandas_wrapper = _maybe_get_pandas_wrapper(x)\n+ pfreq = None\n x = np.asanyarray(x).squeeze()\n nobs = len(x)\n", "issue": "seasonal_decompose() With Known Freq but Without DatetimeIndex()\nIn issue #3225, someone wrote \"In 0.8, you should be able to specify freq as keyword argument to override the index.\" But I seem to still be having the same problems as the questioner there, with my data.\r\n\r\nHere is a simple reproducible code example of my issue:\r\n```\r\nimport statsmodels.api as sm\r\ndta = pd.Series([x%3 for x in range(100)])\r\ndecomposed = sm.tsa.seasonal_decompose(dta, freq=3)\r\n```\r\n\r\n> AttributeError: 'RangeIndex' object has no attribute 'inferred_freq'\r\n\r\n```\r\nimport statsmodels\r\nprint(statsmodels.__version__)\r\n```\r\n\r\n> 0.8.0\r\n\r\nShould it be possible to use StatsModels seasonal_decompose() with a given frequency but without a DatetimeIndex? With my real data, I know it is of half-hour resolution, but I don't know the dates/times it corresponds to, so there's no neat/clean way of mapping it to a datetime index.\r\n\r\nI've also posted this question on [StackOverflow](http://stackoverflow.com/questions/42425774/using-statsmodels-seasonal-decompose-without-datetimeindex-but-with-known-freq) and will add an answer there if I get one here. Thanks everyone for your time.\n", "before_files": [{"content": "\"\"\"\nSeasonal Decomposition by Moving Averages\n\"\"\"\nfrom statsmodels.compat.python import lmap, range, iteritems\nimport numpy as np\nfrom pandas.core.nanops import nanmean as pd_nanmean\nfrom .filters._utils import _maybe_get_pandas_wrapper_freq\nfrom .filters.filtertools import convolution_filter\nfrom statsmodels.tsa.tsatools import freq_to_period\n\n\ndef seasonal_mean(x, freq):\n \"\"\"\n Return means for each period in x. freq is an int that gives the\n number of periods per cycle. E.g., 12 for monthly. NaNs are ignored\n in the mean.\n \"\"\"\n return np.array([pd_nanmean(x[i::freq]) for i in range(freq)])\n\n\ndef seasonal_decompose(x, model=\"additive\", filt=None, freq=None, two_sided=True):\n \"\"\"\n Seasonal decomposition using moving averages\n\n Parameters\n ----------\n x : array-like\n Time series\n model : str {\"additive\", \"multiplicative\"}\n Type of seasonal component. 
Abbreviations are accepted.\n filt : array-like\n The filter coefficients for filtering out the seasonal component.\n The concrete moving average method used in filtering is determined by two_sided.\n freq : int, optional\n Frequency of the series. Must be used if x is not a pandas object.\n Overrides default periodicity of x if x is a pandas\n object with a timeseries index.\n two_sided : bool\n The moving average method used in filtering.\n If True (default), a centered moving average is computed using the filt.\n If False, the filter coefficients are for past values only.\n\n Returns\n -------\n results : obj\n A object with seasonal, trend, and resid attributes.\n\n Notes\n -----\n This is a naive decomposition. More sophisticated methods should\n be preferred.\n\n The additive model is Y[t] = T[t] + S[t] + e[t]\n\n The multiplicative model is Y[t] = T[t] * S[t] * e[t]\n\n The seasonal component is first removed by applying a convolution\n filter to the data. The average of this smoothed series for each\n period is the returned seasonal component.\n\n See Also\n --------\n statsmodels.tsa.filters.bk_filter.bkfilter\n statsmodels.tsa.filters.cf_filter.xffilter\n statsmodels.tsa.filters.hp_filter.hpfilter\n statsmodels.tsa.filters.convolution_filter\n \"\"\"\n _pandas_wrapper, pfreq = _maybe_get_pandas_wrapper_freq(x)\n x = np.asanyarray(x).squeeze()\n nobs = len(x)\n\n if not np.all(np.isfinite(x)):\n raise ValueError(\"This function does not handle missing values\")\n if model.startswith('m'):\n if np.any(x <= 0):\n raise ValueError(\"Multiplicative seasonality is not appropriate \"\n \"for zero and negative values\")\n\n if freq is None:\n if pfreq is not None:\n pfreq = freq_to_period(pfreq)\n freq = pfreq\n else:\n raise ValueError(\"You must specify a freq or x must be a \"\n \"pandas object with a timeseries index with\"\n \"a freq not set to None\")\n\n if filt is None:\n if freq % 2 == 0: # split weights at ends\n filt = np.array([.5] + [1] * (freq - 1) + [.5]) / freq\n else:\n filt = np.repeat(1./freq, freq)\n\n nsides = int(two_sided) + 1\n trend = convolution_filter(x, filt, nsides)\n\n # nan pad for conformability - convolve doesn't do it\n if model.startswith('m'):\n detrended = x / trend\n else:\n detrended = x - trend\n\n period_averages = seasonal_mean(detrended, freq)\n\n if model.startswith('m'):\n period_averages /= np.mean(period_averages)\n else:\n period_averages -= np.mean(period_averages)\n\n seasonal = np.tile(period_averages, nobs // freq + 1)[:nobs]\n\n if model.startswith('m'):\n resid = x / seasonal / trend\n else:\n resid = detrended - seasonal\n\n results = lmap(_pandas_wrapper, [seasonal, trend, resid, x])\n return DecomposeResult(seasonal=results[0], trend=results[1],\n resid=results[2], observed=results[3])\n\n\nclass DecomposeResult(object):\n def __init__(self, **kwargs):\n for key, value in iteritems(kwargs):\n setattr(self, key, value)\n self.nobs = len(self.observed)\n\n def plot(self):\n from statsmodels.graphics.utils import _import_mpl\n plt = _import_mpl()\n fig, axes = plt.subplots(4, 1, sharex=True)\n if hasattr(self.observed, 'plot'): # got pandas use it\n self.observed.plot(ax=axes[0], legend=False)\n axes[0].set_ylabel('Observed')\n self.trend.plot(ax=axes[1], legend=False)\n axes[1].set_ylabel('Trend')\n self.seasonal.plot(ax=axes[2], legend=False)\n axes[2].set_ylabel('Seasonal')\n self.resid.plot(ax=axes[3], legend=False)\n axes[3].set_ylabel('Residual')\n else:\n axes[0].plot(self.observed)\n axes[0].set_ylabel('Observed')\n 
axes[1].plot(self.trend)\n axes[1].set_ylabel('Trend')\n axes[2].plot(self.seasonal)\n axes[2].set_ylabel('Seasonal')\n axes[3].plot(self.resid)\n axes[3].set_ylabel('Residual')\n axes[3].set_xlabel('Time')\n axes[3].set_xlim(0, self.nobs)\n\n fig.tight_layout()\n return fig\n\n\nif __name__ == \"__main__\":\n x = np.array([-50, 175, 149, 214, 247, 237, 225, 329, 729, 809,\n 530, 489, 540, 457, 195, 176, 337, 239, 128, 102,\n 232, 429, 3, 98, 43, -141, -77, -13, 125, 361, -45, 184])\n results = seasonal_decompose(x, freq=4)\n\n from pandas import DataFrame, DatetimeIndex\n data = DataFrame(x, DatetimeIndex(start='1/1/1951',\n periods=len(x),\n freq='Q'))\n\n res = seasonal_decompose(data)\n\n", "path": "statsmodels/tsa/seasonal.py"}, {"content": "from functools import wraps\n\nfrom statsmodels.tools.data import _is_using_pandas\nfrom statsmodels.tsa.tsatools import freq_to_period\n\n\ndef _get_pandas_wrapper(X, trim_head=None, trim_tail=None, names=None):\n index = X.index\n #TODO: allow use index labels\n if trim_head is None and trim_tail is None:\n index = index\n elif trim_tail is None:\n index = index[trim_head:]\n elif trim_head is None:\n index = index[:-trim_tail]\n else:\n index = index[trim_head:-trim_tail]\n if hasattr(X, \"columns\"):\n if names is None:\n names = X.columns\n return lambda x : X.__class__(x, index=index, columns=names)\n else:\n if names is None:\n names = X.name\n return lambda x : X.__class__(x, index=index, name=names)\n\n\ndef _maybe_get_pandas_wrapper(X, trim_head=None, trim_tail=None):\n \"\"\"\n If using pandas returns a function to wrap the results, e.g., wrapper(X)\n trim is an integer for the symmetric truncation of the series in some\n filters.\n otherwise returns None\n \"\"\"\n if _is_using_pandas(X, None):\n return _get_pandas_wrapper(X, trim_head, trim_tail)\n else:\n return\n\n\ndef _maybe_get_pandas_wrapper_freq(X, trim=None):\n if _is_using_pandas(X, None):\n index = X.index\n func = _get_pandas_wrapper(X, trim)\n freq = index.inferred_freq\n return func, freq\n else:\n return lambda x : x, None\n\n\ndef pandas_wrapper(func, trim_head=None, trim_tail=None, names=None, *args,\n **kwargs):\n @wraps(func)\n def new_func(X, *args, **kwargs):\n # quick pass-through for do nothing case\n if not _is_using_pandas(X, None):\n return func(X, *args, **kwargs)\n\n wrapper_func = _get_pandas_wrapper(X, trim_head, trim_tail,\n names)\n ret = func(X, *args, **kwargs)\n ret = wrapper_func(ret)\n return ret\n\n return new_func\n\n\ndef pandas_wrapper_bunch(func, trim_head=None, trim_tail=None,\n names=None, *args, **kwargs):\n @wraps(func)\n def new_func(X, *args, **kwargs):\n # quick pass-through for do nothing case\n if not _is_using_pandas(X, None):\n return func(X, *args, **kwargs)\n\n wrapper_func = _get_pandas_wrapper(X, trim_head, trim_tail,\n names)\n ret = func(X, *args, **kwargs)\n ret = wrapper_func(ret)\n return ret\n\n return new_func\n\n\ndef pandas_wrapper_predict(func, trim_head=None, trim_tail=None,\n columns=None, *args, **kwargs):\n pass\n\n\ndef pandas_wrapper_freq(func, trim_head=None, trim_tail=None,\n freq_kw='freq', columns=None, *args, **kwargs):\n \"\"\"\n Return a new function that catches the incoming X, checks if it's pandas,\n calls the functions as is. Then wraps the results in the incoming index.\n\n Deals with frequencies. 
Expects that the function returns a tuple,\n a Bunch object, or a pandas-object.\n \"\"\"\n\n @wraps(func)\n def new_func(X, *args, **kwargs):\n # quick pass-through for do nothing case\n if not _is_using_pandas(X, None):\n return func(X, *args, **kwargs)\n\n wrapper_func = _get_pandas_wrapper(X, trim_head, trim_tail,\n columns)\n index = X.index\n freq = index.inferred_freq\n kwargs.update({freq_kw : freq_to_period(freq)})\n ret = func(X, *args, **kwargs)\n ret = wrapper_func(ret)\n return ret\n\n return new_func\n\n\ndef dummy_func(X):\n return X\n\ndef dummy_func_array(X):\n return X.values\n\ndef dummy_func_pandas_columns(X):\n return X.values\n\n\ndef dummy_func_pandas_series(X):\n return X['A']\n\nimport pandas as pd\nimport numpy as np\n\n\ndef test_pandas_freq_decorator():\n X = pd.util.testing.makeDataFrame()\n # in X, get a function back that returns an X with the same columns\n func = pandas_wrapper(dummy_func)\n\n np.testing.assert_equal(func(X.values), X)\n\n func = pandas_wrapper(dummy_func_array)\n pd.util.testing.assert_frame_equal(func(X), X)\n\n expected = X.rename(columns=dict(zip('ABCD', 'EFGH')))\n func = pandas_wrapper(dummy_func_array, names=list('EFGH'))\n pd.util.testing.assert_frame_equal(func(X), expected)\n\n", "path": "statsmodels/tsa/filters/_utils.py"}], "after_files": [{"content": "\"\"\"\nSeasonal Decomposition by Moving Averages\n\"\"\"\nfrom statsmodels.compat.python import lmap, range, iteritems\nimport numpy as np\nfrom pandas.core.nanops import nanmean as pd_nanmean\nfrom .filters._utils import (_maybe_get_pandas_wrapper_freq,\n _maybe_get_pandas_wrapper)\nfrom .filters.filtertools import convolution_filter\nfrom statsmodels.tsa.tsatools import freq_to_period\n\n\ndef seasonal_mean(x, freq):\n \"\"\"\n Return means for each period in x. freq is an int that gives the\n number of periods per cycle. E.g., 12 for monthly. NaNs are ignored\n in the mean.\n \"\"\"\n return np.array([pd_nanmean(x[i::freq]) for i in range(freq)])\n\n\ndef seasonal_decompose(x, model=\"additive\", filt=None, freq=None, two_sided=True):\n \"\"\"\n Seasonal decomposition using moving averages\n\n Parameters\n ----------\n x : array-like\n Time series\n model : str {\"additive\", \"multiplicative\"}\n Type of seasonal component. Abbreviations are accepted.\n filt : array-like\n The filter coefficients for filtering out the seasonal component.\n The concrete moving average method used in filtering is determined by two_sided.\n freq : int, optional\n Frequency of the series. Must be used if x is not a pandas object.\n Overrides default periodicity of x if x is a pandas\n object with a timeseries index.\n two_sided : bool\n The moving average method used in filtering.\n If True (default), a centered moving average is computed using the filt.\n If False, the filter coefficients are for past values only.\n\n Returns\n -------\n results : obj\n A object with seasonal, trend, and resid attributes.\n\n Notes\n -----\n This is a naive decomposition. More sophisticated methods should\n be preferred.\n\n The additive model is Y[t] = T[t] + S[t] + e[t]\n\n The multiplicative model is Y[t] = T[t] * S[t] * e[t]\n\n The seasonal component is first removed by applying a convolution\n filter to the data. 
The average of this smoothed series for each\n period is the returned seasonal component.\n\n See Also\n --------\n statsmodels.tsa.filters.bk_filter.bkfilter\n statsmodels.tsa.filters.cf_filter.xffilter\n statsmodels.tsa.filters.hp_filter.hpfilter\n statsmodels.tsa.filters.convolution_filter\n \"\"\"\n if freq is None:\n _pandas_wrapper, pfreq = _maybe_get_pandas_wrapper_freq(x)\n else:\n _pandas_wrapper = _maybe_get_pandas_wrapper(x)\n pfreq = None\n x = np.asanyarray(x).squeeze()\n nobs = len(x)\n\n if not np.all(np.isfinite(x)):\n raise ValueError(\"This function does not handle missing values\")\n if model.startswith('m'):\n if np.any(x <= 0):\n raise ValueError(\"Multiplicative seasonality is not appropriate \"\n \"for zero and negative values\")\n\n if freq is None:\n if pfreq is not None:\n pfreq = freq_to_period(pfreq)\n freq = pfreq\n else:\n raise ValueError(\"You must specify a freq or x must be a \"\n \"pandas object with a timeseries index with\"\n \"a freq not set to None\")\n\n if filt is None:\n if freq % 2 == 0: # split weights at ends\n filt = np.array([.5] + [1] * (freq - 1) + [.5]) / freq\n else:\n filt = np.repeat(1./freq, freq)\n\n nsides = int(two_sided) + 1\n trend = convolution_filter(x, filt, nsides)\n\n # nan pad for conformability - convolve doesn't do it\n if model.startswith('m'):\n detrended = x / trend\n else:\n detrended = x - trend\n\n period_averages = seasonal_mean(detrended, freq)\n\n if model.startswith('m'):\n period_averages /= np.mean(period_averages)\n else:\n period_averages -= np.mean(period_averages)\n\n seasonal = np.tile(period_averages, nobs // freq + 1)[:nobs]\n\n if model.startswith('m'):\n resid = x / seasonal / trend\n else:\n resid = detrended - seasonal\n\n results = lmap(_pandas_wrapper, [seasonal, trend, resid, x])\n return DecomposeResult(seasonal=results[0], trend=results[1],\n resid=results[2], observed=results[3])\n\n\nclass DecomposeResult(object):\n def __init__(self, **kwargs):\n for key, value in iteritems(kwargs):\n setattr(self, key, value)\n self.nobs = len(self.observed)\n\n def plot(self):\n from statsmodels.graphics.utils import _import_mpl\n plt = _import_mpl()\n fig, axes = plt.subplots(4, 1, sharex=True)\n if hasattr(self.observed, 'plot'): # got pandas use it\n self.observed.plot(ax=axes[0], legend=False)\n axes[0].set_ylabel('Observed')\n self.trend.plot(ax=axes[1], legend=False)\n axes[1].set_ylabel('Trend')\n self.seasonal.plot(ax=axes[2], legend=False)\n axes[2].set_ylabel('Seasonal')\n self.resid.plot(ax=axes[3], legend=False)\n axes[3].set_ylabel('Residual')\n else:\n axes[0].plot(self.observed)\n axes[0].set_ylabel('Observed')\n axes[1].plot(self.trend)\n axes[1].set_ylabel('Trend')\n axes[2].plot(self.seasonal)\n axes[2].set_ylabel('Seasonal')\n axes[3].plot(self.resid)\n axes[3].set_ylabel('Residual')\n axes[3].set_xlabel('Time')\n axes[3].set_xlim(0, self.nobs)\n\n fig.tight_layout()\n return fig\n\n\nif __name__ == \"__main__\":\n x = np.array([-50, 175, 149, 214, 247, 237, 225, 329, 729, 809,\n 530, 489, 540, 457, 195, 176, 337, 239, 128, 102,\n 232, 429, 3, 98, 43, -141, -77, -13, 125, 361, -45, 184])\n results = seasonal_decompose(x, freq=4)\n\n from pandas import DataFrame, DatetimeIndex\n data = DataFrame(x, DatetimeIndex(start='1/1/1951',\n periods=len(x),\n freq='Q'))\n\n res = seasonal_decompose(data)\n\n", "path": "statsmodels/tsa/seasonal.py"}, {"content": "from functools import wraps\n\nfrom statsmodels.tools.data import _is_using_pandas\nfrom statsmodels.tsa.tsatools import 
freq_to_period\n\n\ndef _get_pandas_wrapper(X, trim_head=None, trim_tail=None, names=None):\n index = X.index\n #TODO: allow use index labels\n if trim_head is None and trim_tail is None:\n index = index\n elif trim_tail is None:\n index = index[trim_head:]\n elif trim_head is None:\n index = index[:-trim_tail]\n else:\n index = index[trim_head:-trim_tail]\n if hasattr(X, \"columns\"):\n if names is None:\n names = X.columns\n return lambda x : X.__class__(x, index=index, columns=names)\n else:\n if names is None:\n names = X.name\n return lambda x : X.__class__(x, index=index, name=names)\n\n\ndef _maybe_get_pandas_wrapper(X, trim_head=None, trim_tail=None):\n \"\"\"\n If using pandas returns a function to wrap the results, e.g., wrapper(X)\n trim is an integer for the symmetric truncation of the series in some\n filters.\n otherwise returns None\n \"\"\"\n if _is_using_pandas(X, None):\n return _get_pandas_wrapper(X, trim_head, trim_tail)\n else:\n return lambda x : x\n\n\ndef _maybe_get_pandas_wrapper_freq(X, trim=None):\n if _is_using_pandas(X, None):\n index = X.index\n func = _get_pandas_wrapper(X, trim)\n freq = index.inferred_freq\n return func, freq\n else:\n return lambda x : x, None\n\n\ndef pandas_wrapper(func, trim_head=None, trim_tail=None, names=None, *args,\n **kwargs):\n @wraps(func)\n def new_func(X, *args, **kwargs):\n # quick pass-through for do nothing case\n if not _is_using_pandas(X, None):\n return func(X, *args, **kwargs)\n\n wrapper_func = _get_pandas_wrapper(X, trim_head, trim_tail,\n names)\n ret = func(X, *args, **kwargs)\n ret = wrapper_func(ret)\n return ret\n\n return new_func\n\n\ndef pandas_wrapper_bunch(func, trim_head=None, trim_tail=None,\n names=None, *args, **kwargs):\n @wraps(func)\n def new_func(X, *args, **kwargs):\n # quick pass-through for do nothing case\n if not _is_using_pandas(X, None):\n return func(X, *args, **kwargs)\n\n wrapper_func = _get_pandas_wrapper(X, trim_head, trim_tail,\n names)\n ret = func(X, *args, **kwargs)\n ret = wrapper_func(ret)\n return ret\n\n return new_func\n\n\ndef pandas_wrapper_predict(func, trim_head=None, trim_tail=None,\n columns=None, *args, **kwargs):\n pass\n\n\ndef pandas_wrapper_freq(func, trim_head=None, trim_tail=None,\n freq_kw='freq', columns=None, *args, **kwargs):\n \"\"\"\n Return a new function that catches the incoming X, checks if it's pandas,\n calls the functions as is. Then wraps the results in the incoming index.\n\n Deals with frequencies. 
Expects that the function returns a tuple,\n a Bunch object, or a pandas-object.\n \"\"\"\n\n @wraps(func)\n def new_func(X, *args, **kwargs):\n # quick pass-through for do nothing case\n if not _is_using_pandas(X, None):\n return func(X, *args, **kwargs)\n\n wrapper_func = _get_pandas_wrapper(X, trim_head, trim_tail,\n columns)\n index = X.index\n freq = index.inferred_freq\n kwargs.update({freq_kw : freq_to_period(freq)})\n ret = func(X, *args, **kwargs)\n ret = wrapper_func(ret)\n return ret\n\n return new_func\n\n\ndef dummy_func(X):\n return X\n\ndef dummy_func_array(X):\n return X.values\n\ndef dummy_func_pandas_columns(X):\n return X.values\n\n\ndef dummy_func_pandas_series(X):\n return X['A']\n\nimport pandas as pd\nimport numpy as np\n\n\ndef test_pandas_freq_decorator():\n X = pd.util.testing.makeDataFrame()\n # in X, get a function back that returns an X with the same columns\n func = pandas_wrapper(dummy_func)\n\n np.testing.assert_equal(func(X.values), X)\n\n func = pandas_wrapper(dummy_func_array)\n pd.util.testing.assert_frame_equal(func(X), X)\n\n expected = X.rename(columns=dict(zip('ABCD', 'EFGH')))\n func = pandas_wrapper(dummy_func_array, names=list('EFGH'))\n pd.util.testing.assert_frame_equal(func(X), expected)\n\n", "path": "statsmodels/tsa/filters/_utils.py"}]} | 3,927 | 386 |
gh_patches_debug_48428 | rasdani/github-patches | git_diff | pytorch__ignite-930 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Metrics objects are not pickleable
Pickling Metrics objects fails due to an infinite loop.
The reason for that is the following:
To make metrics composable, the base class Metric has [methods](https://github.com/pytorch/ignite/blob/master/ignite/metrics/metric.py#L86) that return a `MetricsLambda` object. However, this means that pickling `Metrics` requires pickling `MetricsLambda`, but since `MetricsLambda` depends on `Metrics`, we enter an infinite loop.
--- END ISSUE ---
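For orientation before the file listing below: the infinite loop reported above can be reproduced outside ignite with stripped-down stand-ins for the two classes. The class bodies here are illustrative assumptions rather than ignite's real implementations; only the catch-all `__getattr__` matters, and the exact outcome depends on the Python version, as noted in the comments.

```python
# Illustrative sketch (simplified stand-ins, not ignite's actual code).
# Metric.__getattr__ wraps *any* missing attribute -- including special names
# pickle probes for, such as __getstate__ on Python < 3.11 -- in a fresh
# MetricsLambda that references the metric, so every pickling step creates a
# brand-new object to pickle and the process never terminates.
import pickle


class Metric:
    def __getattr__(self, attr):
        def fn(x, *args, **kwargs):
            return getattr(x, attr)(*args, **kwargs)

        def wrapper(*args, **kwargs):
            return MetricsLambda(fn, self, *args, **kwargs)

        return wrapper


class MetricsLambda(Metric):
    def __init__(self, function, *args):
        self.function = function
        self.args = args


try:
    pickle.dumps(Metric())
    print("pickled fine (object.__getstate__ exists on Python 3.11+)")
except RecursionError:
    print("pickling recursed: __getstate__ -> wrapper -> MetricsLambda -> pickle -> ...")
```

Defining real `__getstate__`/`__setstate__` methods, as the patch further below does, stops `__getattr__` from ever being consulted for those special names.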
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/metrics/metric.py`
Content:
```
1 import numbers
2 from abc import ABCMeta, abstractmethod
3 from functools import wraps
4 from collections.abc import Mapping
5 import warnings
6
7 from typing import Callable, Union, Optional, Any
8
9 import torch
10 import torch.distributed as dist
11
12 from ignite.engine import Events, Engine
13
14 __all__ = ["Metric"]
15
16
17 class Metric(metaclass=ABCMeta):
18 """
19 Base class for all Metrics.
20
21 Args:
22 output_transform (callable, optional): a callable that is used to transform the
23 :class:`~ignite.engine.Engine`'s `process_function`'s output into the
24 form expected by the metric. This can be useful if, for example, you have a multi-output model and
25 you want to compute the metric with respect to one of the outputs.
26 By default, metrics require the output as `(y_pred, y)` or `{'y_pred': y_pred, 'y': y}`.
27 device (str of torch.device, optional): device specification in case of distributed computation usage.
28 In most of the cases, it can be defined as "cuda:local_rank" or "cuda"
29 if already set `torch.cuda.set_device(local_rank)`. By default, if a distributed process group is
30 initialized and available, device is set to `cuda`.
31
32 """
33
34 _required_output_keys = ("y_pred", "y")
35
36 def __init__(self, output_transform: Callable = lambda x: x, device: Optional[Union[str, torch.device]] = None):
37 self._output_transform = output_transform
38
39 # Check device if distributed is initialized:
40 if dist.is_available() and dist.is_initialized():
41
42 # check if reset and update methods are decorated. Compute may not be decorated
43 if not (hasattr(self.reset, "_decorated") and hasattr(self.update, "_decorated")):
44 warnings.warn(
45 "{} class does not support distributed setting. Computed result is not collected "
46 "across all computing devices".format(self.__class__.__name__),
47 RuntimeWarning,
48 )
49 if device is None:
50 device = "cuda"
51 device = torch.device(device)
52 self._device = device
53 self._is_reduced = False
54 self.reset()
55
56 @abstractmethod
57 def reset(self) -> None:
58 """
59         Resets the metric to its initial state.
60
61 This is called at the start of each epoch.
62 """
63 pass
64
65 @abstractmethod
66 def update(self, output) -> None:
67 """
68 Updates the metric's state using the passed batch output.
69
70 This is called once for each batch.
71
72 Args:
73             output: this is the output from the engine's process function.
74 """
75 pass
76
77 @abstractmethod
78 def compute(self) -> Any:
79 """
80         Computes the metric based on its accumulated state.
81
82 This is called at the end of each epoch.
83
84 Returns:
85 Any: the actual quantity of interest.
86
87 Raises:
88 NotComputableError: raised when the metric cannot be computed.
89 """
90 pass
91
92 def _sync_all_reduce(self, tensor: Union[torch.Tensor, numbers.Number]) -> Union[torch.Tensor, numbers.Number]:
93 if not (dist.is_available() and dist.is_initialized()):
94 # Nothing to reduce
95 return tensor
96
97 tensor_to_number = False
98 if isinstance(tensor, numbers.Number):
99 tensor = torch.tensor(tensor, device=self._device)
100 tensor_to_number = True
101
102 if isinstance(tensor, torch.Tensor):
103 # check if the tensor is at specified device
104 if tensor.device != self._device:
105 tensor = tensor.to(self._device)
106 else:
107 raise TypeError("Unhandled input type {}".format(type(tensor)))
108
109 # synchronize and reduce
110 dist.barrier()
111 dist.all_reduce(tensor)
112
113 if tensor_to_number:
114 return tensor.item()
115 return tensor
116
117 def started(self, engine: Engine) -> None:
118 self.reset()
119
120 @torch.no_grad()
121 def iteration_completed(self, engine: Engine) -> None:
122 output = self._output_transform(engine.state.output)
123 if isinstance(output, Mapping):
124 if self._required_output_keys is None:
125 raise TypeError(
126 "Transformed engine output for {} metric should be a tuple/list, but given {}".format(
127 self.__class__.__name__, type(output)
128 )
129 )
130 if not all([k in output for k in self._required_output_keys]):
131 raise ValueError(
132 "When transformed engine's output is a mapping, "
133 "it should contain {} keys, but given {}".format(self._required_output_keys, list(output.keys()))
134 )
135 output = tuple(output[k] for k in self._required_output_keys)
136 self.update(output)
137
138 def completed(self, engine: Engine, name: str) -> None:
139 result = self.compute()
140 if torch.is_tensor(result) and len(result.shape) == 0:
141 result = result.item()
142 engine.state.metrics[name] = result
143
144 def attach(self, engine: Engine, name: str) -> None:
145 """
146 Attaches current metric to provided engine. On the end of engine's run,
147 `engine.state.metrics` dictionary will contain computed metric's value under provided name.
148
149 Args:
150 engine (Engine): the engine to which the metric must be attached
151 name (str): the name of the metric to attach
152
153 Example:
154
155 .. code-block:: python
156
157 metric = ...
158 metric.attach(engine, "mymetric")
159
160 assert "mymetric" in engine.run(data).metrics
161
162 assert metric.is_attached(engine)
163 """
164 engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)
165 if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):
166 engine.add_event_handler(Events.EPOCH_STARTED, self.started)
167 if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):
168 engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)
169
170 def detach(self, engine: Engine) -> None:
171 """
172 Detaches current metric from the engine and no metric's computation is done during the run.
173 This method in conjunction with :meth:`~ignite.metrics.Metric.attach` can be useful if several
174 metrics need to be computed with different periods. For example, one metric is computed every training epoch
175 and another metric (e.g. more expensive one) is done every n-th training epoch.
176
177 Args:
178 engine (Engine): the engine from which the metric must be detached
179
180 Example:
181
182 .. code-block:: python
183
184 metric = ...
185 engine = ...
186 metric.detach(engine)
187
188 assert "mymetric" not in engine.run(data).metrics
189
190 assert not metric.is_attached(engine)
191 """
192 if engine.has_event_handler(self.completed, Events.EPOCH_COMPLETED):
193 engine.remove_event_handler(self.completed, Events.EPOCH_COMPLETED)
194 if engine.has_event_handler(self.started, Events.EPOCH_STARTED):
195 engine.remove_event_handler(self.started, Events.EPOCH_STARTED)
196 if engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):
197 engine.remove_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED)
198
199 def is_attached(self, engine: Engine) -> bool:
200 """
201 Checks if current metric is attached to provided engine. If attached, metric's computed
202 value is written to `engine.state.metrics` dictionary.
203
204 Args:
205 engine (Engine): the engine checked from which the metric should be attached
206 """
207 return engine.has_event_handler(self.completed, Events.EPOCH_COMPLETED)
208
209 def __add__(self, other):
210 from ignite.metrics import MetricsLambda
211
212 return MetricsLambda(lambda x, y: x + y, self, other)
213
214 def __radd__(self, other):
215 from ignite.metrics import MetricsLambda
216
217 return MetricsLambda(lambda x, y: x + y, other, self)
218
219 def __sub__(self, other):
220 from ignite.metrics import MetricsLambda
221
222 return MetricsLambda(lambda x, y: x - y, self, other)
223
224 def __rsub__(self, other):
225 from ignite.metrics import MetricsLambda
226
227 return MetricsLambda(lambda x, y: x - y, other, self)
228
229 def __mul__(self, other):
230 from ignite.metrics import MetricsLambda
231
232 return MetricsLambda(lambda x, y: x * y, self, other)
233
234 def __rmul__(self, other):
235 from ignite.metrics import MetricsLambda
236
237 return MetricsLambda(lambda x, y: x * y, other, self)
238
239 def __pow__(self, other):
240 from ignite.metrics import MetricsLambda
241
242 return MetricsLambda(lambda x, y: x ** y, self, other)
243
244 def __rpow__(self, other):
245 from ignite.metrics import MetricsLambda
246
247 return MetricsLambda(lambda x, y: x ** y, other, self)
248
249 def __mod__(self, other):
250 from ignite.metrics import MetricsLambda
251
252 return MetricsLambda(lambda x, y: x % y, self, other)
253
254 def __div__(self, other):
255 from ignite.metrics import MetricsLambda
256
257 return MetricsLambda(lambda x, y: x.__div__(y), self, other)
258
259 def __rdiv__(self, other):
260 from ignite.metrics import MetricsLambda
261
262 return MetricsLambda(lambda x, y: x.__div__(y), other, self)
263
264 def __truediv__(self, other):
265 from ignite.metrics import MetricsLambda
266
267 return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)
268
269 def __rtruediv__(self, other):
270 from ignite.metrics import MetricsLambda
271
272 return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)
273
274 def __floordiv__(self, other):
275 from ignite.metrics import MetricsLambda
276
277 return MetricsLambda(lambda x, y: x // y, self, other)
278
279 def __getattr__(self, attr: str) -> Callable:
280 from ignite.metrics import MetricsLambda
281
282 def fn(x, *args, **kwargs):
283 return getattr(x, attr)(*args, **kwargs)
284
285 def wrapper(*args, **kwargs):
286 return MetricsLambda(fn, self, *args, **kwargs)
287
288 return wrapper
289
290 def __getitem__(self, index: Any):
291 from ignite.metrics import MetricsLambda
292
293 return MetricsLambda(lambda x: x[index], self)
294
295
296 def sync_all_reduce(*attrs) -> Callable:
297 def wrapper(func: Callable) -> Callable:
298 @wraps(func)
299 def another_wrapper(self: Metric, *args, **kwargs) -> Callable:
300 if not isinstance(self, Metric):
301 raise RuntimeError(
302 "Decorator sync_all_reduce should be used on " "ignite.metric.Metric class methods only"
303 )
304
305 if len(attrs) > 0 and not self._is_reduced:
306 for attr in attrs:
307 t = getattr(self, attr, None)
308 if t is not None:
309 t = self._sync_all_reduce(t)
310 self._is_reduced = True
311 setattr(self, attr, t)
312
313 return func(self, *args, **kwargs)
314
315 return another_wrapper
316
317 wrapper._decorated = True
318 return wrapper
319
320
321 def reinit__is_reduced(func: Callable) -> Callable:
322 @wraps(func)
323 def wrapper(self, *args, **kwargs):
324 func(self, *args, **kwargs)
325 self._is_reduced = False
326
327 wrapper._decorated = True
328 return wrapper
329
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ignite/metrics/metric.py b/ignite/metrics/metric.py
--- a/ignite/metrics/metric.py
+++ b/ignite/metrics/metric.py
@@ -292,6 +292,12 @@
return MetricsLambda(lambda x: x[index], self)
+ def __getstate__(self):
+ return self.__dict__
+
+ def __setstate__(self, d):
+ self.__dict__.update(d)
+
def sync_all_reduce(*attrs) -> Callable:
def wrapper(func: Callable) -> Callable:
| {"golden_diff": "diff --git a/ignite/metrics/metric.py b/ignite/metrics/metric.py\n--- a/ignite/metrics/metric.py\n+++ b/ignite/metrics/metric.py\n@@ -292,6 +292,12 @@\n \n return MetricsLambda(lambda x: x[index], self)\n \n+ def __getstate__(self):\n+ return self.__dict__\n+\n+ def __setstate__(self, d):\n+ self.__dict__.update(d)\n+\n \n def sync_all_reduce(*attrs) -> Callable:\n def wrapper(func: Callable) -> Callable:\n", "issue": "Metrics objects are not pickleable\nPickling Metrics objects fails due to infinite loop.\r\n\r\nThe reason for that is the following:\r\n\r\nTo make metrics composable, the base class Metric has [methods](https://github.com/pytorch/ignite/blob/master/ignite/metrics/metric.py#L86) to return a `MetricsLambda` object. However, this means that pickling `Metrics` requires pickling `MetricsLambda`, but since `MetricsLambda` depends on `Metrics` we enter an infinite loop.\r\n\r\n\n", "before_files": [{"content": "import numbers\nfrom abc import ABCMeta, abstractmethod\nfrom functools import wraps\nfrom collections.abc import Mapping\nimport warnings\n\nfrom typing import Callable, Union, Optional, Any\n\nimport torch\nimport torch.distributed as dist\n\nfrom ignite.engine import Events, Engine\n\n__all__ = [\"Metric\"]\n\n\nclass Metric(metaclass=ABCMeta):\n \"\"\"\n Base class for all Metrics.\n\n Args:\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n By default, metrics require the output as `(y_pred, y)` or `{'y_pred': y_pred, 'y': y}`.\n device (str of torch.device, optional): device specification in case of distributed computation usage.\n In most of the cases, it can be defined as \"cuda:local_rank\" or \"cuda\"\n if already set `torch.cuda.set_device(local_rank)`. By default, if a distributed process group is\n initialized and available, device is set to `cuda`.\n\n \"\"\"\n\n _required_output_keys = (\"y_pred\", \"y\")\n\n def __init__(self, output_transform: Callable = lambda x: x, device: Optional[Union[str, torch.device]] = None):\n self._output_transform = output_transform\n\n # Check device if distributed is initialized:\n if dist.is_available() and dist.is_initialized():\n\n # check if reset and update methods are decorated. Compute may not be decorated\n if not (hasattr(self.reset, \"_decorated\") and hasattr(self.update, \"_decorated\")):\n warnings.warn(\n \"{} class does not support distributed setting. 
Computed result is not collected \"\n \"across all computing devices\".format(self.__class__.__name__),\n RuntimeWarning,\n )\n if device is None:\n device = \"cuda\"\n device = torch.device(device)\n self._device = device\n self._is_reduced = False\n self.reset()\n\n @abstractmethod\n def reset(self) -> None:\n \"\"\"\n Resets the metric to it's initial state.\n\n This is called at the start of each epoch.\n \"\"\"\n pass\n\n @abstractmethod\n def update(self, output) -> None:\n \"\"\"\n Updates the metric's state using the passed batch output.\n\n This is called once for each batch.\n\n Args:\n output: the is the output from the engine's process function.\n \"\"\"\n pass\n\n @abstractmethod\n def compute(self) -> Any:\n \"\"\"\n Computes the metric based on it's accumulated state.\n\n This is called at the end of each epoch.\n\n Returns:\n Any: the actual quantity of interest.\n\n Raises:\n NotComputableError: raised when the metric cannot be computed.\n \"\"\"\n pass\n\n def _sync_all_reduce(self, tensor: Union[torch.Tensor, numbers.Number]) -> Union[torch.Tensor, numbers.Number]:\n if not (dist.is_available() and dist.is_initialized()):\n # Nothing to reduce\n return tensor\n\n tensor_to_number = False\n if isinstance(tensor, numbers.Number):\n tensor = torch.tensor(tensor, device=self._device)\n tensor_to_number = True\n\n if isinstance(tensor, torch.Tensor):\n # check if the tensor is at specified device\n if tensor.device != self._device:\n tensor = tensor.to(self._device)\n else:\n raise TypeError(\"Unhandled input type {}\".format(type(tensor)))\n\n # synchronize and reduce\n dist.barrier()\n dist.all_reduce(tensor)\n\n if tensor_to_number:\n return tensor.item()\n return tensor\n\n def started(self, engine: Engine) -> None:\n self.reset()\n\n @torch.no_grad()\n def iteration_completed(self, engine: Engine) -> None:\n output = self._output_transform(engine.state.output)\n if isinstance(output, Mapping):\n if self._required_output_keys is None:\n raise TypeError(\n \"Transformed engine output for {} metric should be a tuple/list, but given {}\".format(\n self.__class__.__name__, type(output)\n )\n )\n if not all([k in output for k in self._required_output_keys]):\n raise ValueError(\n \"When transformed engine's output is a mapping, \"\n \"it should contain {} keys, but given {}\".format(self._required_output_keys, list(output.keys()))\n )\n output = tuple(output[k] for k in self._required_output_keys)\n self.update(output)\n\n def completed(self, engine: Engine, name: str) -> None:\n result = self.compute()\n if torch.is_tensor(result) and len(result.shape) == 0:\n result = result.item()\n engine.state.metrics[name] = result\n\n def attach(self, engine: Engine, name: str) -> None:\n \"\"\"\n Attaches current metric to provided engine. On the end of engine's run,\n `engine.state.metrics` dictionary will contain computed metric's value under provided name.\n\n Args:\n engine (Engine): the engine to which the metric must be attached\n name (str): the name of the metric to attach\n\n Example:\n\n .. 
code-block:: python\n\n metric = ...\n metric.attach(engine, \"mymetric\")\n\n assert \"mymetric\" in engine.run(data).metrics\n\n assert metric.is_attached(engine)\n \"\"\"\n engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)\n if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n\n def detach(self, engine: Engine) -> None:\n \"\"\"\n Detaches current metric from the engine and no metric's computation is done during the run.\n This method in conjunction with :meth:`~ignite.metrics.Metric.attach` can be useful if several\n metrics need to be computed with different periods. For example, one metric is computed every training epoch\n and another metric (e.g. more expensive one) is done every n-th training epoch.\n\n Args:\n engine (Engine): the engine from which the metric must be detached\n\n Example:\n\n .. code-block:: python\n\n metric = ...\n engine = ...\n metric.detach(engine)\n\n assert \"mymetric\" not in engine.run(data).metrics\n\n assert not metric.is_attached(engine)\n \"\"\"\n if engine.has_event_handler(self.completed, Events.EPOCH_COMPLETED):\n engine.remove_event_handler(self.completed, Events.EPOCH_COMPLETED)\n if engine.has_event_handler(self.started, Events.EPOCH_STARTED):\n engine.remove_event_handler(self.started, Events.EPOCH_STARTED)\n if engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n engine.remove_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED)\n\n def is_attached(self, engine: Engine) -> bool:\n \"\"\"\n Checks if current metric is attached to provided engine. 
If attached, metric's computed\n value is written to `engine.state.metrics` dictionary.\n\n Args:\n engine (Engine): the engine checked from which the metric should be attached\n \"\"\"\n return engine.has_event_handler(self.completed, Events.EPOCH_COMPLETED)\n\n def __add__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x + y, self, other)\n\n def __radd__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x + y, other, self)\n\n def __sub__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x - y, self, other)\n\n def __rsub__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x - y, other, self)\n\n def __mul__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x * y, self, other)\n\n def __rmul__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x * y, other, self)\n\n def __pow__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x ** y, self, other)\n\n def __rpow__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x ** y, other, self)\n\n def __mod__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x % y, self, other)\n\n def __div__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x.__div__(y), self, other)\n\n def __rdiv__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x.__div__(y), other, self)\n\n def __truediv__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)\n\n def __rtruediv__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)\n\n def __floordiv__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x // y, self, other)\n\n def __getattr__(self, attr: str) -> Callable:\n from ignite.metrics import MetricsLambda\n\n def fn(x, *args, **kwargs):\n return getattr(x, attr)(*args, **kwargs)\n\n def wrapper(*args, **kwargs):\n return MetricsLambda(fn, self, *args, **kwargs)\n\n return wrapper\n\n def __getitem__(self, index: Any):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x: x[index], self)\n\n\ndef sync_all_reduce(*attrs) -> Callable:\n def wrapper(func: Callable) -> Callable:\n @wraps(func)\n def another_wrapper(self: Metric, *args, **kwargs) -> Callable:\n if not isinstance(self, Metric):\n raise RuntimeError(\n \"Decorator sync_all_reduce should be used on \" \"ignite.metric.Metric class methods only\"\n )\n\n if len(attrs) > 0 and not self._is_reduced:\n for attr in attrs:\n t = getattr(self, attr, None)\n if t is not None:\n t = self._sync_all_reduce(t)\n self._is_reduced = True\n setattr(self, attr, t)\n\n return func(self, *args, **kwargs)\n\n return another_wrapper\n\n wrapper._decorated = True\n return wrapper\n\n\ndef reinit__is_reduced(func: Callable) -> Callable:\n @wraps(func)\n def wrapper(self, *args, **kwargs):\n func(self, *args, **kwargs)\n self._is_reduced = False\n\n wrapper._decorated = True\n return wrapper\n", "path": "ignite/metrics/metric.py"}], "after_files": [{"content": "import numbers\nfrom abc import ABCMeta, abstractmethod\nfrom functools 
import wraps\nfrom collections.abc import Mapping\nimport warnings\n\nfrom typing import Callable, Union, Optional, Any\n\nimport torch\nimport torch.distributed as dist\n\nfrom ignite.engine import Events, Engine\n\n__all__ = [\"Metric\"]\n\n\nclass Metric(metaclass=ABCMeta):\n \"\"\"\n Base class for all Metrics.\n\n Args:\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n By default, metrics require the output as `(y_pred, y)` or `{'y_pred': y_pred, 'y': y}`.\n device (str of torch.device, optional): device specification in case of distributed computation usage.\n In most of the cases, it can be defined as \"cuda:local_rank\" or \"cuda\"\n if already set `torch.cuda.set_device(local_rank)`. By default, if a distributed process group is\n initialized and available, device is set to `cuda`.\n\n \"\"\"\n\n _required_output_keys = (\"y_pred\", \"y\")\n\n def __init__(self, output_transform: Callable = lambda x: x, device: Optional[Union[str, torch.device]] = None):\n self._output_transform = output_transform\n\n # Check device if distributed is initialized:\n if dist.is_available() and dist.is_initialized():\n\n # check if reset and update methods are decorated. Compute may not be decorated\n if not (hasattr(self.reset, \"_decorated\") and hasattr(self.update, \"_decorated\")):\n warnings.warn(\n \"{} class does not support distributed setting. Computed result is not collected \"\n \"across all computing devices\".format(self.__class__.__name__),\n RuntimeWarning,\n )\n if device is None:\n device = \"cuda\"\n device = torch.device(device)\n self._device = device\n self._is_reduced = False\n self.reset()\n\n @abstractmethod\n def reset(self) -> None:\n \"\"\"\n Resets the metric to it's initial state.\n\n This is called at the start of each epoch.\n \"\"\"\n pass\n\n @abstractmethod\n def update(self, output) -> None:\n \"\"\"\n Updates the metric's state using the passed batch output.\n\n This is called once for each batch.\n\n Args:\n output: the is the output from the engine's process function.\n \"\"\"\n pass\n\n @abstractmethod\n def compute(self) -> Any:\n \"\"\"\n Computes the metric based on it's accumulated state.\n\n This is called at the end of each epoch.\n\n Returns:\n Any: the actual quantity of interest.\n\n Raises:\n NotComputableError: raised when the metric cannot be computed.\n \"\"\"\n pass\n\n def _sync_all_reduce(self, tensor: Union[torch.Tensor, numbers.Number]) -> Union[torch.Tensor, numbers.Number]:\n if not (dist.is_available() and dist.is_initialized()):\n # Nothing to reduce\n return tensor\n\n tensor_to_number = False\n if isinstance(tensor, numbers.Number):\n tensor = torch.tensor(tensor, device=self._device)\n tensor_to_number = True\n\n if isinstance(tensor, torch.Tensor):\n # check if the tensor is at specified device\n if tensor.device != self._device:\n tensor = tensor.to(self._device)\n else:\n raise TypeError(\"Unhandled input type {}\".format(type(tensor)))\n\n # synchronize and reduce\n dist.barrier()\n dist.all_reduce(tensor)\n\n if tensor_to_number:\n return tensor.item()\n return tensor\n\n def started(self, engine: Engine) -> None:\n self.reset()\n\n @torch.no_grad()\n def iteration_completed(self, engine: Engine) -> None:\n output = self._output_transform(engine.state.output)\n 
if isinstance(output, Mapping):\n if self._required_output_keys is None:\n raise TypeError(\n \"Transformed engine output for {} metric should be a tuple/list, but given {}\".format(\n self.__class__.__name__, type(output)\n )\n )\n if not all([k in output for k in self._required_output_keys]):\n raise ValueError(\n \"When transformed engine's output is a mapping, \"\n \"it should contain {} keys, but given {}\".format(self._required_output_keys, list(output.keys()))\n )\n output = tuple(output[k] for k in self._required_output_keys)\n self.update(output)\n\n def completed(self, engine: Engine, name: str) -> None:\n result = self.compute()\n if torch.is_tensor(result) and len(result.shape) == 0:\n result = result.item()\n engine.state.metrics[name] = result\n\n def attach(self, engine: Engine, name: str) -> None:\n \"\"\"\n Attaches current metric to provided engine. On the end of engine's run,\n `engine.state.metrics` dictionary will contain computed metric's value under provided name.\n\n Args:\n engine (Engine): the engine to which the metric must be attached\n name (str): the name of the metric to attach\n\n Example:\n\n .. code-block:: python\n\n metric = ...\n metric.attach(engine, \"mymetric\")\n\n assert \"mymetric\" in engine.run(data).metrics\n\n assert metric.is_attached(engine)\n \"\"\"\n engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)\n if not engine.has_event_handler(self.started, Events.EPOCH_STARTED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n if not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n\n def detach(self, engine: Engine) -> None:\n \"\"\"\n Detaches current metric from the engine and no metric's computation is done during the run.\n This method in conjunction with :meth:`~ignite.metrics.Metric.attach` can be useful if several\n metrics need to be computed with different periods. For example, one metric is computed every training epoch\n and another metric (e.g. more expensive one) is done every n-th training epoch.\n\n Args:\n engine (Engine): the engine from which the metric must be detached\n\n Example:\n\n .. code-block:: python\n\n metric = ...\n engine = ...\n metric.detach(engine)\n\n assert \"mymetric\" not in engine.run(data).metrics\n\n assert not metric.is_attached(engine)\n \"\"\"\n if engine.has_event_handler(self.completed, Events.EPOCH_COMPLETED):\n engine.remove_event_handler(self.completed, Events.EPOCH_COMPLETED)\n if engine.has_event_handler(self.started, Events.EPOCH_STARTED):\n engine.remove_event_handler(self.started, Events.EPOCH_STARTED)\n if engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n engine.remove_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED)\n\n def is_attached(self, engine: Engine) -> bool:\n \"\"\"\n Checks if current metric is attached to provided engine. 
If attached, metric's computed\n value is written to `engine.state.metrics` dictionary.\n\n Args:\n engine (Engine): the engine checked from which the metric should be attached\n \"\"\"\n return engine.has_event_handler(self.completed, Events.EPOCH_COMPLETED)\n\n def __add__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x + y, self, other)\n\n def __radd__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x + y, other, self)\n\n def __sub__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x - y, self, other)\n\n def __rsub__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x - y, other, self)\n\n def __mul__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x * y, self, other)\n\n def __rmul__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x * y, other, self)\n\n def __pow__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x ** y, self, other)\n\n def __rpow__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x ** y, other, self)\n\n def __mod__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x % y, self, other)\n\n def __div__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x.__div__(y), self, other)\n\n def __rdiv__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x.__div__(y), other, self)\n\n def __truediv__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x.__truediv__(y), self, other)\n\n def __rtruediv__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x.__truediv__(y), other, self)\n\n def __floordiv__(self, other):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x, y: x // y, self, other)\n\n def __getattr__(self, attr: str) -> Callable:\n from ignite.metrics import MetricsLambda\n\n def fn(x, *args, **kwargs):\n return getattr(x, attr)(*args, **kwargs)\n\n def wrapper(*args, **kwargs):\n return MetricsLambda(fn, self, *args, **kwargs)\n\n return wrapper\n\n def __getitem__(self, index: Any):\n from ignite.metrics import MetricsLambda\n\n return MetricsLambda(lambda x: x[index], self)\n\n def __getstate__(self):\n return self.__dict__\n\n def __setstate__(self, d):\n self.__dict__.update(d)\n\n\ndef sync_all_reduce(*attrs) -> Callable:\n def wrapper(func: Callable) -> Callable:\n @wraps(func)\n def another_wrapper(self: Metric, *args, **kwargs) -> Callable:\n if not isinstance(self, Metric):\n raise RuntimeError(\n \"Decorator sync_all_reduce should be used on \" \"ignite.metric.Metric class methods only\"\n )\n\n if len(attrs) > 0 and not self._is_reduced:\n for attr in attrs:\n t = getattr(self, attr, None)\n if t is not None:\n t = self._sync_all_reduce(t)\n self._is_reduced = True\n setattr(self, attr, t)\n\n return func(self, *args, **kwargs)\n\n return another_wrapper\n\n wrapper._decorated = True\n return wrapper\n\n\ndef reinit__is_reduced(func: Callable) -> Callable:\n @wraps(func)\n def wrapper(self, *args, **kwargs):\n func(self, *args, **kwargs)\n self._is_reduced = False\n\n wrapper._decorated = True\n return wrapper\n", "path": 
"ignite/metrics/metric.py"}]} | 3,738 | 128 |
gh_patches_debug_37541 | rasdani/github-patches | git_diff | aws__aws-cli-1039 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
aws cloudwatch put-metric-data no longer working with statistic sets
One of our automated scripts stopped reporting data a few weeks ago - we've traced this to a newer version of the AWS CLI.
In fact, the documented example for how to publish statistic sets (http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/publishingMetrics.html#publishingDataPoints1) fails with the same error that we are getting.
```
$ aws cloudwatch put-metric-data --metric-name PageViewCount --namespace "MyService" --statistic-value Sum=11,Minimum=2,Maximum=5,SampleCount=3 --timestamp 2014-02-14T12:00:00.000Z
Parameter validation failed:
Invalid type for parameter MetricData[0].StatisticValues.SampleCount, value: 3, type: <type 'unicode'>, valid types: <type 'float'>, <class 'decimal.Decimal'>, <type 'int'>, <type 'long'>
Invalid type for parameter MetricData[0].StatisticValues.Sum, value: 11, type: <type 'unicode'>, valid types: <type 'float'>, <class 'decimal.Decimal'>, <type 'int'>, <type 'long'>
Invalid type for parameter MetricData[0].StatisticValues.Minimum, value: 2, type: <type 'unicode'>, valid types: <type 'float'>, <class 'decimal.Decimal'>, <type 'int'>, <type 'long'>
Invalid type for parameter MetricData[0].StatisticValues.Maximum, value: 5, type: <type 'unicode'>, valid types: <type 'float'>, <class 'decimal.Decimal'>, <type 'int'>, <type 'long'>
```
--- END ISSUE ---
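As a quick orientation before the file listing below: the values parsed out of `--statistic-values` remain plain strings unless they are converted explicitly, which is exactly what the validation error above is complaining about. The sketch below is an editorial illustration, not awscli code; the helper name is hypothetical and plain `str.split` stands in for awscli's `split_on_commas`.

```python
# Editorial sketch of the parsing problem -- not awscli's actual code.
# "Sum=11,Minimum=2,..." is split into key/value *strings*; botocore's
# parameter validation then rejects them because StatisticValues members
# must be numeric. Converting with decimal.Decimal (as the CLI already
# does for --value) preserves precision and satisfies the validator.
import decimal


def parse_statistic_values(raw):              # hypothetical helper name
    statistics = {}
    for pair in raw.split(","):               # awscli uses split_on_commas()
        key, value = pair.split("=")
        statistics[key] = decimal.Decimal(value)  # without this cast: str
    return statistics


print(parse_statistic_values("Sum=11,Minimum=2,Maximum=5,SampleCount=3"))
# {'Sum': Decimal('11'), 'Minimum': Decimal('2'),
#  'Maximum': Decimal('5'), 'SampleCount': Decimal('3')}
```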
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/putmetricdata.py`
Content:
```
1 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 """
14 This customization adds the following scalar parameters to the
15 cloudwatch put-metric-data operation:
16
17 * --metric-name
18 * --dimensions
19 * --timestamp
20 * --value
21 * --statistic-values
22 * --unit
23
24 """
25 import decimal
26
27 from awscli.arguments import CustomArgument
28 from awscli.utils import split_on_commas
29 from awscli.customizations.utils import validate_mutually_exclusive_handler
30
31
32 def register_put_metric_data(event_handler):
33 event_handler.register('building-argument-table.cloudwatch.put-metric-data',
34 _promote_args)
35 event_handler.register(
36 'operation-args-parsed.cloudwatch.put-metric-data',
37 validate_mutually_exclusive_handler(
38 ['metric_data'], ['metric_name', 'timestamp', 'unit', 'value',
39 'dimensions', 'statistic_values']))
40
41
42 def _promote_args(argument_table, operation, **kwargs):
43 # We're providing top level params for metric-data. This means
44     # that metric-data is no longer a required arg. We do need
45 # to check that either metric-data or the complex args we've added
46 # have been provided.
47 argument_table['metric-data'].required = False
48
49 argument_table['metric-name'] = PutMetricArgument(
50 'metric-name', help_text='The name of the metric.')
51 argument_table['timestamp'] = PutMetricArgument(
52 'timestamp', help_text='The time stamp used for the metric. '
53 'If not specified, the default value is '
54 'set to the time the metric data was '
55 'received.')
56 argument_table['unit'] = PutMetricArgument(
57 'unit', help_text='The unit of metric.')
58 argument_table['value'] = PutMetricArgument(
59 'value', help_text='The value for the metric. Although the --value '
60 'parameter accepts numbers of type Double, '
61 'Amazon CloudWatch truncates values with very '
62 'large exponents. Values with base-10 exponents '
63 'greater than 126 (1 x 10^126) are truncated. '
64 'Likewise, values with base-10 exponents less '
65 'than -130 (1 x 10^-130) are also truncated.')
66
67 argument_table['dimensions'] = PutMetricArgument(
68 'dimensions', help_text=(
69 'The --dimension argument further expands '
70 'on the identity of a metric using a Name=Value'
71 'pair, separated by commas, for example: '
72 '<code>--dimensions User=SomeUser,Stack=Test</code>'))
73 argument_table['statistic-values'] = PutMetricArgument(
74 'statistic-values', help_text='A set of statistical values describing '
75 'the metric.')
76
77
78 def insert_first_element(name):
79 def _wrap_add_to_params(func):
80 def _add_to_params(self, parameters, value):
81 if value is None:
82 return
83 if name not in parameters:
84 # We're taking a shortcut here and assuming that the first
85 # element is a struct type, hence the default value of
86 # a dict. If this was going to be more general we'd need
87             # to have this parameterized, i.e. you pass in some sort of
88 # factory function that creates the initial starting value.
89 parameters[name] = [{}]
90 first_element = parameters[name][0]
91 return func(self, first_element, value)
92 return _add_to_params
93 return _wrap_add_to_params
94
95
96 class PutMetricArgument(CustomArgument):
97 def add_to_params(self, parameters, value):
98 method_name = '_add_param_%s' % self.name.replace('-', '_')
99 return getattr(self, method_name)(parameters, value)
100
101 @insert_first_element('metric_data')
102 def _add_param_metric_name(self, first_element, value):
103 first_element['MetricName'] = value
104
105 @insert_first_element('metric_data')
106 def _add_param_unit(self, first_element, value):
107 first_element['Unit'] = value
108
109 @insert_first_element('metric_data')
110 def _add_param_timestamp(self, first_element, value):
111 first_element['Timestamp'] = value
112
113 @insert_first_element('metric_data')
114 def _add_param_value(self, first_element, value):
115 # Use a Decimal to avoid loss in precision.
116 first_element['Value'] = decimal.Decimal(value)
117
118 @insert_first_element('metric_data')
119 def _add_param_dimensions(self, first_element, value):
120 # Dimensions needs a little more processing. We support
121 # the key=value,key2=value syntax so we need to parse
122 # that.
123 dimensions = []
124 for pair in split_on_commas(value):
125 key, value = pair.split('=')
126 dimensions.append({'Name': key, 'Value': value})
127 first_element['Dimensions'] = dimensions
128
129 @insert_first_element('metric_data')
130 def _add_param_statistic_values(self, first_element, value):
131 # StatisticValues is a struct type so we are parsing
132 # a csv keyval list into a dict.
133 statistics = {}
134 for pair in split_on_commas(value):
135 key, value = pair.split('=')
136 statistics[key] = value
137 first_element['StatisticValues'] = statistics
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/awscli/customizations/putmetricdata.py b/awscli/customizations/putmetricdata.py
--- a/awscli/customizations/putmetricdata.py
+++ b/awscli/customizations/putmetricdata.py
@@ -98,24 +98,24 @@
method_name = '_add_param_%s' % self.name.replace('-', '_')
return getattr(self, method_name)(parameters, value)
- @insert_first_element('metric_data')
+ @insert_first_element('MetricData')
def _add_param_metric_name(self, first_element, value):
first_element['MetricName'] = value
- @insert_first_element('metric_data')
+ @insert_first_element('MetricData')
def _add_param_unit(self, first_element, value):
first_element['Unit'] = value
- @insert_first_element('metric_data')
+ @insert_first_element('MetricData')
def _add_param_timestamp(self, first_element, value):
first_element['Timestamp'] = value
- @insert_first_element('metric_data')
+ @insert_first_element('MetricData')
def _add_param_value(self, first_element, value):
# Use a Decimal to avoid loss in precision.
first_element['Value'] = decimal.Decimal(value)
- @insert_first_element('metric_data')
+ @insert_first_element('MetricData')
def _add_param_dimensions(self, first_element, value):
# Dimensions needs a little more processing. We support
# the key=value,key2=value syntax so we need to parse
@@ -126,12 +126,15 @@
dimensions.append({'Name': key, 'Value': value})
first_element['Dimensions'] = dimensions
- @insert_first_element('metric_data')
+ @insert_first_element('MetricData')
def _add_param_statistic_values(self, first_element, value):
# StatisticValues is a struct type so we are parsing
# a csv keyval list into a dict.
statistics = {}
for pair in split_on_commas(value):
key, value = pair.split('=')
- statistics[key] = value
+ # There are four supported values: Maximum, Minimum, SampleCount,
+ # and Sum. All of them are documented as a type double so we can
+ # convert these to a decimal value to preserve precision.
+ statistics[key] = decimal.Decimal(value)
first_element['StatisticValues'] = statistics
| {"golden_diff": "diff --git a/awscli/customizations/putmetricdata.py b/awscli/customizations/putmetricdata.py\n--- a/awscli/customizations/putmetricdata.py\n+++ b/awscli/customizations/putmetricdata.py\n@@ -98,24 +98,24 @@\n method_name = '_add_param_%s' % self.name.replace('-', '_')\n return getattr(self, method_name)(parameters, value)\n \n- @insert_first_element('metric_data')\n+ @insert_first_element('MetricData')\n def _add_param_metric_name(self, first_element, value):\n first_element['MetricName'] = value\n \n- @insert_first_element('metric_data')\n+ @insert_first_element('MetricData')\n def _add_param_unit(self, first_element, value):\n first_element['Unit'] = value\n \n- @insert_first_element('metric_data')\n+ @insert_first_element('MetricData')\n def _add_param_timestamp(self, first_element, value):\n first_element['Timestamp'] = value\n \n- @insert_first_element('metric_data')\n+ @insert_first_element('MetricData')\n def _add_param_value(self, first_element, value):\n # Use a Decimal to avoid loss in precision.\n first_element['Value'] = decimal.Decimal(value)\n \n- @insert_first_element('metric_data')\n+ @insert_first_element('MetricData')\n def _add_param_dimensions(self, first_element, value):\n # Dimensions needs a little more processing. We support\n # the key=value,key2=value syntax so we need to parse\n@@ -126,12 +126,15 @@\n dimensions.append({'Name': key, 'Value': value})\n first_element['Dimensions'] = dimensions\n \n- @insert_first_element('metric_data')\n+ @insert_first_element('MetricData')\n def _add_param_statistic_values(self, first_element, value):\n # StatisticValues is a struct type so we are parsing\n # a csv keyval list into a dict.\n statistics = {}\n for pair in split_on_commas(value):\n key, value = pair.split('=')\n- statistics[key] = value\n+ # There are four supported values: Maximum, Minimum, SampleCount,\n+ # and Sum. 
All of them are documented as a type double so we can\n+ # convert these to a decimal value to preserve precision.\n+ statistics[key] = decimal.Decimal(value)\n first_element['StatisticValues'] = statistics\n", "issue": "aws cloudwatch put-metric-data no longer working with statistic sets\nOne of our automated scripts stopped reporting data a few weeks ago - we've traced this to a newer version of the AWS CLI.\n\nIn fact, the documented example for how to publish statistic sets (http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/publishingMetrics.html#publishingDataPoints1) fails with the same error that we are getting.\n\n```\n$ aws cloudwatch put-metric-data --metric-name PageViewCount --namespace \"MyService\" --statistic-value Sum=11,Minimum=2,Maximum=5,SampleCount=3 --timestamp 2014-02-14T12:00:00.000Z\n\nParameter validation failed:\nInvalid type for parameter MetricData[0].StatisticValues.SampleCount, value: 3, type: <type 'unicode'>, valid types: <type 'float'>, <class 'decimal.Decimal'>, <type 'int'>, <type 'long'>\nInvalid type for parameter MetricData[0].StatisticValues.Sum, value: 11, type: <type 'unicode'>, valid types: <type 'float'>, <class 'decimal.Decimal'>, <type 'int'>, <type 'long'>\nInvalid type for parameter MetricData[0].StatisticValues.Minimum, value: 2, type: <type 'unicode'>, valid types: <type 'float'>, <class 'decimal.Decimal'>, <type 'int'>, <type 'long'>\nInvalid type for parameter MetricData[0].StatisticValues.Maximum, value: 5, type: <type 'unicode'>, valid types: <type 'float'>, <class 'decimal.Decimal'>, <type 'int'>, <type 'long'>\n```\n\n", "before_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"\nThis customization adds the following scalar parameters to the\ncloudwatch put-metric-data operation:\n\n* --metric-name\n* --dimensions\n* --timestamp\n* --value\n* --statistic-values\n* --unit\n\n\"\"\"\nimport decimal\n\nfrom awscli.arguments import CustomArgument\nfrom awscli.utils import split_on_commas\nfrom awscli.customizations.utils import validate_mutually_exclusive_handler\n\n\ndef register_put_metric_data(event_handler):\n event_handler.register('building-argument-table.cloudwatch.put-metric-data',\n _promote_args)\n event_handler.register(\n 'operation-args-parsed.cloudwatch.put-metric-data',\n validate_mutually_exclusive_handler(\n ['metric_data'], ['metric_name', 'timestamp', 'unit', 'value',\n 'dimensions', 'statistic_values']))\n\n\ndef _promote_args(argument_table, operation, **kwargs):\n # We're providing top level params for metric-data. This means\n # that metric-data is now longer a required arg. We do need\n # to check that either metric-data or the complex args we've added\n # have been provided.\n argument_table['metric-data'].required = False\n\n argument_table['metric-name'] = PutMetricArgument(\n 'metric-name', help_text='The name of the metric.')\n argument_table['timestamp'] = PutMetricArgument(\n 'timestamp', help_text='The time stamp used for the metric. 
'\n 'If not specified, the default value is '\n 'set to the time the metric data was '\n 'received.')\n argument_table['unit'] = PutMetricArgument(\n 'unit', help_text='The unit of metric.')\n argument_table['value'] = PutMetricArgument(\n 'value', help_text='The value for the metric. Although the --value '\n 'parameter accepts numbers of type Double, '\n 'Amazon CloudWatch truncates values with very '\n 'large exponents. Values with base-10 exponents '\n 'greater than 126 (1 x 10^126) are truncated. '\n 'Likewise, values with base-10 exponents less '\n 'than -130 (1 x 10^-130) are also truncated.')\n\n argument_table['dimensions'] = PutMetricArgument(\n 'dimensions', help_text=(\n 'The --dimension argument further expands '\n 'on the identity of a metric using a Name=Value'\n 'pair, separated by commas, for example: '\n '<code>--dimensions User=SomeUser,Stack=Test</code>'))\n argument_table['statistic-values'] = PutMetricArgument(\n 'statistic-values', help_text='A set of statistical values describing '\n 'the metric.')\n\n\ndef insert_first_element(name):\n def _wrap_add_to_params(func):\n def _add_to_params(self, parameters, value):\n if value is None:\n return\n if name not in parameters:\n # We're taking a shortcut here and assuming that the first\n # element is a struct type, hence the default value of\n # a dict. If this was going to be more general we'd need\n # to have this paramterized, i.e. you pass in some sort of\n # factory function that creates the initial starting value.\n parameters[name] = [{}]\n first_element = parameters[name][0]\n return func(self, first_element, value)\n return _add_to_params\n return _wrap_add_to_params\n\n\nclass PutMetricArgument(CustomArgument):\n def add_to_params(self, parameters, value):\n method_name = '_add_param_%s' % self.name.replace('-', '_')\n return getattr(self, method_name)(parameters, value)\n\n @insert_first_element('metric_data')\n def _add_param_metric_name(self, first_element, value):\n first_element['MetricName'] = value\n\n @insert_first_element('metric_data')\n def _add_param_unit(self, first_element, value):\n first_element['Unit'] = value\n\n @insert_first_element('metric_data')\n def _add_param_timestamp(self, first_element, value):\n first_element['Timestamp'] = value\n\n @insert_first_element('metric_data')\n def _add_param_value(self, first_element, value):\n # Use a Decimal to avoid loss in precision.\n first_element['Value'] = decimal.Decimal(value)\n\n @insert_first_element('metric_data')\n def _add_param_dimensions(self, first_element, value):\n # Dimensions needs a little more processing. We support\n # the key=value,key2=value syntax so we need to parse\n # that.\n dimensions = []\n for pair in split_on_commas(value):\n key, value = pair.split('=')\n dimensions.append({'Name': key, 'Value': value})\n first_element['Dimensions'] = dimensions\n\n @insert_first_element('metric_data')\n def _add_param_statistic_values(self, first_element, value):\n # StatisticValues is a struct type so we are parsing\n # a csv keyval list into a dict.\n statistics = {}\n for pair in split_on_commas(value):\n key, value = pair.split('=')\n statistics[key] = value\n first_element['StatisticValues'] = statistics\n", "path": "awscli/customizations/putmetricdata.py"}], "after_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. 
A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"\nThis customization adds the following scalar parameters to the\ncloudwatch put-metric-data operation:\n\n* --metric-name\n* --dimensions\n* --timestamp\n* --value\n* --statistic-values\n* --unit\n\n\"\"\"\nimport decimal\n\nfrom awscli.arguments import CustomArgument\nfrom awscli.utils import split_on_commas\nfrom awscli.customizations.utils import validate_mutually_exclusive_handler\n\n\ndef register_put_metric_data(event_handler):\n event_handler.register('building-argument-table.cloudwatch.put-metric-data',\n _promote_args)\n event_handler.register(\n 'operation-args-parsed.cloudwatch.put-metric-data',\n validate_mutually_exclusive_handler(\n ['metric_data'], ['metric_name', 'timestamp', 'unit', 'value',\n 'dimensions', 'statistic_values']))\n\n\ndef _promote_args(argument_table, operation, **kwargs):\n # We're providing top level params for metric-data. This means\n # that metric-data is now longer a required arg. We do need\n # to check that either metric-data or the complex args we've added\n # have been provided.\n argument_table['metric-data'].required = False\n\n argument_table['metric-name'] = PutMetricArgument(\n 'metric-name', help_text='The name of the metric.')\n argument_table['timestamp'] = PutMetricArgument(\n 'timestamp', help_text='The time stamp used for the metric. '\n 'If not specified, the default value is '\n 'set to the time the metric data was '\n 'received.')\n argument_table['unit'] = PutMetricArgument(\n 'unit', help_text='The unit of metric.')\n argument_table['value'] = PutMetricArgument(\n 'value', help_text='The value for the metric. Although the --value '\n 'parameter accepts numbers of type Double, '\n 'Amazon CloudWatch truncates values with very '\n 'large exponents. Values with base-10 exponents '\n 'greater than 126 (1 x 10^126) are truncated. '\n 'Likewise, values with base-10 exponents less '\n 'than -130 (1 x 10^-130) are also truncated.')\n\n argument_table['dimensions'] = PutMetricArgument(\n 'dimensions', help_text=(\n 'The --dimension argument further expands '\n 'on the identity of a metric using a Name=Value'\n 'pair, separated by commas, for example: '\n '<code>--dimensions User=SomeUser,Stack=Test</code>'))\n argument_table['statistic-values'] = PutMetricArgument(\n 'statistic-values', help_text='A set of statistical values describing '\n 'the metric.')\n\n\ndef insert_first_element(name):\n def _wrap_add_to_params(func):\n def _add_to_params(self, parameters, value):\n if value is None:\n return\n if name not in parameters:\n # We're taking a shortcut here and assuming that the first\n # element is a struct type, hence the default value of\n # a dict. If this was going to be more general we'd need\n # to have this paramterized, i.e. 
you pass in some sort of\n # factory function that creates the initial starting value.\n parameters[name] = [{}]\n first_element = parameters[name][0]\n return func(self, first_element, value)\n return _add_to_params\n return _wrap_add_to_params\n\n\nclass PutMetricArgument(CustomArgument):\n def add_to_params(self, parameters, value):\n method_name = '_add_param_%s' % self.name.replace('-', '_')\n return getattr(self, method_name)(parameters, value)\n\n @insert_first_element('MetricData')\n def _add_param_metric_name(self, first_element, value):\n first_element['MetricName'] = value\n\n @insert_first_element('MetricData')\n def _add_param_unit(self, first_element, value):\n first_element['Unit'] = value\n\n @insert_first_element('MetricData')\n def _add_param_timestamp(self, first_element, value):\n first_element['Timestamp'] = value\n\n @insert_first_element('MetricData')\n def _add_param_value(self, first_element, value):\n # Use a Decimal to avoid loss in precision.\n first_element['Value'] = decimal.Decimal(value)\n\n @insert_first_element('MetricData')\n def _add_param_dimensions(self, first_element, value):\n # Dimensions needs a little more processing. We support\n # the key=value,key2=value syntax so we need to parse\n # that.\n dimensions = []\n for pair in split_on_commas(value):\n key, value = pair.split('=')\n dimensions.append({'Name': key, 'Value': value})\n first_element['Dimensions'] = dimensions\n\n @insert_first_element('MetricData')\n def _add_param_statistic_values(self, first_element, value):\n # StatisticValues is a struct type so we are parsing\n # a csv keyval list into a dict.\n statistics = {}\n for pair in split_on_commas(value):\n key, value = pair.split('=')\n # There are four supported values: Maximum, Minimum, SampleCount,\n # and Sum. All of them are documented as a type double so we can\n # convert these to a decimal value to preserve precision.\n statistics[key] = decimal.Decimal(value)\n first_element['StatisticValues'] = statistics\n", "path": "awscli/customizations/putmetricdata.py"}]} | 2,219 | 535 |
gh_patches_debug_34183 | rasdani/github-patches | git_diff | sonic-net__sonic-mgmt-4352 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Investigate RDMA nightly run failures on 202012
<!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/Azure/SONiC/wiki#report-issues
---------------------------------------------------
GENERAL SUPPORT INFORMATION
---------------------------------------------------
The GitHub issue tracker is for bug reports and feature requests.
General support can be found at the following locations:
- SONiC Support Forums - https://groups.google.com/forum/#!forum/sonicproject
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
**Description**
RDMA test runs on TD2 with 202012 are quite flaky. Different sets of test failures are seen daily, and sometimes the test fails at pretest.
The 09/09 run skipped all tgen tests with the following reason:
SKIPPED [1] /azp/agent/_work/27/s/tests/common/helpers/assertions.py:13: Port is not mapped to the expected DUT
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ansible/library/testbed_vm_info.py`
Content:
```
1 #!/usr/bin/env python
2
3 import re
4 import yaml
5 import os
6 import traceback
7 import subprocess
8 import ipaddr as ipaddress
9 from operator import itemgetter
10 from itertools import groupby
11 from collections import defaultdict
12 import re
13
14 from ansible.parsing.dataloader import DataLoader
15 from ansible.inventory.manager import InventoryManager
16
17 DOCUMENTATION = '''
18 module: testbed_vm_info.py
19 Ansible_version_added: 2.0.0.2
20 short_description: Gather all related VMs info
21 Description:
22 When deploy testbed topology with VM connected to SONiC, gather neighbor VMs info for generating SONiC minigraph file
23 options:
24 base_vm: base vm name defined in testbed.csv for the deployed topology; required: True
25 topo: topology name defined in testbed.csv for the deployed topology; required: True
26 vm_file: the virtual machine file path ; default: 'veos'
27
28 Ansible_facts:
29 'neighbor_eosvm_mgmt': all VM hosts management IPs
30 'topoall': topology information
31
32 '''
33
34 EXAMPLES = '''
35 - name: gather vm information
36 testbed_vm_info: base_vm='VM0100' topo='t1' vm_file='veos'
37 '''
38
39 ### Here are the assumption/expectation of files to gather VM informations, if the file location or name changes, please modify it here
40 TOPO_PATH = 'vars/'
41 VM_INV_FILE = 'veos'
42
43
44 class TestbedVMFacts():
45 """
46 Retrieve testbed VMs management information that for a specified toplogy defined in testbed.csv
47
48 """
49
50 def __init__(self, toponame, vmbase, vmfile):
51 CLET_SUFFIX = "-clet"
52 toponame = re.sub(CLET_SUFFIX + "$", "", toponame)
53 self.topofile = TOPO_PATH+'topo_'+toponame +'.yml'
54 self.start_index = int(re.findall('VM(\d+)', vmbase)[0])
55 self.vmhosts = {}
56 self.vmfile = vmfile
57 self.inv_mgr = InventoryManager(loader=DataLoader(), sources=self.vmfile)
58 return
59
60
61 def get_neighbor_eos(self):
62 eos = {}
63 with open(self.topofile) as f:
64 vm_topology = yaml.load(f)
65 self.topoall = vm_topology
66 for vm in vm_topology['topology']['VMs']:
67 vm_index = int(vm_topology['topology']['VMs'][vm]['vm_offset'])+self.start_index
68 eos[vm] = vm_index
69 return eos
70
71
72 def main():
73 module = AnsibleModule(
74 argument_spec=dict(
75 base_vm=dict(required=True, type='str'),
76 topo=dict(required=True, type='str'),
77 vm_file=dict(default=VM_INV_FILE, type='str')
78 ),
79 supports_check_mode=True
80 )
81 m_args = module.params
82 topo_type = m_args['topo']
83 if 'ptf' in topo_type:
84 module.exit_json(ansible_facts={'neighbor_eosvm_mgmt': {}})
85 try:
86 vmsall = TestbedVMFacts(m_args['topo'], m_args['base_vm'], m_args['vm_file'])
87 neighbor_eos = vmsall.get_neighbor_eos()
88 for eos in neighbor_eos:
89 vmname = 'VM'+format(neighbor_eos[eos], '04d')
90 if vmname in vmsall.inv_mgr.hosts:
91 vmsall.vmhosts[eos] = vmsall.inv_mgr.get_host(vmname).get_vars()['ansible_host']
92 else:
93 err_msg = "cannot find the vm " + vmname + " in VM inventory file, please make sure you have enough VMs for the topology you are using"
94 module.fail_json(msg=err_msg)
95 module.exit_json(ansible_facts={'neighbor_eosvm_mgmt':vmsall.vmhosts, 'topoall': vmsall.topoall})
96 except (IOError, OSError):
97 module.fail_json(msg="Can not find file "+vmsall.topofile+" or "+m_args['vm_file']+" or "+VM_INV_FILE)
98 except Exception as e:
99 module.fail_json(msg=traceback.format_exc())
100
101 from ansible.module_utils.basic import *
102 if __name__ == "__main__":
103 main()
104
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ansible/library/testbed_vm_info.py b/ansible/library/testbed_vm_info.py
--- a/ansible/library/testbed_vm_info.py
+++ b/ansible/library/testbed_vm_info.py
@@ -39,6 +39,7 @@
### Here are the assumption/expectation of files to gather VM informations, if the file location or name changes, please modify it here
TOPO_PATH = 'vars/'
VM_INV_FILE = 'veos'
+TGEN_MGMT_NETWORK = '10.65.32.0/24'
class TestbedVMFacts():
@@ -51,7 +52,10 @@
CLET_SUFFIX = "-clet"
toponame = re.sub(CLET_SUFFIX + "$", "", toponame)
self.topofile = TOPO_PATH+'topo_'+toponame +'.yml'
- self.start_index = int(re.findall('VM(\d+)', vmbase)[0])
+ if vmbase != '':
+ self.start_index = int(re.findall('VM(\d+)', vmbase)[0])
+ else:
+ self.start_index = 0
self.vmhosts = {}
self.vmfile = vmfile
self.inv_mgr = InventoryManager(loader=DataLoader(), sources=self.vmfile)
@@ -85,9 +89,12 @@
try:
vmsall = TestbedVMFacts(m_args['topo'], m_args['base_vm'], m_args['vm_file'])
neighbor_eos = vmsall.get_neighbor_eos()
- for eos in neighbor_eos:
+ tgen_mgmt_ips = list(ipaddress.IPNetwork(unicode(TGEN_MGMT_NETWORK)))
+ for index, eos in enumerate(neighbor_eos):
vmname = 'VM'+format(neighbor_eos[eos], '04d')
- if vmname in vmsall.inv_mgr.hosts:
+ if 'tgen' in topo_type:
+ vmsall.vmhosts[eos] = str(tgen_mgmt_ips[index])
+ elif vmname in vmsall.inv_mgr.hosts:
vmsall.vmhosts[eos] = vmsall.inv_mgr.get_host(vmname).get_vars()['ansible_host']
else:
err_msg = "cannot find the vm " + vmname + " in VM inventory file, please make sure you have enough VMs for the topology you are using"
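Editorial note (not part of the dataset record): the golden diff above does two things. It tolerates an empty `base_vm` by falling back to `start_index = 0`, and, for `tgen` topologies, it hands each neighbor a static management address from `10.65.32.0/24` in neighbor order instead of looking the VM up in the inventory file. The sketch below re-creates that second step with the standard-library `ipaddress` module rather than the `ipaddr` package the original file imports; the neighbor names are hypothetical.

```python
# Minimal sketch of the tgen branch added by the patch, using stdlib ipaddress.
import ipaddress

TGEN_MGMT_NETWORK = "10.65.32.0/24"  # value taken from the patch

def assign_tgen_mgmt_ips(neighbor_names):
    """Map each tgen neighbor to a static management IP from the tgen network."""
    tgen_mgmt_ips = list(ipaddress.ip_network(TGEN_MGMT_NETWORK))
    return {name: str(tgen_mgmt_ips[index]) for index, name in enumerate(neighbor_names)}

print(assign_tgen_mgmt_ips(["VM0100", "VM0101"]))
# -> {'VM0100': '10.65.32.0', 'VM0101': '10.65.32.1'}
```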
| {"golden_diff": "diff --git a/ansible/library/testbed_vm_info.py b/ansible/library/testbed_vm_info.py\n--- a/ansible/library/testbed_vm_info.py\n+++ b/ansible/library/testbed_vm_info.py\n@@ -39,6 +39,7 @@\n ### Here are the assumption/expectation of files to gather VM informations, if the file location or name changes, please modify it here\n TOPO_PATH = 'vars/'\n VM_INV_FILE = 'veos'\n+TGEN_MGMT_NETWORK = '10.65.32.0/24'\n \n \n class TestbedVMFacts():\n@@ -51,7 +52,10 @@\n CLET_SUFFIX = \"-clet\"\n toponame = re.sub(CLET_SUFFIX + \"$\", \"\", toponame)\n self.topofile = TOPO_PATH+'topo_'+toponame +'.yml'\n- self.start_index = int(re.findall('VM(\\d+)', vmbase)[0])\n+ if vmbase != '':\n+ self.start_index = int(re.findall('VM(\\d+)', vmbase)[0])\n+ else:\n+ self.start_index = 0\n self.vmhosts = {}\n self.vmfile = vmfile\n self.inv_mgr = InventoryManager(loader=DataLoader(), sources=self.vmfile)\n@@ -85,9 +89,12 @@\n try:\n vmsall = TestbedVMFacts(m_args['topo'], m_args['base_vm'], m_args['vm_file'])\n neighbor_eos = vmsall.get_neighbor_eos()\n- for eos in neighbor_eos:\n+ tgen_mgmt_ips = list(ipaddress.IPNetwork(unicode(TGEN_MGMT_NETWORK)))\n+ for index, eos in enumerate(neighbor_eos):\n vmname = 'VM'+format(neighbor_eos[eos], '04d')\n- if vmname in vmsall.inv_mgr.hosts:\n+ if 'tgen' in topo_type:\n+ vmsall.vmhosts[eos] = str(tgen_mgmt_ips[index])\n+ elif vmname in vmsall.inv_mgr.hosts:\n vmsall.vmhosts[eos] = vmsall.inv_mgr.get_host(vmname).get_vars()['ansible_host']\n else:\n err_msg = \"cannot find the vm \" + vmname + \" in VM inventory file, please make sure you have enough VMs for the topology you are using\"\n", "issue": "Investigate RDMA nightly run failures on 202012\n<!--\r\nIf you are reporting a new issue, make sure that we do not have any duplicates\r\nalready open. You can ensure this by searching the issue list for this\r\nrepository. If there is a duplicate, please close your issue and add a comment\r\nto the existing issue instead.\r\n\r\nIf you suspect your issue is a bug, please edit your issue description to\r\ninclude the BUG REPORT INFORMATION shown below. If you fail to provide this\r\ninformation within 7 days, we cannot debug your issue and will close it. We\r\nwill, however, reopen it if you later provide the information.\r\n\r\nFor more information about reporting issues, see\r\nhttps://github.com/Azure/SONiC/wiki#report-issues\r\n\r\n---------------------------------------------------\r\nGENERAL SUPPORT INFORMATION\r\n---------------------------------------------------\r\n\r\nThe GitHub issue tracker is for bug reports and feature requests.\r\nGeneral support can be found at the following locations:\r\n\r\n- SONiC Support Forums - https://groups.google.com/forum/#!forum/sonicproject\r\n\r\n---------------------------------------------------\r\nBUG REPORT INFORMATION\r\n---------------------------------------------------\r\nUse the commands below to provide key information from your environment:\r\nYou do NOT have to include this information if this is a FEATURE REQUEST\r\n-->\r\n\r\n**Description**\r\nRDMA test runs on TD2 with 202012 are quite flaky. 
Different set of test failures are seen daily and sometimes test fails at pretest\r\n09/09 run skipped all tgen tests with the following reason\r\nSKIPPED [1] /azp/agent/_work/27/s/tests/common/helpers/assertions.py:13: Port is not mapped to the expected DUT\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport re\nimport yaml\nimport os\nimport traceback\nimport subprocess\nimport ipaddr as ipaddress\nfrom operator import itemgetter\nfrom itertools import groupby\nfrom collections import defaultdict\nimport re\n\nfrom ansible.parsing.dataloader import DataLoader\nfrom ansible.inventory.manager import InventoryManager\n\nDOCUMENTATION = '''\nmodule: testbed_vm_info.py\nAnsible_version_added: 2.0.0.2\nshort_description: Gather all related VMs info\nDescription:\n When deploy testbed topology with VM connected to SONiC, gather neighbor VMs info for generating SONiC minigraph file\n options:\n base_vm: base vm name defined in testbed.csv for the deployed topology; required: True\n topo: topology name defined in testbed.csv for the deployed topology; required: True\n vm_file: the virtual machine file path ; default: 'veos'\n\nAnsible_facts:\n 'neighbor_eosvm_mgmt': all VM hosts management IPs\n 'topoall': topology information\n\n'''\n\nEXAMPLES = '''\n - name: gather vm information\n testbed_vm_info: base_vm='VM0100' topo='t1' vm_file='veos'\n'''\n\n### Here are the assumption/expectation of files to gather VM informations, if the file location or name changes, please modify it here\nTOPO_PATH = 'vars/'\nVM_INV_FILE = 'veos'\n\n\nclass TestbedVMFacts():\n \"\"\"\n Retrieve testbed VMs management information that for a specified toplogy defined in testbed.csv\n\n \"\"\"\n\n def __init__(self, toponame, vmbase, vmfile):\n CLET_SUFFIX = \"-clet\"\n toponame = re.sub(CLET_SUFFIX + \"$\", \"\", toponame)\n self.topofile = TOPO_PATH+'topo_'+toponame +'.yml'\n self.start_index = int(re.findall('VM(\\d+)', vmbase)[0])\n self.vmhosts = {}\n self.vmfile = vmfile\n self.inv_mgr = InventoryManager(loader=DataLoader(), sources=self.vmfile)\n return\n\n\n def get_neighbor_eos(self):\n eos = {}\n with open(self.topofile) as f:\n vm_topology = yaml.load(f)\n self.topoall = vm_topology\n for vm in vm_topology['topology']['VMs']:\n vm_index = int(vm_topology['topology']['VMs'][vm]['vm_offset'])+self.start_index\n eos[vm] = vm_index\n return eos\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n base_vm=dict(required=True, type='str'),\n topo=dict(required=True, type='str'),\n vm_file=dict(default=VM_INV_FILE, type='str')\n ),\n supports_check_mode=True\n )\n m_args = module.params\n topo_type = m_args['topo']\n if 'ptf' in topo_type:\n module.exit_json(ansible_facts={'neighbor_eosvm_mgmt': {}})\n try:\n vmsall = TestbedVMFacts(m_args['topo'], m_args['base_vm'], m_args['vm_file'])\n neighbor_eos = vmsall.get_neighbor_eos()\n for eos in neighbor_eos:\n vmname = 'VM'+format(neighbor_eos[eos], '04d')\n if vmname in vmsall.inv_mgr.hosts:\n vmsall.vmhosts[eos] = vmsall.inv_mgr.get_host(vmname).get_vars()['ansible_host']\n else:\n err_msg = \"cannot find the vm \" + vmname + \" in VM inventory file, please make sure you have enough VMs for the topology you are using\"\n module.fail_json(msg=err_msg)\n module.exit_json(ansible_facts={'neighbor_eosvm_mgmt':vmsall.vmhosts, 'topoall': vmsall.topoall})\n except (IOError, OSError):\n module.fail_json(msg=\"Can not find file \"+vmsall.topofile+\" or \"+m_args['vm_file']+\" or \"+VM_INV_FILE)\n except Exception as e:\n 
module.fail_json(msg=traceback.format_exc())\n\nfrom ansible.module_utils.basic import *\nif __name__ == \"__main__\":\n main()\n\n", "path": "ansible/library/testbed_vm_info.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport re\nimport yaml\nimport os\nimport traceback\nimport subprocess\nimport ipaddr as ipaddress\nfrom operator import itemgetter\nfrom itertools import groupby\nfrom collections import defaultdict\nimport re\n\nfrom ansible.parsing.dataloader import DataLoader\nfrom ansible.inventory.manager import InventoryManager\n\nDOCUMENTATION = '''\nmodule: testbed_vm_info.py\nAnsible_version_added: 2.0.0.2\nshort_description: Gather all related VMs info\nDescription:\n When deploy testbed topology with VM connected to SONiC, gather neighbor VMs info for generating SONiC minigraph file\n options:\n base_vm: base vm name defined in testbed.csv for the deployed topology; required: True\n topo: topology name defined in testbed.csv for the deployed topology; required: True\n vm_file: the virtual machine file path ; default: 'veos'\n\nAnsible_facts:\n 'neighbor_eosvm_mgmt': all VM hosts management IPs\n 'topoall': topology information\n\n'''\n\nEXAMPLES = '''\n - name: gather vm information\n testbed_vm_info: base_vm='VM0100' topo='t1' vm_file='veos'\n'''\n\n### Here are the assumption/expectation of files to gather VM informations, if the file location or name changes, please modify it here\nTOPO_PATH = 'vars/'\nVM_INV_FILE = 'veos'\nTGEN_MGMT_NETWORK = '10.65.32.0/24'\n\n\nclass TestbedVMFacts():\n \"\"\"\n Retrieve testbed VMs management information that for a specified toplogy defined in testbed.csv\n\n \"\"\"\n\n def __init__(self, toponame, vmbase, vmfile):\n CLET_SUFFIX = \"-clet\"\n toponame = re.sub(CLET_SUFFIX + \"$\", \"\", toponame)\n self.topofile = TOPO_PATH+'topo_'+toponame +'.yml'\n if vmbase != '':\n self.start_index = int(re.findall('VM(\\d+)', vmbase)[0])\n else:\n self.start_index = 0\n self.vmhosts = {}\n self.vmfile = vmfile\n self.inv_mgr = InventoryManager(loader=DataLoader(), sources=self.vmfile)\n return\n\n\n def get_neighbor_eos(self):\n eos = {}\n with open(self.topofile) as f:\n vm_topology = yaml.load(f)\n self.topoall = vm_topology\n for vm in vm_topology['topology']['VMs']:\n vm_index = int(vm_topology['topology']['VMs'][vm]['vm_offset'])+self.start_index\n eos[vm] = vm_index\n return eos\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n base_vm=dict(required=True, type='str'),\n topo=dict(required=True, type='str'),\n vm_file=dict(default=VM_INV_FILE, type='str')\n ),\n supports_check_mode=True\n )\n m_args = module.params\n topo_type = m_args['topo']\n if 'ptf' in topo_type:\n module.exit_json(ansible_facts={'neighbor_eosvm_mgmt': {}})\n try:\n vmsall = TestbedVMFacts(m_args['topo'], m_args['base_vm'], m_args['vm_file'])\n neighbor_eos = vmsall.get_neighbor_eos()\n tgen_mgmt_ips = list(ipaddress.IPNetwork(unicode(TGEN_MGMT_NETWORK)))\n for index, eos in enumerate(neighbor_eos):\n vmname = 'VM'+format(neighbor_eos[eos], '04d')\n if 'tgen' in topo_type:\n vmsall.vmhosts[eos] = str(tgen_mgmt_ips[index])\n elif vmname in vmsall.inv_mgr.hosts:\n vmsall.vmhosts[eos] = vmsall.inv_mgr.get_host(vmname).get_vars()['ansible_host']\n else:\n err_msg = \"cannot find the vm \" + vmname + \" in VM inventory file, please make sure you have enough VMs for the topology you are using\"\n module.fail_json(msg=err_msg)\n module.exit_json(ansible_facts={'neighbor_eosvm_mgmt':vmsall.vmhosts, 'topoall': vmsall.topoall})\n except (IOError, 
OSError):\n module.fail_json(msg=\"Can not find file \"+vmsall.topofile+\" or \"+m_args['vm_file']+\" or \"+VM_INV_FILE)\n except Exception as e:\n module.fail_json(msg=traceback.format_exc())\n\nfrom ansible.module_utils.basic import *\nif __name__ == \"__main__\":\n main()\n\n", "path": "ansible/library/testbed_vm_info.py"}]} | 1,748 | 523 |
gh_patches_debug_24704 | rasdani/github-patches | git_diff | AlexsLemonade__refinebio-1839 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Quantpendia failed to upload to S3
### Context
We kicked off quantpendia jobs for all organisms but they weren't succeeding because they couldn't upload to S3.
### Problem or idea
This is probably just because the worker instances don't have access to the compendia S3 bucket. The smasher probably has those permissions, but it looks like the workers don't.
### Solution or next step
Give worker instances permissions to push to the compendia S3 bucket.
--- END ISSUE ---
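Editorial aside (not part of the original issue): if the suspicion about missing bucket permissions is right, the quickest confirmation from a worker instance is a one-off write with boto3. The sketch below is a generic permission smoke test; the bucket name is a placeholder, and the project itself performs the real upload through `ComputedFile.sync_to_s3`, as the file below shows.

```python
# Assumption-laden sketch: check whether the instance role can PUT into a bucket.
import boto3
from botocore.exceptions import ClientError

def can_write_to_bucket(bucket: str, key: str = "permission-check.txt") -> bool:
    """Return True if the current credentials may PUT objects into `bucket`."""
    s3 = boto3.client("s3")
    try:
        s3.put_object(Bucket=bucket, Key=key, Body=b"ok")
        return True
    except ClientError as err:
        # An AccessDenied error code here points at missing IAM permissions.
        print("Upload failed:", err.response["Error"]["Code"])
        return False

if __name__ == "__main__":
    print(can_write_to_bucket("example-compendia-bucket"))
```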
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `workers/data_refinery_workers/processors/create_quantpendia.py`
Content:
```
1 import os
2 import logging
3 import shutil
4 import time
5 from django.utils import timezone
6 from typing import Dict, List, Tuple
7 import psutil
8
9 from data_refinery_common.job_lookup import PipelineEnum
10 from data_refinery_common.logging import get_and_configure_logger
11 from data_refinery_common.models import (ComputationalResult,
12 ComputedFile,
13 Organism,
14 Pipeline,
15 Sample)
16 from data_refinery_common.utils import get_env_variable
17 from data_refinery_workers.processors import smashing_utils, utils
18
19 S3_BUCKET_NAME = get_env_variable("S3_BUCKET_NAME", "data-refinery")
20 SMASHING_DIR = "/home/user/data_store/smashed/"
21
22 logger = get_and_configure_logger(__name__)
23 logger.setLevel(logging.getLevelName('DEBUG'))
24
25 def create_quantpendia(job_id: int) -> None:
26 pipeline = Pipeline(name=PipelineEnum.CREATE_QUANTPENDIA.value)
27 job_context = utils.run_pipeline({"job_id": job_id, "pipeline": pipeline},
28 [utils.start_job,
29 make_dirs,
30 download_files,
31 create_result_objects,
32 remove_job_dir,
33 utils.end_job])
34 return job_context
35
36
37 def download_files(job_context: Dict) -> Dict:
38 job_context['time_start'] = timezone.now()
39
40 num_samples = 0
41 for key, samples in job_context['samples'].items():
42 outfile_dir = job_context['output_dir'] + key + '/'
43 os.makedirs(outfile_dir, exist_ok=True)
44
45 logger.debug("Downloading quant.sf files for quantpendia.",
46 accession_code=key,
47 job_id=job_context['job_id'],
48 **get_process_stats())
49
50 # download quant.sf files directly into the dataset folder
51 num_samples += smashing_utils.sync_quant_files(outfile_dir, samples)
52
53 job_context['num_samples'] = num_samples
54 job_context['time_end'] = timezone.now()
55 job_context['formatted_command'] = "create_quantpendia.py"
56
57 logger.debug("Finished downloading quant.sf files for quantpendia.",
58 job_id=job_context['job_id'],
59 total_downloaded_files=num_samples,
60 **get_process_stats())
61
62 return job_context
63
64
65 def create_result_objects(job_context: Dict) -> Dict:
66 """
67 Store and host the result as a ComputationalResult object.
68 """
69 result = ComputationalResult()
70 result.commands.append(" ".join(job_context['formatted_command']))
71 result.is_ccdl = True
72 result.is_public = True
73 result.time_start = job_context['time_start']
74 result.time_end = job_context['time_end']
75 try:
76 processor_key = "CREATE_QUANTPENDIA"
77 result.processor = utils.find_processor(processor_key)
78 except Exception as e:
79 return utils.handle_processor_exception(job_context, processor_key, e)
80 result.save()
81
82 compendia_organism = _get_organisms(job_context['samples']).first()
83
84 # Create the resulting archive
85 smashing_utils.write_non_data_files(job_context)
86 final_zip_base = job_context['job_dir'] + compendia_organism.name + "_rnaseq_compendia"
87 shutil.copy("/home/user/README_QUANT.md", job_context["output_dir"] + "/README.md")
88
89 archive_path = shutil.make_archive(final_zip_base, 'zip', job_context["output_dir"])
90 compendia_version = _get_next_compendia_version(compendia_organism)
91
92 archive_computed_file = ComputedFile()
93
94 archive_computed_file.absolute_file_path = archive_path
95 archive_computed_file.filename = archive_path.split('/')[-1]
96 archive_computed_file.calculate_sha1()
97 archive_computed_file.calculate_size()
98 archive_computed_file.is_smashable = False
99 archive_computed_file.is_qn_target = False
100 archive_computed_file.result = result
101 archive_computed_file.is_compendia = True
102 archive_computed_file.quant_sf_only = True
103 archive_computed_file.compendia_organism = compendia_organism
104 archive_computed_file.compendia_version = compendia_version
105 archive_computed_file.save()
106
107 logger.info("Quantpendia created!",
108 archive_path=archive_path,
109 organism_name=compendia_organism.name)
110
111 # Upload the result to S3
112 timestamp = str(int(time.time()))
113 s3_key = compendia_organism.name + "_" + str(compendia_version) + "_" + timestamp + ".zip"
114 archive_computed_file.sync_to_s3(S3_BUCKET_NAME, s3_key)
115
116 job_context['result'] = result
117 job_context['computed_files'] = [archive_computed_file]
118 job_context['success'] = True
119
120 return job_context
121
122
123 def remove_job_dir(job_context: Dict):
124 """ remove the directory when the job is successful. At this point
125 the quantpendia was already zipped and uploaded. """
126 shutil.rmtree(job_context["job_dir"], ignore_errors=True)
127 return job_context
128
129 def make_dirs(job_context: Dict):
130 dataset_id = str(job_context["dataset"].pk)
131 job_context["job_dir"] = "/home/user/data_store/smashed/" + dataset_id + "/"
132 os.makedirs(job_context["job_dir"], exist_ok=True)
133 job_context["output_dir"] = job_context["job_dir"] + "output/"
134 os.makedirs(job_context["output_dir"], exist_ok=True)
135 return job_context
136
137 def get_process_stats():
138 BYTES_IN_GB = 1024 * 1024 * 1024
139 process = psutil.Process(os.getpid())
140 ram_in_GB = process.memory_info().rss / BYTES_IN_GB
141 return { 'total_cpu': psutil.cpu_percent(), 'process_ram': ram_in_GB }
142
143
144 def _get_organisms(aggregated_samples: Dict[str, Sample]) -> List[Organism]:
145 organisms = set()
146 for key, samples in aggregated_samples.items():
147 organism_ids = samples.values_list('organism__id', flat=True).distinct()
148 organisms.update(organism_ids)
149
150 return Organism.objects.filter(id__in=list(organisms))
151
152
153 def _get_next_compendia_version(organism: Organism) -> int:
154 last_compendia = ComputedFile.objects\
155 .filter(is_compendia=True, quant_sf_only=True, compendia_organism=organism)\
156 .order_by('-compendia_version').first()
157
158 if last_compendia:
159 return last_compendia.compendia_version + 1
160
161 # otherwise this is the first compendia that we are generating
162 return 1
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/workers/data_refinery_workers/processors/create_quantpendia.py b/workers/data_refinery_workers/processors/create_quantpendia.py
--- a/workers/data_refinery_workers/processors/create_quantpendia.py
+++ b/workers/data_refinery_workers/processors/create_quantpendia.py
@@ -3,6 +3,7 @@
import shutil
import time
from django.utils import timezone
+from django.conf import settings
from typing import Dict, List, Tuple
import psutil
@@ -114,7 +115,6 @@
archive_computed_file.sync_to_s3(S3_BUCKET_NAME, s3_key)
job_context['result'] = result
- job_context['computed_files'] = [archive_computed_file]
job_context['success'] = True
return job_context
@@ -123,7 +123,9 @@
def remove_job_dir(job_context: Dict):
""" remove the directory when the job is successful. At this point
the quantpendia was already zipped and uploaded. """
- shutil.rmtree(job_context["job_dir"], ignore_errors=True)
+ # don't remove the files when running locally or for tests
+ if settings.RUNNING_IN_CLOUD:
+ shutil.rmtree(job_context["job_dir"], ignore_errors=True)
return job_context
def make_dirs(job_context: Dict):
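Editorial note (not part of the dataset record): the accepted patch gates the directory cleanup on `settings.RUNNING_IN_CLOUD`, so local and test runs keep their output, and it stops appending the archive to `job_context['computed_files']`. The snippet below reduces the cleanup pattern to a standalone function; `RUNNING_IN_CLOUD` stands in for the Django setting of the same name.

```python
# Sketch of the guarded-cleanup pattern introduced by the patch.
import shutil

RUNNING_IN_CLOUD = False  # stands in for django.conf.settings.RUNNING_IN_CLOUD

def remove_job_dir(job_context: dict) -> dict:
    """Delete the job directory only in cloud runs, so local/test output survives."""
    if RUNNING_IN_CLOUD:
        shutil.rmtree(job_context["job_dir"], ignore_errors=True)
    return job_context
```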
| {"golden_diff": "diff --git a/workers/data_refinery_workers/processors/create_quantpendia.py b/workers/data_refinery_workers/processors/create_quantpendia.py\n--- a/workers/data_refinery_workers/processors/create_quantpendia.py\n+++ b/workers/data_refinery_workers/processors/create_quantpendia.py\n@@ -3,6 +3,7 @@\n import shutil\n import time\n from django.utils import timezone\n+from django.conf import settings\n from typing import Dict, List, Tuple\n import psutil\n \n@@ -114,7 +115,6 @@\n archive_computed_file.sync_to_s3(S3_BUCKET_NAME, s3_key)\n \n job_context['result'] = result\n- job_context['computed_files'] = [archive_computed_file]\n job_context['success'] = True\n \n return job_context\n@@ -123,7 +123,9 @@\n def remove_job_dir(job_context: Dict):\n \"\"\" remove the directory when the job is successful. At this point\n the quantpendia was already zipped and uploaded. \"\"\"\n- shutil.rmtree(job_context[\"job_dir\"], ignore_errors=True)\n+ # don't remove the files when running locally or for tests\n+ if settings.RUNNING_IN_CLOUD:\n+ shutil.rmtree(job_context[\"job_dir\"], ignore_errors=True)\n return job_context\n \n def make_dirs(job_context: Dict):\n", "issue": "Quantpendia failed to upload to S3\n### Context\r\n\r\nWe kicked off quantpendia jobs for all organisms but they weren't succeeding because they couldn't upload to S3.\r\n\r\n### Problem or idea\r\n\r\nThis is probably just because the worker instances don't have access to the compendia S3 bucket. The smasher probably has those permissions, but it looks like the workers don't.\r\n\r\n### Solution or next step\r\n\r\nGive worker instances permissions to push to the compendia S3 bucket.\n", "before_files": [{"content": "import os\nimport logging\nimport shutil\nimport time\nfrom django.utils import timezone\nfrom typing import Dict, List, Tuple\nimport psutil\n\nfrom data_refinery_common.job_lookup import PipelineEnum\nfrom data_refinery_common.logging import get_and_configure_logger\nfrom data_refinery_common.models import (ComputationalResult,\n ComputedFile,\n Organism,\n Pipeline,\n Sample)\nfrom data_refinery_common.utils import get_env_variable\nfrom data_refinery_workers.processors import smashing_utils, utils\n\nS3_BUCKET_NAME = get_env_variable(\"S3_BUCKET_NAME\", \"data-refinery\")\nSMASHING_DIR = \"/home/user/data_store/smashed/\"\n\nlogger = get_and_configure_logger(__name__)\nlogger.setLevel(logging.getLevelName('DEBUG'))\n\ndef create_quantpendia(job_id: int) -> None:\n pipeline = Pipeline(name=PipelineEnum.CREATE_QUANTPENDIA.value)\n job_context = utils.run_pipeline({\"job_id\": job_id, \"pipeline\": pipeline},\n [utils.start_job,\n make_dirs,\n download_files,\n create_result_objects,\n remove_job_dir,\n utils.end_job])\n return job_context\n\n\ndef download_files(job_context: Dict) -> Dict:\n job_context['time_start'] = timezone.now()\n\n num_samples = 0\n for key, samples in job_context['samples'].items():\n outfile_dir = job_context['output_dir'] + key + '/'\n os.makedirs(outfile_dir, exist_ok=True)\n\n logger.debug(\"Downloading quant.sf files for quantpendia.\",\n accession_code=key,\n job_id=job_context['job_id'],\n **get_process_stats())\n\n # download quant.sf files directly into the dataset folder\n num_samples += smashing_utils.sync_quant_files(outfile_dir, samples)\n\n job_context['num_samples'] = num_samples\n job_context['time_end'] = timezone.now()\n job_context['formatted_command'] = \"create_quantpendia.py\"\n\n logger.debug(\"Finished downloading quant.sf files for quantpendia.\",\n 
job_id=job_context['job_id'],\n total_downloaded_files=num_samples,\n **get_process_stats())\n\n return job_context\n\n\ndef create_result_objects(job_context: Dict) -> Dict:\n \"\"\"\n Store and host the result as a ComputationalResult object.\n \"\"\"\n result = ComputationalResult()\n result.commands.append(\" \".join(job_context['formatted_command']))\n result.is_ccdl = True\n result.is_public = True\n result.time_start = job_context['time_start']\n result.time_end = job_context['time_end']\n try:\n processor_key = \"CREATE_QUANTPENDIA\"\n result.processor = utils.find_processor(processor_key)\n except Exception as e:\n return utils.handle_processor_exception(job_context, processor_key, e)\n result.save()\n\n compendia_organism = _get_organisms(job_context['samples']).first()\n\n # Create the resulting archive\n smashing_utils.write_non_data_files(job_context)\n final_zip_base = job_context['job_dir'] + compendia_organism.name + \"_rnaseq_compendia\"\n shutil.copy(\"/home/user/README_QUANT.md\", job_context[\"output_dir\"] + \"/README.md\")\n\n archive_path = shutil.make_archive(final_zip_base, 'zip', job_context[\"output_dir\"])\n compendia_version = _get_next_compendia_version(compendia_organism)\n\n archive_computed_file = ComputedFile()\n\n archive_computed_file.absolute_file_path = archive_path\n archive_computed_file.filename = archive_path.split('/')[-1]\n archive_computed_file.calculate_sha1()\n archive_computed_file.calculate_size()\n archive_computed_file.is_smashable = False\n archive_computed_file.is_qn_target = False\n archive_computed_file.result = result\n archive_computed_file.is_compendia = True\n archive_computed_file.quant_sf_only = True\n archive_computed_file.compendia_organism = compendia_organism\n archive_computed_file.compendia_version = compendia_version\n archive_computed_file.save()\n\n logger.info(\"Quantpendia created!\",\n archive_path=archive_path,\n organism_name=compendia_organism.name)\n\n # Upload the result to S3\n timestamp = str(int(time.time()))\n s3_key = compendia_organism.name + \"_\" + str(compendia_version) + \"_\" + timestamp + \".zip\"\n archive_computed_file.sync_to_s3(S3_BUCKET_NAME, s3_key)\n\n job_context['result'] = result\n job_context['computed_files'] = [archive_computed_file]\n job_context['success'] = True\n\n return job_context\n\n\ndef remove_job_dir(job_context: Dict):\n \"\"\" remove the directory when the job is successful. At this point\n the quantpendia was already zipped and uploaded. 
\"\"\"\n shutil.rmtree(job_context[\"job_dir\"], ignore_errors=True)\n return job_context\n\ndef make_dirs(job_context: Dict):\n dataset_id = str(job_context[\"dataset\"].pk)\n job_context[\"job_dir\"] = \"/home/user/data_store/smashed/\" + dataset_id + \"/\"\n os.makedirs(job_context[\"job_dir\"], exist_ok=True)\n job_context[\"output_dir\"] = job_context[\"job_dir\"] + \"output/\"\n os.makedirs(job_context[\"output_dir\"], exist_ok=True)\n return job_context\n\ndef get_process_stats():\n BYTES_IN_GB = 1024 * 1024 * 1024\n process = psutil.Process(os.getpid())\n ram_in_GB = process.memory_info().rss / BYTES_IN_GB\n return { 'total_cpu': psutil.cpu_percent(), 'process_ram': ram_in_GB }\n\n\ndef _get_organisms(aggregated_samples: Dict[str, Sample]) -> List[Organism]:\n organisms = set()\n for key, samples in aggregated_samples.items():\n organism_ids = samples.values_list('organism__id', flat=True).distinct()\n organisms.update(organism_ids)\n\n return Organism.objects.filter(id__in=list(organisms))\n\n\ndef _get_next_compendia_version(organism: Organism) -> int:\n last_compendia = ComputedFile.objects\\\n .filter(is_compendia=True, quant_sf_only=True, compendia_organism=organism)\\\n .order_by('-compendia_version').first()\n\n if last_compendia:\n return last_compendia.compendia_version + 1\n\n # otherwise this is the first compendia that we are generating\n return 1\n", "path": "workers/data_refinery_workers/processors/create_quantpendia.py"}], "after_files": [{"content": "import os\nimport logging\nimport shutil\nimport time\nfrom django.utils import timezone\nfrom django.conf import settings\nfrom typing import Dict, List, Tuple\nimport psutil\n\nfrom data_refinery_common.job_lookup import PipelineEnum\nfrom data_refinery_common.logging import get_and_configure_logger\nfrom data_refinery_common.models import (ComputationalResult,\n ComputedFile,\n Organism,\n Pipeline,\n Sample)\nfrom data_refinery_common.utils import get_env_variable\nfrom data_refinery_workers.processors import smashing_utils, utils\n\nS3_BUCKET_NAME = get_env_variable(\"S3_BUCKET_NAME\", \"data-refinery\")\nSMASHING_DIR = \"/home/user/data_store/smashed/\"\n\nlogger = get_and_configure_logger(__name__)\nlogger.setLevel(logging.getLevelName('DEBUG'))\n\ndef create_quantpendia(job_id: int) -> None:\n pipeline = Pipeline(name=PipelineEnum.CREATE_QUANTPENDIA.value)\n job_context = utils.run_pipeline({\"job_id\": job_id, \"pipeline\": pipeline},\n [utils.start_job,\n make_dirs,\n download_files,\n create_result_objects,\n remove_job_dir,\n utils.end_job])\n return job_context\n\n\ndef download_files(job_context: Dict) -> Dict:\n job_context['time_start'] = timezone.now()\n\n num_samples = 0\n for key, samples in job_context['samples'].items():\n outfile_dir = job_context['output_dir'] + key + '/'\n os.makedirs(outfile_dir, exist_ok=True)\n\n logger.debug(\"Downloading quant.sf files for quantpendia.\",\n accession_code=key,\n job_id=job_context['job_id'],\n **get_process_stats())\n\n # download quant.sf files directly into the dataset folder\n num_samples += smashing_utils.sync_quant_files(outfile_dir, samples)\n\n job_context['num_samples'] = num_samples\n job_context['time_end'] = timezone.now()\n job_context['formatted_command'] = \"create_quantpendia.py\"\n\n logger.debug(\"Finished downloading quant.sf files for quantpendia.\",\n job_id=job_context['job_id'],\n total_downloaded_files=num_samples,\n **get_process_stats())\n\n return job_context\n\n\ndef create_result_objects(job_context: Dict) -> Dict:\n \"\"\"\n 
Store and host the result as a ComputationalResult object.\n \"\"\"\n result = ComputationalResult()\n result.commands.append(\" \".join(job_context['formatted_command']))\n result.is_ccdl = True\n result.is_public = True\n result.time_start = job_context['time_start']\n result.time_end = job_context['time_end']\n try:\n processor_key = \"CREATE_QUANTPENDIA\"\n result.processor = utils.find_processor(processor_key)\n except Exception as e:\n return utils.handle_processor_exception(job_context, processor_key, e)\n result.save()\n\n compendia_organism = _get_organisms(job_context['samples']).first()\n\n # Create the resulting archive\n smashing_utils.write_non_data_files(job_context)\n final_zip_base = job_context['job_dir'] + compendia_organism.name + \"_rnaseq_compendia\"\n shutil.copy(\"/home/user/README_QUANT.md\", job_context[\"output_dir\"] + \"/README.md\")\n\n archive_path = shutil.make_archive(final_zip_base, 'zip', job_context[\"output_dir\"])\n compendia_version = _get_next_compendia_version(compendia_organism)\n\n archive_computed_file = ComputedFile()\n\n archive_computed_file.absolute_file_path = archive_path\n archive_computed_file.filename = archive_path.split('/')[-1]\n archive_computed_file.calculate_sha1()\n archive_computed_file.calculate_size()\n archive_computed_file.is_smashable = False\n archive_computed_file.is_qn_target = False\n archive_computed_file.result = result\n archive_computed_file.is_compendia = True\n archive_computed_file.quant_sf_only = True\n archive_computed_file.compendia_organism = compendia_organism\n archive_computed_file.compendia_version = compendia_version\n archive_computed_file.save()\n\n logger.info(\"Quantpendia created!\",\n archive_path=archive_path,\n organism_name=compendia_organism.name)\n\n # Upload the result to S3\n timestamp = str(int(time.time()))\n s3_key = compendia_organism.name + \"_\" + str(compendia_version) + \"_\" + timestamp + \".zip\"\n archive_computed_file.sync_to_s3(S3_BUCKET_NAME, s3_key)\n\n job_context['result'] = result\n job_context['success'] = True\n\n return job_context\n\n\ndef remove_job_dir(job_context: Dict):\n \"\"\" remove the directory when the job is successful. At this point\n the quantpendia was already zipped and uploaded. 
\"\"\"\n # don't remove the files when running locally or for tests\n if settings.RUNNING_IN_CLOUD:\n shutil.rmtree(job_context[\"job_dir\"], ignore_errors=True)\n return job_context\n\ndef make_dirs(job_context: Dict):\n dataset_id = str(job_context[\"dataset\"].pk)\n job_context[\"job_dir\"] = \"/home/user/data_store/smashed/\" + dataset_id + \"/\"\n os.makedirs(job_context[\"job_dir\"], exist_ok=True)\n job_context[\"output_dir\"] = job_context[\"job_dir\"] + \"output/\"\n os.makedirs(job_context[\"output_dir\"], exist_ok=True)\n return job_context\n\ndef get_process_stats():\n BYTES_IN_GB = 1024 * 1024 * 1024\n process = psutil.Process(os.getpid())\n ram_in_GB = process.memory_info().rss / BYTES_IN_GB\n return { 'total_cpu': psutil.cpu_percent(), 'process_ram': ram_in_GB }\n\n\ndef _get_organisms(aggregated_samples: Dict[str, Sample]) -> List[Organism]:\n organisms = set()\n for key, samples in aggregated_samples.items():\n organism_ids = samples.values_list('organism__id', flat=True).distinct()\n organisms.update(organism_ids)\n\n return Organism.objects.filter(id__in=list(organisms))\n\n\ndef _get_next_compendia_version(organism: Organism) -> int:\n last_compendia = ComputedFile.objects\\\n .filter(is_compendia=True, quant_sf_only=True, compendia_organism=organism)\\\n .order_by('-compendia_version').first()\n\n if last_compendia:\n return last_compendia.compendia_version + 1\n\n # otherwise this is the first compendia that we are generating\n return 1\n", "path": "workers/data_refinery_workers/processors/create_quantpendia.py"}]} | 2,161 | 294 |
gh_patches_debug_190 | rasdani/github-patches | git_diff | facebookresearch__fairseq-62 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
installation from source requires installing cffi
This is a very minor documentation issue
note: using python3/pip3, as there is a comment about requiring Python 3 for fairseq-py
not using anaconda; I have had issues with package consistency, so I avoid it
fairseq-py installed with
git clone https://github.com/facebookresearch/fairseq-py.git
sudo pip3 install -r requirements.txt
levinth@zt-gpu-lin-1:~/fairseq-py$ sudo python3 setup.py build
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/__init__.py", line 12, in <module>
import cffi
ImportError: No module named 'cffi'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "setup.py", line 13, in <module>
from torch.utils.ffi import create_extension
File "/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/__init__.py", line 14, in <module>
raise ImportError("torch.utils.ffi requires the cffi package")
ImportError: torch.utils.ffi requires the cffi package
levinth@zt-gpu-lin-1:~/fairseq-py$ pip3 install cffi
and then the build worked
likely can be fixed by adding cffi to requirements.txt
--- END ISSUE ---
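Editorial aside (not part of the original issue): the traceback shows `torch.utils.ffi` raising ImportError when cffi is absent, so beyond adding `cffi` to requirements.txt a build script can also fail early with a clearer hint. The sketch below is illustrative only and is not fairseq's actual setup.py.

```python
# Illustrative pre-flight check before importing torch's FFI build helper.
import sys

try:
    import cffi  # noqa: F401 -- torch.utils.ffi needs this at build time
except ImportError:
    sys.exit("Building the extensions requires the 'cffi' package: pip3 install cffi")

from torch.utils.ffi import create_extension  # safe to import only after the check
```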
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fairseq/progress_bar.py`
Content:
```
1 # Copyright (c) 2017-present, Facebook, Inc.
2 # All rights reserved.
3 #
4 # This source code is licensed under the license found in the LICENSE file in
5 # the root directory of this source tree. An additional grant of patent rights
6 # can be found in the PATENTS file in the same directory.
7 #
8
9 """
10 Wrapper around various loggers and progress bars (e.g., tqdm).
11 """
12
13 from collections import OrderedDict
14 import json
15 from numbers import Number
16 import sys
17
18 from tqdm import tqdm
19
20 from fairseq.meters import AverageMeter
21
22
23 class progress_bar(object):
24 """Abstract class for progress bars."""
25 def __init__(self, iterable, epoch=None, prefix=None):
26 self.iterable = iterable
27 self.epoch = epoch
28 self.prefix = ''
29 if epoch is not None:
30 self.prefix += '| epoch {:03d}'.format(epoch)
31 if prefix is not None:
32 self.prefix += ' | {}'.format(prefix)
33
34 def __enter__(self):
35 return self
36
37 def __exit__(self, *exc):
38 return False
39
40 def __iter__(self):
41 raise NotImplementedError
42
43 def log(self, stats):
44 """Log intermediate stats according to log_interval."""
45 raise NotImplementedError
46
47 def print(self, stats):
48 """Print end-of-epoch stats."""
49 raise NotImplementedError
50
51 def _str_commas(self, stats):
52 return ', '.join(key + '=' + stats[key].strip()
53 for key in stats.keys())
54
55 def _str_pipes(self, stats):
56 return ' | '.join(key + ' ' + stats[key].strip()
57 for key in stats.keys())
58
59 def _format_stats(self, stats):
60 postfix = OrderedDict(stats)
61 # Preprocess stats according to datatype
62 for key in postfix.keys():
63 # Number: limit the length of the string
64 if isinstance(postfix[key], Number):
65 postfix[key] = '{:g}'.format(postfix[key])
66 # Meter: display both current and average value
67 elif isinstance(postfix[key], AverageMeter):
68 postfix[key] = '{:.2f} ({:.2f})'.format(
69 postfix[key].val, postfix[key].avg)
70 # Else for any other type, try to get the string conversion
71 elif not isinstance(postfix[key], str):
72 postfix[key] = str(postfix[key])
73 # Else if it's a string, don't need to preprocess anything
74 return postfix
75
76
77 class json_progress_bar(progress_bar):
78 """Log output in JSON format."""
79
80 def __init__(self, iterable, epoch=None, prefix=None, log_interval=1000):
81 super().__init__(iterable, epoch, prefix)
82 self.log_interval = log_interval
83 self.stats = None
84
85 def __iter__(self):
86 size = float(len(self.iterable))
87 for i, obj in enumerate(self.iterable):
88 yield obj
89 if self.stats is not None and i > 0 and \
90 self.log_interval is not None and i % self.log_interval == 0:
91 update = self.epoch + float(i / size) if self.epoch is not None else None
92 stats = self._format_stats(self.stats, epoch=self.epoch, update=update)
93 print('sweep_log: ' + json.dumps(stats), flush=True)
94
95 def log(self, stats):
96 """Log intermediate stats according to log_interval."""
97 self.stats = stats
98
99 def print(self, stats):
100 """Print end-of-epoch stats."""
101 stats = self._format_stats(self.stats, epoch=self.epoch)
102 print("sweep_log: " + json.dumps(stats), flush=True)
103
104 def _format_stats(self, stats, epoch=None, update=None):
105 postfix = OrderedDict()
106 if epoch is not None:
107 postfix['epoch'] = epoch
108 if update is not None:
109 postfix['update'] = update
110 # Preprocess stats according to datatype
111 for key in stats.keys():
112 # Meter: display both current and average value
113 if isinstance(stats[key], AverageMeter):
114 postfix[key] = stats[key].val
115 postfix[key + '_avg'] = stats[key].avg
116 else:
117 postfix[key] = stats[key]
118 return postfix
119
120
121 class noop_progress_bar(progress_bar):
122 """No logging."""
123
124 def __init__(self, iterable, epoch=None, prefix=None):
125 super().__init__(iterable, epoch, prefix)
126
127 def __iter__(self):
128 for obj in self.iterable:
129 yield obj
130
131 def log(self, stats):
132 """Log intermediate stats according to log_interval."""
133 pass
134
135 def print(self, stats):
136 """Print end-of-epoch stats."""
137 pass
138
139
140 class simple_progress_bar(progress_bar):
141 """A minimal logger for non-TTY environments."""
142
143 def __init__(self, iterable, epoch=None, prefix=None, log_interval=1000):
144 super().__init__(iterable, epoch, prefix)
145 self.log_interval = log_interval
146 self.stats = None
147
148 def __iter__(self):
149 size = len(self.iterable)
150 for i, obj in enumerate(self.iterable):
151 yield obj
152 if self.stats is not None and i > 0 and \
153 self.log_interval is not None and i % self.log_interval == 0:
154 postfix = self._str_commas(self.stats)
155 print('{}: {:5d} / {:d} {}'.format(self.prefix, i, size, postfix),
156 flush=True)
157
158 def log(self, stats):
159 """Log intermediate stats according to log_interval."""
160 self.stats = self._format_stats(stats)
161
162 def print(self, stats):
163 """Print end-of-epoch stats."""
164 postfix = self._str_pipes(self._format_stats(stats))
165 print('{} | {}'.format(self.prefix, postfix), flush=True)
166
167
168 class tqdm_progress_bar(progress_bar):
169 """Log to tqdm."""
170
171 def __init__(self, iterable, epoch=None, prefix=None):
172 super().__init__(iterable, epoch, prefix)
173 self.tqdm = tqdm(iterable, self.prefix, leave=False)
174
175 def __iter__(self):
176 return iter(self.tqdm)
177
178 def log(self, stats):
179 """Log intermediate stats according to log_interval."""
180 self.tqdm.set_postfix(self._format_stats(stats), refresh=False)
181
182 def print(self, stats):
183 """Print end-of-epoch stats."""
184 postfix = self._str_pipes(self._format_stats(stats))
185 self.tqdm.write('{} | {}'.format(self.tqdm.desc, postfix))
186
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/fairseq/progress_bar.py b/fairseq/progress_bar.py
--- a/fairseq/progress_bar.py
+++ b/fairseq/progress_bar.py
@@ -13,7 +13,6 @@
from collections import OrderedDict
import json
from numbers import Number
-import sys
from tqdm import tqdm
| {"golden_diff": "diff --git a/fairseq/progress_bar.py b/fairseq/progress_bar.py\n--- a/fairseq/progress_bar.py\n+++ b/fairseq/progress_bar.py\n@@ -13,7 +13,6 @@\n from collections import OrderedDict\n import json\n from numbers import Number\n-import sys\n \n from tqdm import tqdm\n", "issue": "installation from source requires installing cffi\nThis is a very minor documentation issue\r\nnote: using python3/pip3 as there is a comment about requiring python 3 for fairseq-py\r\nnot using anaconda..I have had issues with package consistency..so I avoid it\r\nfairseq-py installed with \r\ngit clone https://github.com/facebookresearch/fairseq-py.git\r\nsudo pip3 install -r requirements.txt \r\n\r\nlevinth@zt-gpu-lin-1:~/fairseq-py$ sudo python3 setup.py build\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/__init__.py\", line 12, in <module>\r\n import cffi\r\nImportError: No module named 'cffi'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 13, in <module>\r\n from torch.utils.ffi import create_extension\r\n File \"/usr/local/lib/python3.5/dist-packages/torch/utils/ffi/__init__.py\", line 14, in <module>\r\n raise ImportError(\"torch.utils.ffi requires the cffi package\")\r\nImportError: torch.utils.ffi requires the cffi package\r\nlevinth@zt-gpu-lin-1:~/fairseq-py$ pip3 install cffi\r\n\r\nand then the build worked\r\nlikely can be fixed by adding cffii to requirements.txt\n", "before_files": [{"content": "# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n#\n# This source code is licensed under the license found in the LICENSE file in\n# the root directory of this source tree. An additional grant of patent rights\n# can be found in the PATENTS file in the same directory.\n#\n\n\"\"\"\nWrapper around various loggers and progress bars (e.g., tqdm).\n\"\"\"\n\nfrom collections import OrderedDict\nimport json\nfrom numbers import Number\nimport sys\n\nfrom tqdm import tqdm\n\nfrom fairseq.meters import AverageMeter\n\n\nclass progress_bar(object):\n \"\"\"Abstract class for progress bars.\"\"\"\n def __init__(self, iterable, epoch=None, prefix=None):\n self.iterable = iterable\n self.epoch = epoch\n self.prefix = ''\n if epoch is not None:\n self.prefix += '| epoch {:03d}'.format(epoch)\n if prefix is not None:\n self.prefix += ' | {}'.format(prefix)\n\n def __enter__(self):\n return self\n\n def __exit__(self, *exc):\n return False\n\n def __iter__(self):\n raise NotImplementedError\n\n def log(self, stats):\n \"\"\"Log intermediate stats according to log_interval.\"\"\"\n raise NotImplementedError\n\n def print(self, stats):\n \"\"\"Print end-of-epoch stats.\"\"\"\n raise NotImplementedError\n\n def _str_commas(self, stats):\n return ', '.join(key + '=' + stats[key].strip()\n for key in stats.keys())\n\n def _str_pipes(self, stats):\n return ' | '.join(key + ' ' + stats[key].strip()\n for key in stats.keys())\n\n def _format_stats(self, stats):\n postfix = OrderedDict(stats)\n # Preprocess stats according to datatype\n for key in postfix.keys():\n # Number: limit the length of the string\n if isinstance(postfix[key], Number):\n postfix[key] = '{:g}'.format(postfix[key])\n # Meter: display both current and average value\n elif isinstance(postfix[key], AverageMeter):\n postfix[key] = '{:.2f} ({:.2f})'.format(\n postfix[key].val, postfix[key].avg)\n # Else for any other type, try to get the string conversion\n elif not 
isinstance(postfix[key], str):\n postfix[key] = str(postfix[key])\n # Else if it's a string, don't need to preprocess anything\n return postfix\n\n\nclass json_progress_bar(progress_bar):\n \"\"\"Log output in JSON format.\"\"\"\n\n def __init__(self, iterable, epoch=None, prefix=None, log_interval=1000):\n super().__init__(iterable, epoch, prefix)\n self.log_interval = log_interval\n self.stats = None\n\n def __iter__(self):\n size = float(len(self.iterable))\n for i, obj in enumerate(self.iterable):\n yield obj\n if self.stats is not None and i > 0 and \\\n self.log_interval is not None and i % self.log_interval == 0:\n update = self.epoch + float(i / size) if self.epoch is not None else None\n stats = self._format_stats(self.stats, epoch=self.epoch, update=update)\n print('sweep_log: ' + json.dumps(stats), flush=True)\n\n def log(self, stats):\n \"\"\"Log intermediate stats according to log_interval.\"\"\"\n self.stats = stats\n\n def print(self, stats):\n \"\"\"Print end-of-epoch stats.\"\"\"\n stats = self._format_stats(self.stats, epoch=self.epoch)\n print(\"sweep_log: \" + json.dumps(stats), flush=True)\n\n def _format_stats(self, stats, epoch=None, update=None):\n postfix = OrderedDict()\n if epoch is not None:\n postfix['epoch'] = epoch\n if update is not None:\n postfix['update'] = update\n # Preprocess stats according to datatype\n for key in stats.keys():\n # Meter: display both current and average value\n if isinstance(stats[key], AverageMeter):\n postfix[key] = stats[key].val\n postfix[key + '_avg'] = stats[key].avg\n else:\n postfix[key] = stats[key]\n return postfix\n\n\nclass noop_progress_bar(progress_bar):\n \"\"\"No logging.\"\"\"\n\n def __init__(self, iterable, epoch=None, prefix=None):\n super().__init__(iterable, epoch, prefix)\n\n def __iter__(self):\n for obj in self.iterable:\n yield obj\n\n def log(self, stats):\n \"\"\"Log intermediate stats according to log_interval.\"\"\"\n pass\n\n def print(self, stats):\n \"\"\"Print end-of-epoch stats.\"\"\"\n pass\n\n\nclass simple_progress_bar(progress_bar):\n \"\"\"A minimal logger for non-TTY environments.\"\"\"\n\n def __init__(self, iterable, epoch=None, prefix=None, log_interval=1000):\n super().__init__(iterable, epoch, prefix)\n self.log_interval = log_interval\n self.stats = None\n\n def __iter__(self):\n size = len(self.iterable)\n for i, obj in enumerate(self.iterable):\n yield obj\n if self.stats is not None and i > 0 and \\\n self.log_interval is not None and i % self.log_interval == 0:\n postfix = self._str_commas(self.stats)\n print('{}: {:5d} / {:d} {}'.format(self.prefix, i, size, postfix),\n flush=True)\n\n def log(self, stats):\n \"\"\"Log intermediate stats according to log_interval.\"\"\"\n self.stats = self._format_stats(stats)\n\n def print(self, stats):\n \"\"\"Print end-of-epoch stats.\"\"\"\n postfix = self._str_pipes(self._format_stats(stats))\n print('{} | {}'.format(self.prefix, postfix), flush=True)\n\n\nclass tqdm_progress_bar(progress_bar):\n \"\"\"Log to tqdm.\"\"\"\n\n def __init__(self, iterable, epoch=None, prefix=None):\n super().__init__(iterable, epoch, prefix)\n self.tqdm = tqdm(iterable, self.prefix, leave=False)\n\n def __iter__(self):\n return iter(self.tqdm)\n\n def log(self, stats):\n \"\"\"Log intermediate stats according to log_interval.\"\"\"\n self.tqdm.set_postfix(self._format_stats(stats), refresh=False)\n\n def print(self, stats):\n \"\"\"Print end-of-epoch stats.\"\"\"\n postfix = self._str_pipes(self._format_stats(stats))\n self.tqdm.write('{} | 
{}'.format(self.tqdm.desc, postfix))\n", "path": "fairseq/progress_bar.py"}], "after_files": [{"content": "# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n#\n# This source code is licensed under the license found in the LICENSE file in\n# the root directory of this source tree. An additional grant of patent rights\n# can be found in the PATENTS file in the same directory.\n#\n\n\"\"\"\nWrapper around various loggers and progress bars (e.g., tqdm).\n\"\"\"\n\nfrom collections import OrderedDict\nimport json\nfrom numbers import Number\n\nfrom tqdm import tqdm\n\nfrom fairseq.meters import AverageMeter\n\n\nclass progress_bar(object):\n \"\"\"Abstract class for progress bars.\"\"\"\n def __init__(self, iterable, epoch=None, prefix=None):\n self.iterable = iterable\n self.epoch = epoch\n self.prefix = ''\n if epoch is not None:\n self.prefix += '| epoch {:03d}'.format(epoch)\n if prefix is not None:\n self.prefix += ' | {}'.format(prefix)\n\n def __enter__(self):\n return self\n\n def __exit__(self, *exc):\n return False\n\n def __iter__(self):\n raise NotImplementedError\n\n def log(self, stats):\n \"\"\"Log intermediate stats according to log_interval.\"\"\"\n raise NotImplementedError\n\n def print(self, stats):\n \"\"\"Print end-of-epoch stats.\"\"\"\n raise NotImplementedError\n\n def _str_commas(self, stats):\n return ', '.join(key + '=' + stats[key].strip()\n for key in stats.keys())\n\n def _str_pipes(self, stats):\n return ' | '.join(key + ' ' + stats[key].strip()\n for key in stats.keys())\n\n def _format_stats(self, stats):\n postfix = OrderedDict(stats)\n # Preprocess stats according to datatype\n for key in postfix.keys():\n # Number: limit the length of the string\n if isinstance(postfix[key], Number):\n postfix[key] = '{:g}'.format(postfix[key])\n # Meter: display both current and average value\n elif isinstance(postfix[key], AverageMeter):\n postfix[key] = '{:.2f} ({:.2f})'.format(\n postfix[key].val, postfix[key].avg)\n # Else for any other type, try to get the string conversion\n elif not isinstance(postfix[key], str):\n postfix[key] = str(postfix[key])\n # Else if it's a string, don't need to preprocess anything\n return postfix\n\n\nclass json_progress_bar(progress_bar):\n \"\"\"Log output in JSON format.\"\"\"\n\n def __init__(self, iterable, epoch=None, prefix=None, log_interval=1000):\n super().__init__(iterable, epoch, prefix)\n self.log_interval = log_interval\n self.stats = None\n\n def __iter__(self):\n size = float(len(self.iterable))\n for i, obj in enumerate(self.iterable):\n yield obj\n if self.stats is not None and i > 0 and \\\n self.log_interval is not None and i % self.log_interval == 0:\n update = self.epoch + float(i / size) if self.epoch is not None else None\n stats = self._format_stats(self.stats, epoch=self.epoch, update=update)\n print('sweep_log: ' + json.dumps(stats), flush=True)\n\n def log(self, stats):\n \"\"\"Log intermediate stats according to log_interval.\"\"\"\n self.stats = stats\n\n def print(self, stats):\n \"\"\"Print end-of-epoch stats.\"\"\"\n stats = self._format_stats(self.stats, epoch=self.epoch)\n print(\"sweep_log: \" + json.dumps(stats), flush=True)\n\n def _format_stats(self, stats, epoch=None, update=None):\n postfix = OrderedDict()\n if epoch is not None:\n postfix['epoch'] = epoch\n if update is not None:\n postfix['update'] = update\n # Preprocess stats according to datatype\n for key in stats.keys():\n # Meter: display both current and average value\n if isinstance(stats[key], AverageMeter):\n 
postfix[key] = stats[key].val\n postfix[key + '_avg'] = stats[key].avg\n else:\n postfix[key] = stats[key]\n return postfix\n\n\nclass noop_progress_bar(progress_bar):\n \"\"\"No logging.\"\"\"\n\n def __init__(self, iterable, epoch=None, prefix=None):\n super().__init__(iterable, epoch, prefix)\n\n def __iter__(self):\n for obj in self.iterable:\n yield obj\n\n def log(self, stats):\n \"\"\"Log intermediate stats according to log_interval.\"\"\"\n pass\n\n def print(self, stats):\n \"\"\"Print end-of-epoch stats.\"\"\"\n pass\n\n\nclass simple_progress_bar(progress_bar):\n \"\"\"A minimal logger for non-TTY environments.\"\"\"\n\n def __init__(self, iterable, epoch=None, prefix=None, log_interval=1000):\n super().__init__(iterable, epoch, prefix)\n self.log_interval = log_interval\n self.stats = None\n\n def __iter__(self):\n size = len(self.iterable)\n for i, obj in enumerate(self.iterable):\n yield obj\n if self.stats is not None and i > 0 and \\\n self.log_interval is not None and i % self.log_interval == 0:\n postfix = self._str_commas(self.stats)\n print('{}: {:5d} / {:d} {}'.format(self.prefix, i, size, postfix),\n flush=True)\n\n def log(self, stats):\n \"\"\"Log intermediate stats according to log_interval.\"\"\"\n self.stats = self._format_stats(stats)\n\n def print(self, stats):\n \"\"\"Print end-of-epoch stats.\"\"\"\n postfix = self._str_pipes(self._format_stats(stats))\n print('{} | {}'.format(self.prefix, postfix), flush=True)\n\n\nclass tqdm_progress_bar(progress_bar):\n \"\"\"Log to tqdm.\"\"\"\n\n def __init__(self, iterable, epoch=None, prefix=None):\n super().__init__(iterable, epoch, prefix)\n self.tqdm = tqdm(iterable, self.prefix, leave=False)\n\n def __iter__(self):\n return iter(self.tqdm)\n\n def log(self, stats):\n \"\"\"Log intermediate stats according to log_interval.\"\"\"\n self.tqdm.set_postfix(self._format_stats(stats), refresh=False)\n\n def print(self, stats):\n \"\"\"Print end-of-epoch stats.\"\"\"\n postfix = self._str_pipes(self._format_stats(stats))\n self.tqdm.write('{} | {}'.format(self.tqdm.desc, postfix))\n", "path": "fairseq/progress_bar.py"}]} | 2,431 | 73 |
gh_patches_debug_15268 | rasdani/github-patches | git_diff | scikit-hep__pyhf-1273 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add commas around constrained maximal likelihood in docstring for clarity
# Description
In PR #905 the docstring for [`pyhf.infer.mle.fixed_poi_fit`](https://scikit-hep.org/pyhf/_generated/pyhf.infer.mle.fixed_poi_fit.html) was amended, but the lines
https://github.com/scikit-hep/pyhf/blob/fd7930cce36cbc3a2d0ee1828f060d7382129579/src/pyhf/infer/mle.py#L134-L135
are missing commas around the likelihood, making it difficult to read

It should read
```
,:math:`L\left(\mu, \hat{\hat{\boldsymbol{\theta}}}\right)`, in the profile
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyhf/infer/mle.py`
Content:
```
1 """Module for Maximum Likelihood Estimation."""
2 from .. import get_backend
3 from ..exceptions import UnspecifiedPOI
4
5
6 def twice_nll(pars, data, pdf):
7 r"""
8 Two times the negative log-likelihood of the model parameters, :math:`\left(\mu, \boldsymbol{\theta}\right)`, given the observed data.
9 It is used in the calculation of the test statistic, :math:`t_{\mu}`, as defiend in Equation (8) in :xref:`arXiv:1007.1727`
10
11 .. math::
12
13 t_{\mu} = -2\ln\lambda\left(\mu\right)
14
15 where :math:`\lambda\left(\mu\right)` is the profile likelihood ratio as defined in Equation (7)
16
17 .. math::
18
19 \lambda\left(\mu\right) = \frac{L\left(\mu, \hat{\hat{\boldsymbol{\theta}}}\right)}{L\left(\hat{\mu}, \hat{\boldsymbol{\theta}}\right)}\,.
20
21 It serves as the objective function to minimize in :func:`~pyhf.infer.mle.fit`
22 and :func:`~pyhf.infer.mle.fixed_poi_fit`.
23
24 Example:
25 >>> import pyhf
26 >>> pyhf.set_backend("numpy")
27 >>> model = pyhf.simplemodels.hepdata_like(
28 ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]
29 ... )
30 >>> observations = [51, 48]
31 >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)
32 >>> parameters = model.config.suggested_init() # nominal parameters
33 >>> twice_nll = pyhf.infer.mle.twice_nll(parameters, data, model)
34 >>> twice_nll
35 array([30.77525435])
36 >>> -2 * model.logpdf(parameters, data) == twice_nll
37 array([ True])
38
39 Args:
40 pars (:obj:`tensor`): The parameters of the HistFactory model
41 data (:obj:`tensor`): The data to be considered
42 pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json
43
44 Returns:
45 Tensor: Two times the negative log-likelihood, :math:`-2\ln L\left(\mu, \boldsymbol{\theta}\right)`
46 """
47 return -2 * pdf.logpdf(pars, data)
48
49
50 def _validate_fit_inputs(init_pars, par_bounds, fixed_params):
51 for par_idx, (value, bound) in enumerate(zip(init_pars, par_bounds)):
52 if not (bound[0] <= value <= bound[1]):
53 raise ValueError(
54 f"fit initialization parameter (index: {par_idx}, value: {value}) lies outside of its bounds: {bound}"
55 + "\nTo correct this adjust the initialization parameter values in the model spec or those given"
56 + "\nas arguments to pyhf.infer.fit. If this value is intended, adjust the range of the parameter"
57 + "\nbounds."
58 )
59
60
61 def fit(data, pdf, init_pars=None, par_bounds=None, fixed_params=None, **kwargs):
62 r"""
63 Run a maximum likelihood fit.
64 This is done by minimizing the objective function :func:`~pyhf.infer.mle.twice_nll`
65 of the model parameters given the observed data.
66 This is used to produce the maximal likelihood :math:`L\left(\hat{\mu}, \hat{\boldsymbol{\theta}}\right)`
67 in the profile likelihood ratio in Equation (7) in :xref:`arXiv:1007.1727`
68
69 .. math::
70
71 \lambda\left(\mu\right) = \frac{L\left(\mu, \hat{\hat{\boldsymbol{\theta}}}\right)}{L\left(\hat{\mu}, \hat{\boldsymbol{\theta}}\right)}
72
73
74 .. note::
75
76 :func:`twice_nll` is the objective function given to the optimizer and
77 is returned evaluated at the best fit model parameters when the optional
78 kwarg ``return_fitted_val`` is ``True``.
79
80 Example:
81 >>> import pyhf
82 >>> pyhf.set_backend("numpy")
83 >>> model = pyhf.simplemodels.hepdata_like(
84 ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]
85 ... )
86 >>> observations = [51, 48]
87 >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)
88 >>> bestfit_pars, twice_nll = pyhf.infer.mle.fit(data, model, return_fitted_val=True)
89 >>> bestfit_pars
90 array([0. , 1.0030512 , 0.96266961])
91 >>> twice_nll
92 array(24.98393521)
93 >>> -2 * model.logpdf(bestfit_pars, data) == twice_nll
94 array([ True])
95
96 Args:
97 data (:obj:`tensor`): The data
98 pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json
99 init_pars (:obj:`list`): Values to initialize the model parameters at for the fit
100 par_bounds (:obj:`list` of :obj:`list`\s or :obj:`tuple`\s): The extrema of values the model parameters are allowed to reach in the fit
101 fixed_params (:obj:`list`): Parameters to be held constant in the fit.
102 kwargs: Keyword arguments passed through to the optimizer API
103
104 Returns:
105 See optimizer API
106
107 """
108 _, opt = get_backend()
109 init_pars = init_pars or pdf.config.suggested_init()
110 par_bounds = par_bounds or pdf.config.suggested_bounds()
111 fixed_params = fixed_params or pdf.config.suggested_fixed()
112
113 _validate_fit_inputs(init_pars, par_bounds, fixed_params)
114
115 # get fixed vals from the model
116 fixed_vals = [
117 (index, init)
118 for index, (init, is_fixed) in enumerate(zip(init_pars, fixed_params))
119 if is_fixed
120 ]
121
122 return opt.minimize(
123 twice_nll, data, pdf, init_pars, par_bounds, fixed_vals, **kwargs
124 )
125
126
127 def fixed_poi_fit(
128 poi_val, data, pdf, init_pars=None, par_bounds=None, fixed_params=None, **kwargs
129 ):
130 r"""
131 Run a maximum likelihood fit with the POI value fixed.
132 This is done by minimizing the objective function of :func:`~pyhf.infer.mle.twice_nll`
133 of the model parameters given the observed data, for a given fixed value of :math:`\mu`.
134 This is used to produce the constrained maximal likelihood for the given :math:`\mu`
135 :math:`L\left(\mu, \hat{\hat{\boldsymbol{\theta}}}\right)` in the profile
136 likelihood ratio in Equation (7) in :xref:`arXiv:1007.1727`
137
138 .. math::
139
140 \lambda\left(\mu\right) = \frac{L\left(\mu, \hat{\hat{\boldsymbol{\theta}}}\right)}{L\left(\hat{\mu}, \hat{\boldsymbol{\theta}}\right)}
141
142 .. note::
143
144 :func:`twice_nll` is the objective function given to the optimizer and
145 is returned evaluated at the best fit model parameters when the optional
146 kwarg ``return_fitted_val`` is ``True``.
147
148 Example:
149 >>> import pyhf
150 >>> pyhf.set_backend("numpy")
151 >>> model = pyhf.simplemodels.hepdata_like(
152 ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]
153 ... )
154 >>> observations = [51, 48]
155 >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)
156 >>> test_poi = 1.0
157 >>> bestfit_pars, twice_nll = pyhf.infer.mle.fixed_poi_fit(
158 ... test_poi, data, model, return_fitted_val=True
159 ... )
160 >>> bestfit_pars
161 array([1. , 0.97224597, 0.87553894])
162 >>> twice_nll
163 array(28.92218013)
164 >>> -2 * model.logpdf(bestfit_pars, data) == twice_nll
165 array([ True])
166
167 Args:
168 data: The data
169 pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json
170 init_pars (:obj:`list`): Values to initialize the model parameters at for the fit
171 par_bounds (:obj:`list` of :obj:`list`\s or :obj:`tuple`\s): The extrema of values the model parameters are allowed to reach in the fit
172 fixed_params (:obj:`list`): Parameters to be held constant in the fit.
173 kwargs: Keyword arguments passed through to the optimizer API
174
175 Returns:
176 See optimizer API
177
178 """
179 if pdf.config.poi_index is None:
180 raise UnspecifiedPOI(
181 'No POI is defined. A POI is required to fit with a fixed POI.'
182 )
183
184 init_pars = [*(init_pars or pdf.config.suggested_init())]
185 fixed_params = [*(fixed_params or pdf.config.suggested_fixed())]
186
187 init_pars[pdf.config.poi_index] = poi_val
188 fixed_params[pdf.config.poi_index] = True
189
190 return fit(data, pdf, init_pars, par_bounds, fixed_params, **kwargs)
191
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pyhf/infer/mle.py b/src/pyhf/infer/mle.py
--- a/src/pyhf/infer/mle.py
+++ b/src/pyhf/infer/mle.py
@@ -131,8 +131,8 @@
Run a maximum likelihood fit with the POI value fixed.
This is done by minimizing the objective function of :func:`~pyhf.infer.mle.twice_nll`
of the model parameters given the observed data, for a given fixed value of :math:`\mu`.
- This is used to produce the constrained maximal likelihood for the given :math:`\mu`
- :math:`L\left(\mu, \hat{\hat{\boldsymbol{\theta}}}\right)` in the profile
+ This is used to produce the constrained maximal likelihood for the given :math:`\mu`,
+ :math:`L\left(\mu, \hat{\hat{\boldsymbol{\theta}}}\right)`, in the profile
likelihood ratio in Equation (7) in :xref:`arXiv:1007.1727`
.. math::
| {"golden_diff": "diff --git a/src/pyhf/infer/mle.py b/src/pyhf/infer/mle.py\n--- a/src/pyhf/infer/mle.py\n+++ b/src/pyhf/infer/mle.py\n@@ -131,8 +131,8 @@\n Run a maximum likelihood fit with the POI value fixed.\n This is done by minimizing the objective function of :func:`~pyhf.infer.mle.twice_nll`\n of the model parameters given the observed data, for a given fixed value of :math:`\\mu`.\n- This is used to produce the constrained maximal likelihood for the given :math:`\\mu`\n- :math:`L\\left(\\mu, \\hat{\\hat{\\boldsymbol{\\theta}}}\\right)` in the profile\n+ This is used to produce the constrained maximal likelihood for the given :math:`\\mu`,\n+ :math:`L\\left(\\mu, \\hat{\\hat{\\boldsymbol{\\theta}}}\\right)`, in the profile\n likelihood ratio in Equation (7) in :xref:`arXiv:1007.1727`\n \n .. math::\n", "issue": "Add commas around constrained maximal likelihood in docstring for clarity\n# Description\r\n\r\nIn PR #905 the docstring for [`pyhf.infer.mle.fixed_poi_fit`](https://scikit-hep.org/pyhf/_generated/pyhf.infer.mle.fixed_poi_fit.html) was amended, but the lines\r\n\r\nhttps://github.com/scikit-hep/pyhf/blob/fd7930cce36cbc3a2d0ee1828f060d7382129579/src/pyhf/infer/mle.py#L134-L135\r\n\r\nare missing commas around the likelihood, making it difficult to read\r\n\r\n\r\n\r\nIt should read\r\n\r\n```\r\n,:math:`L\\left(\\mu, \\hat{\\hat{\\boldsymbol{\\theta}}}\\right)`, in the profile \r\n```\r\n\nAdd commas around constrained maximal likelihood in docstring for clarity\n# Description\r\n\r\nIn PR #905 the docstring for [`pyhf.infer.mle.fixed_poi_fit`](https://scikit-hep.org/pyhf/_generated/pyhf.infer.mle.fixed_poi_fit.html) was amended, but the lines\r\n\r\nhttps://github.com/scikit-hep/pyhf/blob/fd7930cce36cbc3a2d0ee1828f060d7382129579/src/pyhf/infer/mle.py#L134-L135\r\n\r\nare missing commas around the likelihood, making it difficult to read\r\n\r\n\r\n\r\nIt should read\r\n\r\n```\r\n,:math:`L\\left(\\mu, \\hat{\\hat{\\boldsymbol{\\theta}}}\\right)`, in the profile \r\n```\r\n\n", "before_files": [{"content": "\"\"\"Module for Maximum Likelihood Estimation.\"\"\"\nfrom .. import get_backend\nfrom ..exceptions import UnspecifiedPOI\n\n\ndef twice_nll(pars, data, pdf):\n r\"\"\"\n Two times the negative log-likelihood of the model parameters, :math:`\\left(\\mu, \\boldsymbol{\\theta}\\right)`, given the observed data.\n It is used in the calculation of the test statistic, :math:`t_{\\mu}`, as defiend in Equation (8) in :xref:`arXiv:1007.1727`\n\n .. math::\n\n t_{\\mu} = -2\\ln\\lambda\\left(\\mu\\right)\n\n where :math:`\\lambda\\left(\\mu\\right)` is the profile likelihood ratio as defined in Equation (7)\n\n .. math::\n\n \\lambda\\left(\\mu\\right) = \\frac{L\\left(\\mu, \\hat{\\hat{\\boldsymbol{\\theta}}}\\right)}{L\\left(\\hat{\\mu}, \\hat{\\boldsymbol{\\theta}}\\right)}\\,.\n\n It serves as the objective function to minimize in :func:`~pyhf.infer.mle.fit`\n and :func:`~pyhf.infer.mle.fixed_poi_fit`.\n\n Example:\n >>> import pyhf\n >>> pyhf.set_backend(\"numpy\")\n >>> model = pyhf.simplemodels.hepdata_like(\n ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n ... 
)\n >>> observations = [51, 48]\n >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)\n >>> parameters = model.config.suggested_init() # nominal parameters\n >>> twice_nll = pyhf.infer.mle.twice_nll(parameters, data, model)\n >>> twice_nll\n array([30.77525435])\n >>> -2 * model.logpdf(parameters, data) == twice_nll\n array([ True])\n\n Args:\n pars (:obj:`tensor`): The parameters of the HistFactory model\n data (:obj:`tensor`): The data to be considered\n pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json\n\n Returns:\n Tensor: Two times the negative log-likelihood, :math:`-2\\ln L\\left(\\mu, \\boldsymbol{\\theta}\\right)`\n \"\"\"\n return -2 * pdf.logpdf(pars, data)\n\n\ndef _validate_fit_inputs(init_pars, par_bounds, fixed_params):\n for par_idx, (value, bound) in enumerate(zip(init_pars, par_bounds)):\n if not (bound[0] <= value <= bound[1]):\n raise ValueError(\n f\"fit initialization parameter (index: {par_idx}, value: {value}) lies outside of its bounds: {bound}\"\n + \"\\nTo correct this adjust the initialization parameter values in the model spec or those given\"\n + \"\\nas arguments to pyhf.infer.fit. If this value is intended, adjust the range of the parameter\"\n + \"\\nbounds.\"\n )\n\n\ndef fit(data, pdf, init_pars=None, par_bounds=None, fixed_params=None, **kwargs):\n r\"\"\"\n Run a maximum likelihood fit.\n This is done by minimizing the objective function :func:`~pyhf.infer.mle.twice_nll`\n of the model parameters given the observed data.\n This is used to produce the maximal likelihood :math:`L\\left(\\hat{\\mu}, \\hat{\\boldsymbol{\\theta}}\\right)`\n in the profile likelihood ratio in Equation (7) in :xref:`arXiv:1007.1727`\n\n .. math::\n\n \\lambda\\left(\\mu\\right) = \\frac{L\\left(\\mu, \\hat{\\hat{\\boldsymbol{\\theta}}}\\right)}{L\\left(\\hat{\\mu}, \\hat{\\boldsymbol{\\theta}}\\right)}\n\n\n .. note::\n\n :func:`twice_nll` is the objective function given to the optimizer and\n is returned evaluated at the best fit model parameters when the optional\n kwarg ``return_fitted_val`` is ``True``.\n\n Example:\n >>> import pyhf\n >>> pyhf.set_backend(\"numpy\")\n >>> model = pyhf.simplemodels.hepdata_like(\n ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n ... )\n >>> observations = [51, 48]\n >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)\n >>> bestfit_pars, twice_nll = pyhf.infer.mle.fit(data, model, return_fitted_val=True)\n >>> bestfit_pars\n array([0. 
, 1.0030512 , 0.96266961])\n >>> twice_nll\n array(24.98393521)\n >>> -2 * model.logpdf(bestfit_pars, data) == twice_nll\n array([ True])\n\n Args:\n data (:obj:`tensor`): The data\n pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json\n init_pars (:obj:`list`): Values to initialize the model parameters at for the fit\n par_bounds (:obj:`list` of :obj:`list`\\s or :obj:`tuple`\\s): The extrema of values the model parameters are allowed to reach in the fit\n fixed_params (:obj:`list`): Parameters to be held constant in the fit.\n kwargs: Keyword arguments passed through to the optimizer API\n\n Returns:\n See optimizer API\n\n \"\"\"\n _, opt = get_backend()\n init_pars = init_pars or pdf.config.suggested_init()\n par_bounds = par_bounds or pdf.config.suggested_bounds()\n fixed_params = fixed_params or pdf.config.suggested_fixed()\n\n _validate_fit_inputs(init_pars, par_bounds, fixed_params)\n\n # get fixed vals from the model\n fixed_vals = [\n (index, init)\n for index, (init, is_fixed) in enumerate(zip(init_pars, fixed_params))\n if is_fixed\n ]\n\n return opt.minimize(\n twice_nll, data, pdf, init_pars, par_bounds, fixed_vals, **kwargs\n )\n\n\ndef fixed_poi_fit(\n poi_val, data, pdf, init_pars=None, par_bounds=None, fixed_params=None, **kwargs\n):\n r\"\"\"\n Run a maximum likelihood fit with the POI value fixed.\n This is done by minimizing the objective function of :func:`~pyhf.infer.mle.twice_nll`\n of the model parameters given the observed data, for a given fixed value of :math:`\\mu`.\n This is used to produce the constrained maximal likelihood for the given :math:`\\mu`\n :math:`L\\left(\\mu, \\hat{\\hat{\\boldsymbol{\\theta}}}\\right)` in the profile\n likelihood ratio in Equation (7) in :xref:`arXiv:1007.1727`\n\n .. math::\n\n \\lambda\\left(\\mu\\right) = \\frac{L\\left(\\mu, \\hat{\\hat{\\boldsymbol{\\theta}}}\\right)}{L\\left(\\hat{\\mu}, \\hat{\\boldsymbol{\\theta}}\\right)}\n\n .. note::\n\n :func:`twice_nll` is the objective function given to the optimizer and\n is returned evaluated at the best fit model parameters when the optional\n kwarg ``return_fitted_val`` is ``True``.\n\n Example:\n >>> import pyhf\n >>> pyhf.set_backend(\"numpy\")\n >>> model = pyhf.simplemodels.hepdata_like(\n ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n ... )\n >>> observations = [51, 48]\n >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)\n >>> test_poi = 1.0\n >>> bestfit_pars, twice_nll = pyhf.infer.mle.fixed_poi_fit(\n ... test_poi, data, model, return_fitted_val=True\n ... )\n >>> bestfit_pars\n array([1. , 0.97224597, 0.87553894])\n >>> twice_nll\n array(28.92218013)\n >>> -2 * model.logpdf(bestfit_pars, data) == twice_nll\n array([ True])\n\n Args:\n data: The data\n pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json\n init_pars (:obj:`list`): Values to initialize the model parameters at for the fit\n par_bounds (:obj:`list` of :obj:`list`\\s or :obj:`tuple`\\s): The extrema of values the model parameters are allowed to reach in the fit\n fixed_params (:obj:`list`): Parameters to be held constant in the fit.\n kwargs: Keyword arguments passed through to the optimizer API\n\n Returns:\n See optimizer API\n\n \"\"\"\n if pdf.config.poi_index is None:\n raise UnspecifiedPOI(\n 'No POI is defined. 
A POI is required to fit with a fixed POI.'\n )\n\n init_pars = [*(init_pars or pdf.config.suggested_init())]\n fixed_params = [*(fixed_params or pdf.config.suggested_fixed())]\n\n init_pars[pdf.config.poi_index] = poi_val\n fixed_params[pdf.config.poi_index] = True\n\n return fit(data, pdf, init_pars, par_bounds, fixed_params, **kwargs)\n", "path": "src/pyhf/infer/mle.py"}], "after_files": [{"content": "\"\"\"Module for Maximum Likelihood Estimation.\"\"\"\nfrom .. import get_backend\nfrom ..exceptions import UnspecifiedPOI\n\n\ndef twice_nll(pars, data, pdf):\n r\"\"\"\n Two times the negative log-likelihood of the model parameters, :math:`\\left(\\mu, \\boldsymbol{\\theta}\\right)`, given the observed data.\n It is used in the calculation of the test statistic, :math:`t_{\\mu}`, as defiend in Equation (8) in :xref:`arXiv:1007.1727`\n\n .. math::\n\n t_{\\mu} = -2\\ln\\lambda\\left(\\mu\\right)\n\n where :math:`\\lambda\\left(\\mu\\right)` is the profile likelihood ratio as defined in Equation (7)\n\n .. math::\n\n \\lambda\\left(\\mu\\right) = \\frac{L\\left(\\mu, \\hat{\\hat{\\boldsymbol{\\theta}}}\\right)}{L\\left(\\hat{\\mu}, \\hat{\\boldsymbol{\\theta}}\\right)}\\,.\n\n It serves as the objective function to minimize in :func:`~pyhf.infer.mle.fit`\n and :func:`~pyhf.infer.mle.fixed_poi_fit`.\n\n Example:\n >>> import pyhf\n >>> pyhf.set_backend(\"numpy\")\n >>> model = pyhf.simplemodels.hepdata_like(\n ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n ... )\n >>> observations = [51, 48]\n >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)\n >>> parameters = model.config.suggested_init() # nominal parameters\n >>> twice_nll = pyhf.infer.mle.twice_nll(parameters, data, model)\n >>> twice_nll\n array([30.77525435])\n >>> -2 * model.logpdf(parameters, data) == twice_nll\n array([ True])\n\n Args:\n pars (:obj:`tensor`): The parameters of the HistFactory model\n data (:obj:`tensor`): The data to be considered\n pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json\n\n Returns:\n Tensor: Two times the negative log-likelihood, :math:`-2\\ln L\\left(\\mu, \\boldsymbol{\\theta}\\right)`\n \"\"\"\n return -2 * pdf.logpdf(pars, data)\n\n\ndef _validate_fit_inputs(init_pars, par_bounds, fixed_params):\n for par_idx, (value, bound) in enumerate(zip(init_pars, par_bounds)):\n if not (bound[0] <= value <= bound[1]):\n raise ValueError(\n f\"fit initialization parameter (index: {par_idx}, value: {value}) lies outside of its bounds: {bound}\"\n + \"\\nTo correct this adjust the initialization parameter values in the model spec or those given\"\n + \"\\nas arguments to pyhf.infer.fit. If this value is intended, adjust the range of the parameter\"\n + \"\\nbounds.\"\n )\n\n\ndef fit(data, pdf, init_pars=None, par_bounds=None, fixed_params=None, **kwargs):\n r\"\"\"\n Run a maximum likelihood fit.\n This is done by minimizing the objective function :func:`~pyhf.infer.mle.twice_nll`\n of the model parameters given the observed data.\n This is used to produce the maximal likelihood :math:`L\\left(\\hat{\\mu}, \\hat{\\boldsymbol{\\theta}}\\right)`\n in the profile likelihood ratio in Equation (7) in :xref:`arXiv:1007.1727`\n\n .. math::\n\n \\lambda\\left(\\mu\\right) = \\frac{L\\left(\\mu, \\hat{\\hat{\\boldsymbol{\\theta}}}\\right)}{L\\left(\\hat{\\mu}, \\hat{\\boldsymbol{\\theta}}\\right)}\n\n\n .. 
note::\n\n :func:`twice_nll` is the objective function given to the optimizer and\n is returned evaluated at the best fit model parameters when the optional\n kwarg ``return_fitted_val`` is ``True``.\n\n Example:\n >>> import pyhf\n >>> pyhf.set_backend(\"numpy\")\n >>> model = pyhf.simplemodels.hepdata_like(\n ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n ... )\n >>> observations = [51, 48]\n >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)\n >>> bestfit_pars, twice_nll = pyhf.infer.mle.fit(data, model, return_fitted_val=True)\n >>> bestfit_pars\n array([0. , 1.0030512 , 0.96266961])\n >>> twice_nll\n array(24.98393521)\n >>> -2 * model.logpdf(bestfit_pars, data) == twice_nll\n array([ True])\n\n Args:\n data (:obj:`tensor`): The data\n pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json\n init_pars (:obj:`list`): Values to initialize the model parameters at for the fit\n par_bounds (:obj:`list` of :obj:`list`\\s or :obj:`tuple`\\s): The extrema of values the model parameters are allowed to reach in the fit\n fixed_params (:obj:`list`): Parameters to be held constant in the fit.\n kwargs: Keyword arguments passed through to the optimizer API\n\n Returns:\n See optimizer API\n\n \"\"\"\n _, opt = get_backend()\n init_pars = init_pars or pdf.config.suggested_init()\n par_bounds = par_bounds or pdf.config.suggested_bounds()\n fixed_params = fixed_params or pdf.config.suggested_fixed()\n\n _validate_fit_inputs(init_pars, par_bounds, fixed_params)\n\n # get fixed vals from the model\n fixed_vals = [\n (index, init)\n for index, (init, is_fixed) in enumerate(zip(init_pars, fixed_params))\n if is_fixed\n ]\n\n return opt.minimize(\n twice_nll, data, pdf, init_pars, par_bounds, fixed_vals, **kwargs\n )\n\n\ndef fixed_poi_fit(\n poi_val, data, pdf, init_pars=None, par_bounds=None, fixed_params=None, **kwargs\n):\n r\"\"\"\n Run a maximum likelihood fit with the POI value fixed.\n This is done by minimizing the objective function of :func:`~pyhf.infer.mle.twice_nll`\n of the model parameters given the observed data, for a given fixed value of :math:`\\mu`.\n This is used to produce the constrained maximal likelihood for the given :math:`\\mu`,\n :math:`L\\left(\\mu, \\hat{\\hat{\\boldsymbol{\\theta}}}\\right)`, in the profile\n likelihood ratio in Equation (7) in :xref:`arXiv:1007.1727`\n\n .. math::\n\n \\lambda\\left(\\mu\\right) = \\frac{L\\left(\\mu, \\hat{\\hat{\\boldsymbol{\\theta}}}\\right)}{L\\left(\\hat{\\mu}, \\hat{\\boldsymbol{\\theta}}\\right)}\n\n .. note::\n\n :func:`twice_nll` is the objective function given to the optimizer and\n is returned evaluated at the best fit model parameters when the optional\n kwarg ``return_fitted_val`` is ``True``.\n\n Example:\n >>> import pyhf\n >>> pyhf.set_backend(\"numpy\")\n >>> model = pyhf.simplemodels.hepdata_like(\n ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n ... )\n >>> observations = [51, 48]\n >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)\n >>> test_poi = 1.0\n >>> bestfit_pars, twice_nll = pyhf.infer.mle.fixed_poi_fit(\n ... test_poi, data, model, return_fitted_val=True\n ... )\n >>> bestfit_pars\n array([1. 
, 0.97224597, 0.87553894])\n >>> twice_nll\n array(28.92218013)\n >>> -2 * model.logpdf(bestfit_pars, data) == twice_nll\n array([ True])\n\n Args:\n data: The data\n pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json\n init_pars (:obj:`list`): Values to initialize the model parameters at for the fit\n par_bounds (:obj:`list` of :obj:`list`\\s or :obj:`tuple`\\s): The extrema of values the model parameters are allowed to reach in the fit\n fixed_params (:obj:`list`): Parameters to be held constant in the fit.\n kwargs: Keyword arguments passed through to the optimizer API\n\n Returns:\n See optimizer API\n\n \"\"\"\n if pdf.config.poi_index is None:\n raise UnspecifiedPOI(\n 'No POI is defined. A POI is required to fit with a fixed POI.'\n )\n\n init_pars = [*(init_pars or pdf.config.suggested_init())]\n fixed_params = [*(fixed_params or pdf.config.suggested_fixed())]\n\n init_pars[pdf.config.poi_index] = poi_val\n fixed_params[pdf.config.poi_index] = True\n\n return fit(data, pdf, init_pars, par_bounds, fixed_params, **kwargs)\n", "path": "src/pyhf/infer/mle.py"}]} | 3,419 | 246 |
gh_patches_debug_21180 | rasdani/github-patches | git_diff | Princeton-CDH__geniza-477 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
As a front-end user I want to see the PGP logo on the site
here are links to the temporary logo approved till we finish revising the permanent one:
- [for desktop light mode](https://www.figma.com/file/HpGBOZi9lO8B3nCAx3d4Fj/Princeton-Geniza-(Project-team)?node-id=2300%3A16460)
- [for mobile light mode](https://www.figma.com/file/HpGBOZi9lO8B3nCAx3d4Fj/Princeton-Geniza-(Project-team)?node-id=2300%3A16461)
- [for desktop dark mode](https://www.figma.com/file/HpGBOZi9lO8B3nCAx3d4Fj/Princeton-Geniza-(Project-team)?node-id=2300%3A16466)
- [for mobile dark mode](https://www.figma.com/file/HpGBOZi9lO8B3nCAx3d4Fj/Princeton-Geniza-(Project-team)?node-id=2301%3A16480)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geniza/pages/models.py`
Content:
```
1 from django.db import models
2 from django.http.response import HttpResponseRedirect
3 from wagtail.admin.edit_handlers import FieldPanel, RichTextFieldPanel
4 from wagtail.core.fields import RichTextField
5 from wagtail.core.models import Page
6
7
8 class HomePage(Page):
9 """:class:`wagtail.core.models.Page` model for Geniza home page."""
10
11 # fields
12 description = models.TextField(blank=True)
13 body = RichTextField(
14 features=[
15 "h2",
16 "h3",
17 "bold",
18 "italic",
19 "link",
20 "ol",
21 "ul",
22 "image",
23 "embed",
24 "blockquote",
25 "superscript",
26 "subscript",
27 "strikethrough",
28 ],
29 blank=True,
30 )
31 # can only be child of Root
32 parent_page_types = [Page]
33 subpage_types = ["pages.ContentPage", "pages.ContainerPage"]
34 content_panels = Page.content_panels + [
35 FieldPanel("description"),
36 RichTextFieldPanel("body"),
37 ]
38
39 class Meta:
40 verbose_name = "homepage"
41
42
43 class ContainerPage(Page):
44 """An empty :class:`Page` type that has :class:`ContentPage` instances
45 as its subpages."""
46
47 # can only be child of HomePage
48 parent_page_types = [HomePage]
49 subpage_types = ["pages.ContentPage"]
50
51 # show in menu by default
52 show_in_menus_default = True
53
54 # should not ever actually render
55 def serve(self, request):
56 # redirect to parent page instead
57 if self.get_parent():
58 return HttpResponseRedirect(self.get_parent().get_url(request))
59
60
61 class ContentPage(Page):
62 """A simple :class:`Page` type for content pages."""
63
64 # fields
65 description = models.TextField(blank=True)
66 body = RichTextField(
67 features=[
68 "h2",
69 "h3",
70 "bold",
71 "italic",
72 "link",
73 "ol",
74 "ul",
75 "image",
76 "embed",
77 "blockquote",
78 "superscript",
79 "subscript",
80 "strikethrough",
81 ],
82 blank=True,
83 )
84 # can be child of Home or Container page
85 parent_page_types = [HomePage, ContainerPage]
86 content_panels = Page.content_panels + [
87 FieldPanel("description"),
88 RichTextFieldPanel("body"),
89 ]
90
91 def get_context(self, request):
92 context = super(ContentPage, self).get_context(request)
93 context["page_type"] = "content-page"
94 return context
95
```
Path: `geniza/pages/management/commands/bootstrap_content.py`
Content:
```
1 from django.core.exceptions import ObjectDoesNotExist
2 from django.core.management.base import BaseCommand
3 from django.templatetags.static import static
4 from wagtail.core.models import Page
5 from wagtail.core.models.i18n import Locale
6 from wagtail.core.models.sites import Site
7
8 from geniza.pages.models import ContainerPage, ContentPage, HomePage
9
10
11 class Command(BaseCommand):
12 def add_arguments(self, parser):
13 parser.add_argument(
14 "-H",
15 "--hostname",
16 default="localhost",
17 help="hostname from which the app is served (default: localhost)",
18 )
19 parser.add_argument(
20 "-p",
21 "--port",
22 default="8000",
23 help="port from which the app is served (default: 8000)",
24 )
25 parser.add_argument(
26 "-f",
27 "--fixtures",
28 action="store_true",
29 help="include test fixture content page",
30 )
31
32 def handle(self, *args, **options):
33 """Bootstrap content for Geniza public project site.
34 NOTE: Not idempotent. Will recreate pages if they already exist."""
35
36 include_fixtures = options.get("fixtures")
37 hostname = options.get("hostname")
38 port = options.get("port")
39 (locale, _) = Locale.objects.get_or_create(language_code="en")
40
41 # Bootstrap empty home page, about page
42 home_page = HomePage(
43 title="The Princeton Geniza Project",
44 description="Home page",
45 locale=locale,
46 )
47
48 root = Page.get_first_root_node()
49 root.add_child(instance=home_page)
50
51 container_page = ContainerPage(title="About", slug="about", locale=locale)
52 home_page.add_child(instance=container_page)
53
54 # Bootstrap other empty content pages
55
56 # Pages for main navigation menu
57 root_pages = [
58 ContentPage(
59 title="Contact Us",
60 slug="contact",
61 description="Contact information",
62 locale=locale,
63 ),
64 ]
65 for page in root_pages:
66 page.show_in_menus = True
67 home_page.add_child(instance=page)
68
69 # Pages for About sub-navigation menu
70 container_pages = [
71 ContentPage(
72 title="Credits",
73 slug="credits",
74 description="List of Geniza Project contributors and their roles",
75 locale=locale,
76 ),
77 ContentPage(
78 title="How to Cite",
79 slug="how-to-cite",
80 description="Instructions for citing the Princeton Geniza Project",
81 locale=locale,
82 ),
83 ContentPage(
84 title="Data Exports",
85 slug="data-exports",
86 description="Information about exporting data",
87 locale=locale,
88 ),
89 ContentPage(
90 title="Technical",
91 slug="technical",
92 description="Technical information",
93 locale=locale,
94 ),
95 ContentPage(
96 title="FAQ",
97 slug="faq",
98 description="Frequently asked questions",
99 locale=locale,
100 ),
101 ]
102 for page in container_pages:
103 page.show_in_menus = True
104 container_page.add_child(instance=page)
105
106 if include_fixtures:
107 # Create test page
108 test_content_page = self.generate_test_content_page()
109 home_page.add_child(instance=test_content_page)
110
111 # Create or update site with page tree and other options
112 try:
113 default_site = Site.objects.get(is_default_site=True)
114 default_site.root_page = home_page
115 default_site.port = port
116 default_site.hostname = hostname
117 default_site.site_name = "Geniza"
118 default_site.save()
119 except ObjectDoesNotExist:
120 default_site = Site.objects.create(
121 hostname=hostname,
122 port=port,
123 root_page=home_page,
124 is_default_site=True,
125 site_name="Geniza",
126 )
127
128 def generate_test_content_page(self):
129 # Create test content page from fixture
130 with open(
131 "geniza/pages/fixtures/example_content_page.html", "r"
132 ) as content_fixture:
133 content = content_fixture.read()
134 return ContentPage(
135 title="Page Title",
136 description="Example page",
137 slug="content",
138 body=content.replace( # get static URLs for test images
139 "test-image-fragment.jpg", static("test-image-fragment.jpg")
140 ).replace("test-image-tagnetwork.png", static("test-image-tagnetwork.png")),
141 live=True,
142 )
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geniza/pages/management/commands/bootstrap_content.py b/geniza/pages/management/commands/bootstrap_content.py
--- a/geniza/pages/management/commands/bootstrap_content.py
+++ b/geniza/pages/management/commands/bootstrap_content.py
@@ -136,7 +136,11 @@
description="Example page",
slug="content",
body=content.replace( # get static URLs for test images
- "test-image-fragment.jpg", static("test-image-fragment.jpg")
- ).replace("test-image-tagnetwork.png", static("test-image-tagnetwork.png")),
+ "test-image-fragment.jpg",
+ static("img/fixtures/test-image-fragment.jpg"),
+ ).replace(
+ "test-image-tagnetwork.png",
+ static("img/fixtures/test-image-tagnetwork.png"),
+ ),
live=True,
)
diff --git a/geniza/pages/models.py b/geniza/pages/models.py
--- a/geniza/pages/models.py
+++ b/geniza/pages/models.py
@@ -39,6 +39,11 @@
class Meta:
verbose_name = "homepage"
+ def get_context(self, request):
+ context = super(HomePage, self).get_context(request)
+ context["page_type"] = "homepage"
+ return context
+
class ContainerPage(Page):
"""An empty :class:`Page` type that has :class:`ContentPage` instances
| {"golden_diff": "diff --git a/geniza/pages/management/commands/bootstrap_content.py b/geniza/pages/management/commands/bootstrap_content.py\n--- a/geniza/pages/management/commands/bootstrap_content.py\n+++ b/geniza/pages/management/commands/bootstrap_content.py\n@@ -136,7 +136,11 @@\n description=\"Example page\",\n slug=\"content\",\n body=content.replace( # get static URLs for test images\n- \"test-image-fragment.jpg\", static(\"test-image-fragment.jpg\")\n- ).replace(\"test-image-tagnetwork.png\", static(\"test-image-tagnetwork.png\")),\n+ \"test-image-fragment.jpg\",\n+ static(\"img/fixtures/test-image-fragment.jpg\"),\n+ ).replace(\n+ \"test-image-tagnetwork.png\",\n+ static(\"img/fixtures/test-image-tagnetwork.png\"),\n+ ),\n live=True,\n )\ndiff --git a/geniza/pages/models.py b/geniza/pages/models.py\n--- a/geniza/pages/models.py\n+++ b/geniza/pages/models.py\n@@ -39,6 +39,11 @@\n class Meta:\n verbose_name = \"homepage\"\n \n+ def get_context(self, request):\n+ context = super(HomePage, self).get_context(request)\n+ context[\"page_type\"] = \"homepage\"\n+ return context\n+\n \n class ContainerPage(Page):\n \"\"\"An empty :class:`Page` type that has :class:`ContentPage` instances\n", "issue": "As a front-end user I want to see the PGP logo on the site\nhere are links to the temporary logo approved till we finish revising the permanent one: \n\n- [for desktop light mode](https://www.figma.com/file/HpGBOZi9lO8B3nCAx3d4Fj/Princeton-Geniza-(Project-team)?node-id=2300%3A16460)\n- [for mobile light mode](https://www.figma.com/file/HpGBOZi9lO8B3nCAx3d4Fj/Princeton-Geniza-(Project-team)?node-id=2300%3A16461)\n- [for desktop dark mode](https://www.figma.com/file/HpGBOZi9lO8B3nCAx3d4Fj/Princeton-Geniza-(Project-team)?node-id=2300%3A16466)\n- [for mobile dark mode](https://www.figma.com/file/HpGBOZi9lO8B3nCAx3d4Fj/Princeton-Geniza-(Project-team)?node-id=2301%3A16480)\n\n", "before_files": [{"content": "from django.db import models\nfrom django.http.response import HttpResponseRedirect\nfrom wagtail.admin.edit_handlers import FieldPanel, RichTextFieldPanel\nfrom wagtail.core.fields import RichTextField\nfrom wagtail.core.models import Page\n\n\nclass HomePage(Page):\n \"\"\":class:`wagtail.core.models.Page` model for Geniza home page.\"\"\"\n\n # fields\n description = models.TextField(blank=True)\n body = RichTextField(\n features=[\n \"h2\",\n \"h3\",\n \"bold\",\n \"italic\",\n \"link\",\n \"ol\",\n \"ul\",\n \"image\",\n \"embed\",\n \"blockquote\",\n \"superscript\",\n \"subscript\",\n \"strikethrough\",\n ],\n blank=True,\n )\n # can only be child of Root\n parent_page_types = [Page]\n subpage_types = [\"pages.ContentPage\", \"pages.ContainerPage\"]\n content_panels = Page.content_panels + [\n FieldPanel(\"description\"),\n RichTextFieldPanel(\"body\"),\n ]\n\n class Meta:\n verbose_name = \"homepage\"\n\n\nclass ContainerPage(Page):\n \"\"\"An empty :class:`Page` type that has :class:`ContentPage` instances\n as its subpages.\"\"\"\n\n # can only be child of HomePage\n parent_page_types = [HomePage]\n subpage_types = [\"pages.ContentPage\"]\n\n # show in menu by default\n show_in_menus_default = True\n\n # should not ever actually render\n def serve(self, request):\n # redirect to parent page instead\n if self.get_parent():\n return HttpResponseRedirect(self.get_parent().get_url(request))\n\n\nclass ContentPage(Page):\n \"\"\"A simple :class:`Page` type for content pages.\"\"\"\n\n # fields\n description = models.TextField(blank=True)\n body = RichTextField(\n features=[\n \"h2\",\n 
\"h3\",\n \"bold\",\n \"italic\",\n \"link\",\n \"ol\",\n \"ul\",\n \"image\",\n \"embed\",\n \"blockquote\",\n \"superscript\",\n \"subscript\",\n \"strikethrough\",\n ],\n blank=True,\n )\n # can be child of Home or Container page\n parent_page_types = [HomePage, ContainerPage]\n content_panels = Page.content_panels + [\n FieldPanel(\"description\"),\n RichTextFieldPanel(\"body\"),\n ]\n\n def get_context(self, request):\n context = super(ContentPage, self).get_context(request)\n context[\"page_type\"] = \"content-page\"\n return context\n", "path": "geniza/pages/models.py"}, {"content": "from django.core.exceptions import ObjectDoesNotExist\nfrom django.core.management.base import BaseCommand\nfrom django.templatetags.static import static\nfrom wagtail.core.models import Page\nfrom wagtail.core.models.i18n import Locale\nfrom wagtail.core.models.sites import Site\n\nfrom geniza.pages.models import ContainerPage, ContentPage, HomePage\n\n\nclass Command(BaseCommand):\n def add_arguments(self, parser):\n parser.add_argument(\n \"-H\",\n \"--hostname\",\n default=\"localhost\",\n help=\"hostname from which the app is served (default: localhost)\",\n )\n parser.add_argument(\n \"-p\",\n \"--port\",\n default=\"8000\",\n help=\"port from which the app is served (default: 8000)\",\n )\n parser.add_argument(\n \"-f\",\n \"--fixtures\",\n action=\"store_true\",\n help=\"include test fixture content page\",\n )\n\n def handle(self, *args, **options):\n \"\"\"Bootstrap content for Geniza public project site.\n NOTE: Not idempotent. Will recreate pages if they already exist.\"\"\"\n\n include_fixtures = options.get(\"fixtures\")\n hostname = options.get(\"hostname\")\n port = options.get(\"port\")\n (locale, _) = Locale.objects.get_or_create(language_code=\"en\")\n\n # Bootstrap empty home page, about page\n home_page = HomePage(\n title=\"The Princeton Geniza Project\",\n description=\"Home page\",\n locale=locale,\n )\n\n root = Page.get_first_root_node()\n root.add_child(instance=home_page)\n\n container_page = ContainerPage(title=\"About\", slug=\"about\", locale=locale)\n home_page.add_child(instance=container_page)\n\n # Bootstrap other empty content pages\n\n # Pages for main navigation menu\n root_pages = [\n ContentPage(\n title=\"Contact Us\",\n slug=\"contact\",\n description=\"Contact information\",\n locale=locale,\n ),\n ]\n for page in root_pages:\n page.show_in_menus = True\n home_page.add_child(instance=page)\n\n # Pages for About sub-navigation menu\n container_pages = [\n ContentPage(\n title=\"Credits\",\n slug=\"credits\",\n description=\"List of Geniza Project contributors and their roles\",\n locale=locale,\n ),\n ContentPage(\n title=\"How to Cite\",\n slug=\"how-to-cite\",\n description=\"Instructions for citing the Princeton Geniza Project\",\n locale=locale,\n ),\n ContentPage(\n title=\"Data Exports\",\n slug=\"data-exports\",\n description=\"Information about exporting data\",\n locale=locale,\n ),\n ContentPage(\n title=\"Technical\",\n slug=\"technical\",\n description=\"Technical information\",\n locale=locale,\n ),\n ContentPage(\n title=\"FAQ\",\n slug=\"faq\",\n description=\"Frequently asked questions\",\n locale=locale,\n ),\n ]\n for page in container_pages:\n page.show_in_menus = True\n container_page.add_child(instance=page)\n\n if include_fixtures:\n # Create test page\n test_content_page = self.generate_test_content_page()\n home_page.add_child(instance=test_content_page)\n\n # Create or update site with page tree and other options\n try:\n default_site = 
Site.objects.get(is_default_site=True)\n default_site.root_page = home_page\n default_site.port = port\n default_site.hostname = hostname\n default_site.site_name = \"Geniza\"\n default_site.save()\n except ObjectDoesNotExist:\n default_site = Site.objects.create(\n hostname=hostname,\n port=port,\n root_page=home_page,\n is_default_site=True,\n site_name=\"Geniza\",\n )\n\n def generate_test_content_page(self):\n # Create test content page from fixture\n with open(\n \"geniza/pages/fixtures/example_content_page.html\", \"r\"\n ) as content_fixture:\n content = content_fixture.read()\n return ContentPage(\n title=\"Page Title\",\n description=\"Example page\",\n slug=\"content\",\n body=content.replace( # get static URLs for test images\n \"test-image-fragment.jpg\", static(\"test-image-fragment.jpg\")\n ).replace(\"test-image-tagnetwork.png\", static(\"test-image-tagnetwork.png\")),\n live=True,\n )\n", "path": "geniza/pages/management/commands/bootstrap_content.py"}], "after_files": [{"content": "from django.db import models\nfrom django.http.response import HttpResponseRedirect\nfrom wagtail.admin.edit_handlers import FieldPanel, RichTextFieldPanel\nfrom wagtail.core.fields import RichTextField\nfrom wagtail.core.models import Page\n\n\nclass HomePage(Page):\n \"\"\":class:`wagtail.core.models.Page` model for Geniza home page.\"\"\"\n\n # fields\n description = models.TextField(blank=True)\n body = RichTextField(\n features=[\n \"h2\",\n \"h3\",\n \"bold\",\n \"italic\",\n \"link\",\n \"ol\",\n \"ul\",\n \"image\",\n \"embed\",\n \"blockquote\",\n \"superscript\",\n \"subscript\",\n \"strikethrough\",\n ],\n blank=True,\n )\n # can only be child of Root\n parent_page_types = [Page]\n subpage_types = [\"pages.ContentPage\", \"pages.ContainerPage\"]\n content_panels = Page.content_panels + [\n FieldPanel(\"description\"),\n RichTextFieldPanel(\"body\"),\n ]\n\n class Meta:\n verbose_name = \"homepage\"\n\n def get_context(self, request):\n context = super(HomePage, self).get_context(request)\n context[\"page_type\"] = \"homepage\"\n return context\n\n\nclass ContainerPage(Page):\n \"\"\"An empty :class:`Page` type that has :class:`ContentPage` instances\n as its subpages.\"\"\"\n\n # can only be child of HomePage\n parent_page_types = [HomePage]\n subpage_types = [\"pages.ContentPage\"]\n\n # show in menu by default\n show_in_menus_default = True\n\n # should not ever actually render\n def serve(self, request):\n # redirect to parent page instead\n if self.get_parent():\n return HttpResponseRedirect(self.get_parent().get_url(request))\n\n\nclass ContentPage(Page):\n \"\"\"A simple :class:`Page` type for content pages.\"\"\"\n\n # fields\n description = models.TextField(blank=True)\n body = RichTextField(\n features=[\n \"h2\",\n \"h3\",\n \"bold\",\n \"italic\",\n \"link\",\n \"ol\",\n \"ul\",\n \"image\",\n \"embed\",\n \"blockquote\",\n \"superscript\",\n \"subscript\",\n \"strikethrough\",\n ],\n blank=True,\n )\n # can be child of Home or Container page\n parent_page_types = [HomePage, ContainerPage]\n content_panels = Page.content_panels + [\n FieldPanel(\"description\"),\n RichTextFieldPanel(\"body\"),\n ]\n\n def get_context(self, request):\n context = super(ContentPage, self).get_context(request)\n context[\"page_type\"] = \"content-page\"\n return context\n", "path": "geniza/pages/models.py"}, {"content": "from django.core.exceptions import ObjectDoesNotExist\nfrom django.core.management.base import BaseCommand\nfrom django.templatetags.static import static\nfrom 
wagtail.core.models import Page\nfrom wagtail.core.models.i18n import Locale\nfrom wagtail.core.models.sites import Site\n\nfrom geniza.pages.models import ContainerPage, ContentPage, HomePage\n\n\nclass Command(BaseCommand):\n def add_arguments(self, parser):\n parser.add_argument(\n \"-H\",\n \"--hostname\",\n default=\"localhost\",\n help=\"hostname from which the app is served (default: localhost)\",\n )\n parser.add_argument(\n \"-p\",\n \"--port\",\n default=\"8000\",\n help=\"port from which the app is served (default: 8000)\",\n )\n parser.add_argument(\n \"-f\",\n \"--fixtures\",\n action=\"store_true\",\n help=\"include test fixture content page\",\n )\n\n def handle(self, *args, **options):\n \"\"\"Bootstrap content for Geniza public project site.\n NOTE: Not idempotent. Will recreate pages if they already exist.\"\"\"\n\n include_fixtures = options.get(\"fixtures\")\n hostname = options.get(\"hostname\")\n port = options.get(\"port\")\n (locale, _) = Locale.objects.get_or_create(language_code=\"en\")\n\n # Bootstrap empty home page, about page\n home_page = HomePage(\n title=\"The Princeton Geniza Project\",\n description=\"Home page\",\n locale=locale,\n )\n\n root = Page.get_first_root_node()\n root.add_child(instance=home_page)\n\n container_page = ContainerPage(title=\"About\", slug=\"about\", locale=locale)\n home_page.add_child(instance=container_page)\n\n # Bootstrap other empty content pages\n\n # Pages for main navigation menu\n root_pages = [\n ContentPage(\n title=\"Contact Us\",\n slug=\"contact\",\n description=\"Contact information\",\n locale=locale,\n ),\n ]\n for page in root_pages:\n page.show_in_menus = True\n home_page.add_child(instance=page)\n\n # Pages for About sub-navigation menu\n container_pages = [\n ContentPage(\n title=\"Credits\",\n slug=\"credits\",\n description=\"List of Geniza Project contributors and their roles\",\n locale=locale,\n ),\n ContentPage(\n title=\"How to Cite\",\n slug=\"how-to-cite\",\n description=\"Instructions for citing the Princeton Geniza Project\",\n locale=locale,\n ),\n ContentPage(\n title=\"Data Exports\",\n slug=\"data-exports\",\n description=\"Information about exporting data\",\n locale=locale,\n ),\n ContentPage(\n title=\"Technical\",\n slug=\"technical\",\n description=\"Technical information\",\n locale=locale,\n ),\n ContentPage(\n title=\"FAQ\",\n slug=\"faq\",\n description=\"Frequently asked questions\",\n locale=locale,\n ),\n ]\n for page in container_pages:\n page.show_in_menus = True\n container_page.add_child(instance=page)\n\n if include_fixtures:\n # Create test page\n test_content_page = self.generate_test_content_page()\n home_page.add_child(instance=test_content_page)\n\n # Create or update site with page tree and other options\n try:\n default_site = Site.objects.get(is_default_site=True)\n default_site.root_page = home_page\n default_site.port = port\n default_site.hostname = hostname\n default_site.site_name = \"Geniza\"\n default_site.save()\n except ObjectDoesNotExist:\n default_site = Site.objects.create(\n hostname=hostname,\n port=port,\n root_page=home_page,\n is_default_site=True,\n site_name=\"Geniza\",\n )\n\n def generate_test_content_page(self):\n # Create test content page from fixture\n with open(\n \"geniza/pages/fixtures/example_content_page.html\", \"r\"\n ) as content_fixture:\n content = content_fixture.read()\n return ContentPage(\n title=\"Page Title\",\n description=\"Example page\",\n slug=\"content\",\n body=content.replace( # get static URLs for test images\n 
\"test-image-fragment.jpg\",\n static(\"img/fixtures/test-image-fragment.jpg\"),\n ).replace(\n \"test-image-tagnetwork.png\",\n static(\"img/fixtures/test-image-tagnetwork.png\"),\n ),\n live=True,\n )\n", "path": "geniza/pages/management/commands/bootstrap_content.py"}]} | 2,521 | 312 |
gh_patches_debug_39856 | rasdani/github-patches | git_diff | ckan__ckan-4102 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
recaptcha v1 will stop working 2018-3-31
### CKAN Version if known (or site URL)
All since 2.0ish
### The problem
Users will not be able to register, due to reCAPTCHA v1 being switched off by Google on 2018-03-31.
Google's deprecation info: https://developers.google.com/recaptcha/docs/versions#v1
This affects those sites that:
* have setup recaptcha (i.e. registered with Google and [added their keys to the CKAN config](http://docs.ckan.org/en/latest/maintaining/configuration.html?highlight=captcha#ckan-recaptcha-publickey)). This is not part of the default CKAN install. Most installs only use the 'user' functionality for admins, so the impact should be limited.
* AND they use v1 of recaptcha. This is the default in the CKAN config, but Google deprecated v1 May 2016 and prevented new sites using it (I imagine it was the same time), so the issue only affects sites set-up before then.
### How to check if you have this problem
Show the relevant bits of your CKAN config:
```
grep recaptcha /etc/ckan/default/production.ini
```
IF `ckan.recaptcha.publickey` has a value (i.e. not blank or unset)
AND (`ckan.recaptcha.version = 1` OR `ckan.recaptcha.version` is blank or unset)
THEN you need to upgrade to recaptcha 2 before 2018-03-31.
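For orientation, the v2 flow replaces the old challenge/response fields with a single `g-recaptcha-response` form field that is verified against Google's `siteverify` endpoint. A minimal sketch of that server-side call (the helper name is illustrative; the endpoint and parameters mirror the CKAN code further below):

```python
import json
import urllib
import urllib2

def verify_recaptcha_v2(private_key, client_ip, response_field):
    # Server-side reCAPTCHA v2 check against Google's siteverify endpoint.
    params = urllib.urlencode(dict(secret=private_key,
                                   remoteip=client_ip,
                                   response=response_field.encode('utf8')))
    f = urllib2.urlopen('https://www.google.com/recaptcha/api/siteverify', params)
    data = json.load(f)
    f.close()
    # The endpoint returns a JSON object with a boolean 'success' field.
    return bool(data.get('success'))
```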
### Action
I think we should change the default to v2 and warn the community.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckan/lib/captcha.py`
Content:
```
1 # encoding: utf-8
2
3 from ckan.common import config
4
5 import urllib
6 import urllib2
7 import json
8
9 def check_recaptcha(request):
10 '''Check a user\'s recaptcha submission is valid, and raise CaptchaError
11 on failure.'''
12 recaptcha_private_key = config.get('ckan.recaptcha.privatekey', '')
13 if not recaptcha_private_key:
14 # Recaptcha not enabled
15 return
16
17 client_ip_address = request.environ.get('REMOTE_ADDR', 'Unknown IP Address')
18
19 recaptcha_version = config.get('ckan.recaptcha.version', '1')
20 if recaptcha_version is '1':
21 recaptcha_response_field = request.params.get('recaptcha_response_field', '')
22 recaptcha_server_name = 'http://api-verify.recaptcha.net/verify'
23 recaptcha_challenge_field = request.params.get('recaptcha_challenge_field')
24
25 # recaptcha_response_field will be unicode if there are foreign chars in
26 # the user input. So we need to encode it as utf8 before urlencoding or
27 # we get an exception (#1431).
28 params = urllib.urlencode(dict(privatekey=recaptcha_private_key,
29 remoteip=client_ip_address,
30 challenge=recaptcha_challenge_field,
31 response=recaptcha_response_field.encode('utf8')))
32 f = urllib2.urlopen(recaptcha_server_name, params)
33 data = f.read()
34 f.close()
35
36 if not data.lower().startswith('true'):
37 raise CaptchaError()
38 elif recaptcha_version is '2':
39 recaptcha_response_field = request.params.get('g-recaptcha-response', '')
40 recaptcha_server_name = 'https://www.google.com/recaptcha/api/siteverify'
41
42 # recaptcha_response_field will be unicode if there are foreign chars in
43 # the user input. So we need to encode it as utf8 before urlencoding or
44 # we get an exception (#1431).
45 params = urllib.urlencode(dict(secret=recaptcha_private_key,
46 remoteip=client_ip_address,
47 response=recaptcha_response_field.encode('utf8')))
48 f = urllib2.urlopen(recaptcha_server_name, params)
49 data = json.load(f)
50 f.close()
51
52 try:
53 if not data['success']:
54 raise CaptchaError()
55 except IndexError:
56 # Something weird with recaptcha response
57 raise CaptchaError()
58
59 class CaptchaError(ValueError):
60 pass
```
Path: `ckan/lib/app_globals.py`
Content:
```
1 # encoding: utf-8
2
3 ''' The application's Globals object '''
4
5 import logging
6 import time
7 from threading import Lock
8 import re
9
10 from paste.deploy.converters import asbool
11 from ckan.common import config
12
13 import ckan
14 import ckan.model as model
15 import ckan.logic as logic
16 from logic.schema import update_configuration_schema
17
18
19 log = logging.getLogger(__name__)
20
21
22 # mappings translate between config settings and globals because our naming
23 # conventions are not well defined and/or implemented
24 mappings = {
25 # 'config_key': 'globals_key',
26 }
27
28
29 # This mapping is only used to define the configuration options (from the
30 # `config` object) that should be copied to the `app_globals` (`g`) object.
31 app_globals_from_config_details = {
32 'ckan.site_title': {},
33 'ckan.site_logo': {},
34 'ckan.site_url': {},
35 'ckan.site_description': {},
36 'ckan.site_about': {},
37 'ckan.site_intro_text': {},
38 'ckan.site_custom_css': {},
39 'ckan.favicon': {}, # default gets set in config.environment.py
40 'ckan.template_head_end': {},
41 'ckan.template_footer_end': {},
42 # has been setup in load_environment():
43 'ckan.site_id': {},
44 'ckan.recaptcha.publickey': {'name': 'recaptcha_publickey'},
45 'ckan.recaptcha.version': {'name': 'recaptcha_version', 'default': '1'},
46 'ckan.template_title_deliminater': {'default': '-'},
47 'ckan.template_head_end': {},
48 'ckan.template_footer_end': {},
49 'ckan.dumps_url': {},
50 'ckan.dumps_format': {},
51 'ofs.impl': {'name': 'ofs_impl'},
52 'ckan.homepage_style': {'default': '1'},
53
54 # split string
55 'search.facets': {'default': 'organization groups tags res_format license_id',
56 'type': 'split',
57 'name': 'facets'},
58 'package_hide_extras': {'type': 'split'},
59 'ckan.plugins': {'type': 'split'},
60
61 # bool
62 'debug': {'default': 'false', 'type' : 'bool'},
63 'ckan.debug_supress_header' : {'default': 'false', 'type' : 'bool'},
64 'ckan.legacy_templates' : {'default': 'false', 'type' : 'bool'},
65 'ckan.tracking_enabled' : {'default': 'false', 'type' : 'bool'},
66
67 # int
68 'ckan.datasets_per_page': {'default': '20', 'type': 'int'},
69 'ckan.activity_list_limit': {'default': '30', 'type': 'int'},
70 'ckan.user_list_limit': {'default': '20', 'type': 'int'},
71 'search.facets.default': {'default': '10', 'type': 'int',
72 'name': 'facets_default_number'},
73 }
74
75
76 # A place to store the origional config options of we override them
77 _CONFIG_CACHE = {}
78
79 def set_main_css(css_file):
80 ''' Sets the main_css. The css_file must be of the form file.css '''
81 assert css_file.endswith('.css')
82 new_css = css_file
83 # FIXME we should check the css file exists
84 app_globals.main_css = str(new_css)
85
86
87 def set_app_global(key, value):
88 '''
89 Set a new key on the app_globals (g) object
90
91 It will process the value according to the options on
92 app_globals_from_config_details (if any)
93 '''
94 key, value = process_app_global(key, value)
95 setattr(app_globals, key, value)
96
97
98 def process_app_global(key, value):
99 '''
100 Tweak a key, value pair meant to be set on the app_globals (g) object
101
102 According to the options on app_globals_from_config_details (if any)
103 '''
104 options = app_globals_from_config_details.get(key)
105 key = get_globals_key(key)
106 if options:
107 if 'name' in options:
108 key = options['name']
109 value = value or options.get('default', '')
110
111 data_type = options.get('type')
112 if data_type == 'bool':
113 value = asbool(value)
114 elif data_type == 'int':
115 value = int(value)
116 elif data_type == 'split':
117 value = value.split()
118
119 return key, value
120
121
122 def get_globals_key(key):
123 # create our globals key
124 # these can be specified in mappings or else we remove
125 # the `ckan.` part this is to keep the existing namings
126 # set the value
127 if key in mappings:
128 return mappings[key]
129 elif key.startswith('ckan.'):
130 return key[5:]
131 else:
132 return key
133
134
135 def reset():
136 ''' set updatable values from config '''
137 def get_config_value(key, default=''):
138 if model.meta.engine.has_table('system_info'):
139 value = model.get_system_info(key)
140 else:
141 value = None
142 config_value = config.get(key)
143 # sort encodeings if needed
144 if isinstance(config_value, str):
145 try:
146 config_value = config_value.decode('utf-8')
147 except UnicodeDecodeError:
148 config_value = config_value.decode('latin-1')
149 # we want to store the config the first time we get here so we can
150 # reset them if needed
151 if key not in _CONFIG_CACHE:
152 _CONFIG_CACHE[key] = config_value
153 if value is not None:
154 log.debug('config `%s` set to `%s` from db' % (key, value))
155 else:
156 value = _CONFIG_CACHE[key]
157 if value:
158 log.debug('config `%s` set to `%s` from config' % (key, value))
159 else:
160 value = default
161
162 set_app_global(key, value)
163
164 # update the config
165 config[key] = value
166
167 return value
168
169 # update the config settings in auto update
170 schema = update_configuration_schema()
171 for key in schema.keys():
172 get_config_value(key)
173
174 # custom styling
175 main_css = get_config_value('ckan.main_css', '/base/css/main.css')
176 set_main_css(main_css)
177
178 if app_globals.site_logo:
179 app_globals.header_class = 'header-image'
180 elif not app_globals.site_description:
181 app_globals.header_class = 'header-text-logo'
182 else:
183 app_globals.header_class = 'header-text-logo-tagline'
184
185
186 class _Globals(object):
187
188 ''' Globals acts as a container for objects available throughout the
189 life of the application. '''
190
191 def __init__(self):
192 '''One instance of Globals is created during application
193 initialization and is available during requests via the
194 'app_globals' variable
195 '''
196 self._init()
197 self._config_update = None
198 self._mutex = Lock()
199
200 def _check_uptodate(self):
201 ''' check the config is uptodate needed when several instances are
202 running '''
203 value = model.get_system_info('ckan.config_update')
204 if self._config_update != value:
205 if self._mutex.acquire(False):
206 reset()
207 self._config_update = value
208 self._mutex.release()
209
210 def _init(self):
211
212 self.ckan_version = ckan.__version__
213 self.ckan_base_version = re.sub('[^0-9\.]', '', self.ckan_version)
214 if self.ckan_base_version == self.ckan_version:
215 self.ckan_doc_version = 'ckan-{0}'.format(self.ckan_version)
216 else:
217 self.ckan_doc_version = 'latest'
218
219 # process the config details to set globals
220 for key in app_globals_from_config_details.keys():
221 new_key, value = process_app_global(key, config.get(key) or '')
222 setattr(self, new_key, value)
223
224
225 app_globals = _Globals()
226 del _Globals
227
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckan/lib/app_globals.py b/ckan/lib/app_globals.py
--- a/ckan/lib/app_globals.py
+++ b/ckan/lib/app_globals.py
@@ -42,7 +42,6 @@
# has been setup in load_environment():
'ckan.site_id': {},
'ckan.recaptcha.publickey': {'name': 'recaptcha_publickey'},
- 'ckan.recaptcha.version': {'name': 'recaptcha_version', 'default': '1'},
'ckan.template_title_deliminater': {'default': '-'},
'ckan.template_head_end': {},
'ckan.template_footer_end': {},
diff --git a/ckan/lib/captcha.py b/ckan/lib/captcha.py
--- a/ckan/lib/captcha.py
+++ b/ckan/lib/captcha.py
@@ -13,48 +13,29 @@
if not recaptcha_private_key:
# Recaptcha not enabled
return
-
+
client_ip_address = request.environ.get('REMOTE_ADDR', 'Unknown IP Address')
-
- recaptcha_version = config.get('ckan.recaptcha.version', '1')
- if recaptcha_version is '1':
- recaptcha_response_field = request.params.get('recaptcha_response_field', '')
- recaptcha_server_name = 'http://api-verify.recaptcha.net/verify'
- recaptcha_challenge_field = request.params.get('recaptcha_challenge_field')
-
- # recaptcha_response_field will be unicode if there are foreign chars in
- # the user input. So we need to encode it as utf8 before urlencoding or
- # we get an exception (#1431).
- params = urllib.urlencode(dict(privatekey=recaptcha_private_key,
- remoteip=client_ip_address,
- challenge=recaptcha_challenge_field,
- response=recaptcha_response_field.encode('utf8')))
- f = urllib2.urlopen(recaptcha_server_name, params)
- data = f.read()
- f.close()
-
- if not data.lower().startswith('true'):
- raise CaptchaError()
- elif recaptcha_version is '2':
- recaptcha_response_field = request.params.get('g-recaptcha-response', '')
- recaptcha_server_name = 'https://www.google.com/recaptcha/api/siteverify'
-
- # recaptcha_response_field will be unicode if there are foreign chars in
- # the user input. So we need to encode it as utf8 before urlencoding or
- # we get an exception (#1431).
- params = urllib.urlencode(dict(secret=recaptcha_private_key,
- remoteip=client_ip_address,
- response=recaptcha_response_field.encode('utf8')))
- f = urllib2.urlopen(recaptcha_server_name, params)
- data = json.load(f)
- f.close()
-
- try:
- if not data['success']:
- raise CaptchaError()
- except IndexError:
- # Something weird with recaptcha response
+
+ # reCAPTCHA v2
+ recaptcha_response_field = request.params.get('g-recaptcha-response', '')
+ recaptcha_server_name = 'https://www.google.com/recaptcha/api/siteverify'
+
+ # recaptcha_response_field will be unicode if there are foreign chars in
+ # the user input. So we need to encode it as utf8 before urlencoding or
+ # we get an exception (#1431).
+ params = urllib.urlencode(dict(secret=recaptcha_private_key,
+ remoteip=client_ip_address,
+ response=recaptcha_response_field.encode('utf8')))
+ f = urllib2.urlopen(recaptcha_server_name, params)
+ data = json.load(f)
+ f.close()
+
+ try:
+ if not data['success']:
raise CaptchaError()
+ except IndexError:
+ # Something weird with recaptcha response
+ raise CaptchaError()
class CaptchaError(ValueError):
- pass
\ No newline at end of file
+ pass
| {"golden_diff": "diff --git a/ckan/lib/app_globals.py b/ckan/lib/app_globals.py\n--- a/ckan/lib/app_globals.py\n+++ b/ckan/lib/app_globals.py\n@@ -42,7 +42,6 @@\n # has been setup in load_environment():\n 'ckan.site_id': {},\n 'ckan.recaptcha.publickey': {'name': 'recaptcha_publickey'},\n- 'ckan.recaptcha.version': {'name': 'recaptcha_version', 'default': '1'},\n 'ckan.template_title_deliminater': {'default': '-'},\n 'ckan.template_head_end': {},\n 'ckan.template_footer_end': {},\ndiff --git a/ckan/lib/captcha.py b/ckan/lib/captcha.py\n--- a/ckan/lib/captcha.py\n+++ b/ckan/lib/captcha.py\n@@ -13,48 +13,29 @@\n if not recaptcha_private_key:\n # Recaptcha not enabled\n return\n- \n+\n client_ip_address = request.environ.get('REMOTE_ADDR', 'Unknown IP Address')\n- \n- recaptcha_version = config.get('ckan.recaptcha.version', '1')\n- if recaptcha_version is '1':\n- recaptcha_response_field = request.params.get('recaptcha_response_field', '')\n- recaptcha_server_name = 'http://api-verify.recaptcha.net/verify'\n- recaptcha_challenge_field = request.params.get('recaptcha_challenge_field')\n-\n- # recaptcha_response_field will be unicode if there are foreign chars in\n- # the user input. So we need to encode it as utf8 before urlencoding or\n- # we get an exception (#1431).\n- params = urllib.urlencode(dict(privatekey=recaptcha_private_key,\n- remoteip=client_ip_address,\n- challenge=recaptcha_challenge_field,\n- response=recaptcha_response_field.encode('utf8')))\n- f = urllib2.urlopen(recaptcha_server_name, params)\n- data = f.read()\n- f.close()\n- \n- if not data.lower().startswith('true'):\n- raise CaptchaError()\n- elif recaptcha_version is '2':\n- recaptcha_response_field = request.params.get('g-recaptcha-response', '')\n- recaptcha_server_name = 'https://www.google.com/recaptcha/api/siteverify'\n-\n- # recaptcha_response_field will be unicode if there are foreign chars in\n- # the user input. So we need to encode it as utf8 before urlencoding or\n- # we get an exception (#1431).\n- params = urllib.urlencode(dict(secret=recaptcha_private_key,\n- remoteip=client_ip_address,\n- response=recaptcha_response_field.encode('utf8')))\n- f = urllib2.urlopen(recaptcha_server_name, params)\n- data = json.load(f) \n- f.close()\n- \n- try:\n- if not data['success']:\n- raise CaptchaError()\n- except IndexError:\n- # Something weird with recaptcha response\n+\n+ # reCAPTCHA v2\n+ recaptcha_response_field = request.params.get('g-recaptcha-response', '')\n+ recaptcha_server_name = 'https://www.google.com/recaptcha/api/siteverify'\n+\n+ # recaptcha_response_field will be unicode if there are foreign chars in\n+ # the user input. 
So we need to encode it as utf8 before urlencoding or\n+ # we get an exception (#1431).\n+ params = urllib.urlencode(dict(secret=recaptcha_private_key,\n+ remoteip=client_ip_address,\n+ response=recaptcha_response_field.encode('utf8')))\n+ f = urllib2.urlopen(recaptcha_server_name, params)\n+ data = json.load(f)\n+ f.close()\n+\n+ try:\n+ if not data['success']:\n raise CaptchaError()\n+ except IndexError:\n+ # Something weird with recaptcha response\n+ raise CaptchaError()\n \n class CaptchaError(ValueError):\n- pass\n\\ No newline at end of file\n+ pass\n", "issue": "recaptcha v1 will stop working 2018-3-31\n### CKAN Version if known (or site URL)\r\nAll since 2.0ish\r\n\r\n### The problem\r\nUsers will not be able to register, due to the re-captcha being switched off by Google on 2018-3-31.\r\n\r\nGoogle's deprecation info: https://developers.google.com/recaptcha/docs/versions#v1\r\n\r\nThis affects those sites that:\r\n* have setup recaptcha (i.e. registered with Google and [added their keys to the CKAN config](http://docs.ckan.org/en/latest/maintaining/configuration.html?highlight=captcha#ckan-recaptcha-publickey)). This is not part of the default CKAN install. Most installs only use the 'user' functionality for admins, so the impact should be limited.\r\n* AND they use v1 of recaptcha. This is the default in the CKAN config, but Google deprecated v1 May 2016 and prevented new sites using it (I imagine it was the same time), so the issue only affects sites set-up before then.\r\n\r\n### How to check if you have this problem\r\nShow the relevant bits of your CKAN config:\r\n```\r\ngrep recaptcha /etc/ckan/default/production.ini\r\n```\r\nIF `ckan.recaptcha.publickey` has a value (i.e. not blank or unset)\r\nAND (`ckan.recaptcha.version = 1` OR `ckan.recaptcha.version` is blank or unset)\r\nTHEN you need to upgrade to recaptcha 2 before 2018-03-31.\r\n\r\n### Action\r\nI think we should change the default to v2 and warn the community.\n", "before_files": [{"content": "# encoding: utf-8\n\nfrom ckan.common import config\n\nimport urllib\nimport urllib2\nimport json\n\ndef check_recaptcha(request):\n '''Check a user\\'s recaptcha submission is valid, and raise CaptchaError\n on failure.'''\n recaptcha_private_key = config.get('ckan.recaptcha.privatekey', '')\n if not recaptcha_private_key:\n # Recaptcha not enabled\n return\n \n client_ip_address = request.environ.get('REMOTE_ADDR', 'Unknown IP Address')\n \n recaptcha_version = config.get('ckan.recaptcha.version', '1')\n if recaptcha_version is '1':\n recaptcha_response_field = request.params.get('recaptcha_response_field', '')\n recaptcha_server_name = 'http://api-verify.recaptcha.net/verify'\n recaptcha_challenge_field = request.params.get('recaptcha_challenge_field')\n\n # recaptcha_response_field will be unicode if there are foreign chars in\n # the user input. 
So we need to encode it as utf8 before urlencoding or\n # we get an exception (#1431).\n params = urllib.urlencode(dict(privatekey=recaptcha_private_key,\n remoteip=client_ip_address,\n challenge=recaptcha_challenge_field,\n response=recaptcha_response_field.encode('utf8')))\n f = urllib2.urlopen(recaptcha_server_name, params)\n data = f.read()\n f.close()\n \n if not data.lower().startswith('true'):\n raise CaptchaError()\n elif recaptcha_version is '2':\n recaptcha_response_field = request.params.get('g-recaptcha-response', '')\n recaptcha_server_name = 'https://www.google.com/recaptcha/api/siteverify'\n\n # recaptcha_response_field will be unicode if there are foreign chars in\n # the user input. So we need to encode it as utf8 before urlencoding or\n # we get an exception (#1431).\n params = urllib.urlencode(dict(secret=recaptcha_private_key,\n remoteip=client_ip_address,\n response=recaptcha_response_field.encode('utf8')))\n f = urllib2.urlopen(recaptcha_server_name, params)\n data = json.load(f) \n f.close()\n \n try:\n if not data['success']:\n raise CaptchaError()\n except IndexError:\n # Something weird with recaptcha response\n raise CaptchaError()\n\nclass CaptchaError(ValueError):\n pass", "path": "ckan/lib/captcha.py"}, {"content": "# encoding: utf-8\n\n''' The application's Globals object '''\n\nimport logging\nimport time\nfrom threading import Lock\nimport re\n\nfrom paste.deploy.converters import asbool\nfrom ckan.common import config\n\nimport ckan\nimport ckan.model as model\nimport ckan.logic as logic\nfrom logic.schema import update_configuration_schema\n\n\nlog = logging.getLogger(__name__)\n\n\n# mappings translate between config settings and globals because our naming\n# conventions are not well defined and/or implemented\nmappings = {\n# 'config_key': 'globals_key',\n}\n\n\n# This mapping is only used to define the configuration options (from the\n# `config` object) that should be copied to the `app_globals` (`g`) object.\napp_globals_from_config_details = {\n 'ckan.site_title': {},\n 'ckan.site_logo': {},\n 'ckan.site_url': {},\n 'ckan.site_description': {},\n 'ckan.site_about': {},\n 'ckan.site_intro_text': {},\n 'ckan.site_custom_css': {},\n 'ckan.favicon': {}, # default gets set in config.environment.py\n 'ckan.template_head_end': {},\n 'ckan.template_footer_end': {},\n # has been setup in load_environment():\n 'ckan.site_id': {},\n 'ckan.recaptcha.publickey': {'name': 'recaptcha_publickey'},\n 'ckan.recaptcha.version': {'name': 'recaptcha_version', 'default': '1'},\n 'ckan.template_title_deliminater': {'default': '-'},\n 'ckan.template_head_end': {},\n 'ckan.template_footer_end': {},\n 'ckan.dumps_url': {},\n 'ckan.dumps_format': {},\n 'ofs.impl': {'name': 'ofs_impl'},\n 'ckan.homepage_style': {'default': '1'},\n\n # split string\n 'search.facets': {'default': 'organization groups tags res_format license_id',\n 'type': 'split',\n 'name': 'facets'},\n 'package_hide_extras': {'type': 'split'},\n 'ckan.plugins': {'type': 'split'},\n\n # bool\n 'debug': {'default': 'false', 'type' : 'bool'},\n 'ckan.debug_supress_header' : {'default': 'false', 'type' : 'bool'},\n 'ckan.legacy_templates' : {'default': 'false', 'type' : 'bool'},\n 'ckan.tracking_enabled' : {'default': 'false', 'type' : 'bool'},\n\n # int\n 'ckan.datasets_per_page': {'default': '20', 'type': 'int'},\n 'ckan.activity_list_limit': {'default': '30', 'type': 'int'},\n 'ckan.user_list_limit': {'default': '20', 'type': 'int'},\n 'search.facets.default': {'default': '10', 'type': 'int',\n 'name': 
'facets_default_number'},\n}\n\n\n# A place to store the origional config options of we override them\n_CONFIG_CACHE = {}\n\ndef set_main_css(css_file):\n ''' Sets the main_css. The css_file must be of the form file.css '''\n assert css_file.endswith('.css')\n new_css = css_file\n # FIXME we should check the css file exists\n app_globals.main_css = str(new_css)\n\n\ndef set_app_global(key, value):\n '''\n Set a new key on the app_globals (g) object\n\n It will process the value according to the options on\n app_globals_from_config_details (if any)\n '''\n key, value = process_app_global(key, value)\n setattr(app_globals, key, value)\n\n\ndef process_app_global(key, value):\n '''\n Tweak a key, value pair meant to be set on the app_globals (g) object\n\n According to the options on app_globals_from_config_details (if any)\n '''\n options = app_globals_from_config_details.get(key)\n key = get_globals_key(key)\n if options:\n if 'name' in options:\n key = options['name']\n value = value or options.get('default', '')\n\n data_type = options.get('type')\n if data_type == 'bool':\n value = asbool(value)\n elif data_type == 'int':\n value = int(value)\n elif data_type == 'split':\n value = value.split()\n\n return key, value\n\n\ndef get_globals_key(key):\n # create our globals key\n # these can be specified in mappings or else we remove\n # the `ckan.` part this is to keep the existing namings\n # set the value\n if key in mappings:\n return mappings[key]\n elif key.startswith('ckan.'):\n return key[5:]\n else:\n return key\n\n\ndef reset():\n ''' set updatable values from config '''\n def get_config_value(key, default=''):\n if model.meta.engine.has_table('system_info'):\n value = model.get_system_info(key)\n else:\n value = None\n config_value = config.get(key)\n # sort encodeings if needed\n if isinstance(config_value, str):\n try:\n config_value = config_value.decode('utf-8')\n except UnicodeDecodeError:\n config_value = config_value.decode('latin-1')\n # we want to store the config the first time we get here so we can\n # reset them if needed\n if key not in _CONFIG_CACHE:\n _CONFIG_CACHE[key] = config_value\n if value is not None:\n log.debug('config `%s` set to `%s` from db' % (key, value))\n else:\n value = _CONFIG_CACHE[key]\n if value:\n log.debug('config `%s` set to `%s` from config' % (key, value))\n else:\n value = default\n\n set_app_global(key, value)\n\n # update the config\n config[key] = value\n\n return value\n\n # update the config settings in auto update\n schema = update_configuration_schema()\n for key in schema.keys():\n get_config_value(key)\n\n # custom styling\n main_css = get_config_value('ckan.main_css', '/base/css/main.css')\n set_main_css(main_css)\n\n if app_globals.site_logo:\n app_globals.header_class = 'header-image'\n elif not app_globals.site_description:\n app_globals.header_class = 'header-text-logo'\n else:\n app_globals.header_class = 'header-text-logo-tagline'\n\n\nclass _Globals(object):\n\n ''' Globals acts as a container for objects available throughout the\n life of the application. 
'''\n\n def __init__(self):\n '''One instance of Globals is created during application\n initialization and is available during requests via the\n 'app_globals' variable\n '''\n self._init()\n self._config_update = None\n self._mutex = Lock()\n\n def _check_uptodate(self):\n ''' check the config is uptodate needed when several instances are\n running '''\n value = model.get_system_info('ckan.config_update')\n if self._config_update != value:\n if self._mutex.acquire(False):\n reset()\n self._config_update = value\n self._mutex.release()\n\n def _init(self):\n\n self.ckan_version = ckan.__version__\n self.ckan_base_version = re.sub('[^0-9\\.]', '', self.ckan_version)\n if self.ckan_base_version == self.ckan_version:\n self.ckan_doc_version = 'ckan-{0}'.format(self.ckan_version)\n else:\n self.ckan_doc_version = 'latest'\n\n # process the config details to set globals\n for key in app_globals_from_config_details.keys():\n new_key, value = process_app_global(key, config.get(key) or '')\n setattr(self, new_key, value)\n\n\napp_globals = _Globals()\ndel _Globals\n", "path": "ckan/lib/app_globals.py"}], "after_files": [{"content": "# encoding: utf-8\n\nfrom ckan.common import config\n\nimport urllib\nimport urllib2\nimport json\n\ndef check_recaptcha(request):\n '''Check a user\\'s recaptcha submission is valid, and raise CaptchaError\n on failure.'''\n recaptcha_private_key = config.get('ckan.recaptcha.privatekey', '')\n if not recaptcha_private_key:\n # Recaptcha not enabled\n return\n\n client_ip_address = request.environ.get('REMOTE_ADDR', 'Unknown IP Address')\n\n # reCAPTCHA v2\n recaptcha_response_field = request.params.get('g-recaptcha-response', '')\n recaptcha_server_name = 'https://www.google.com/recaptcha/api/siteverify'\n\n # recaptcha_response_field will be unicode if there are foreign chars in\n # the user input. 
So we need to encode it as utf8 before urlencoding or\n # we get an exception (#1431).\n params = urllib.urlencode(dict(secret=recaptcha_private_key,\n remoteip=client_ip_address,\n response=recaptcha_response_field.encode('utf8')))\n f = urllib2.urlopen(recaptcha_server_name, params)\n data = json.load(f)\n f.close()\n\n try:\n if not data['success']:\n raise CaptchaError()\n except IndexError:\n # Something weird with recaptcha response\n raise CaptchaError()\n\nclass CaptchaError(ValueError):\n pass\n", "path": "ckan/lib/captcha.py"}, {"content": "# encoding: utf-8\n\n''' The application's Globals object '''\n\nimport logging\nimport time\nfrom threading import Lock\nimport re\n\nfrom paste.deploy.converters import asbool\nfrom ckan.common import config\n\nimport ckan\nimport ckan.model as model\nimport ckan.logic as logic\nfrom logic.schema import update_configuration_schema\n\n\nlog = logging.getLogger(__name__)\n\n\n# mappings translate between config settings and globals because our naming\n# conventions are not well defined and/or implemented\nmappings = {\n# 'config_key': 'globals_key',\n}\n\n\n# This mapping is only used to define the configuration options (from the\n# `config` object) that should be copied to the `app_globals` (`g`) object.\napp_globals_from_config_details = {\n 'ckan.site_title': {},\n 'ckan.site_logo': {},\n 'ckan.site_url': {},\n 'ckan.site_description': {},\n 'ckan.site_about': {},\n 'ckan.site_intro_text': {},\n 'ckan.site_custom_css': {},\n 'ckan.favicon': {}, # default gets set in config.environment.py\n 'ckan.template_head_end': {},\n 'ckan.template_footer_end': {},\n # has been setup in load_environment():\n 'ckan.site_id': {},\n 'ckan.recaptcha.publickey': {'name': 'recaptcha_publickey'},\n 'ckan.template_title_deliminater': {'default': '-'},\n 'ckan.template_head_end': {},\n 'ckan.template_footer_end': {},\n 'ckan.dumps_url': {},\n 'ckan.dumps_format': {},\n 'ofs.impl': {'name': 'ofs_impl'},\n 'ckan.homepage_style': {'default': '1'},\n\n # split string\n 'search.facets': {'default': 'organization groups tags res_format license_id',\n 'type': 'split',\n 'name': 'facets'},\n 'package_hide_extras': {'type': 'split'},\n 'ckan.plugins': {'type': 'split'},\n\n # bool\n 'debug': {'default': 'false', 'type' : 'bool'},\n 'ckan.debug_supress_header' : {'default': 'false', 'type' : 'bool'},\n 'ckan.legacy_templates' : {'default': 'false', 'type' : 'bool'},\n 'ckan.tracking_enabled' : {'default': 'false', 'type' : 'bool'},\n\n # int\n 'ckan.datasets_per_page': {'default': '20', 'type': 'int'},\n 'ckan.activity_list_limit': {'default': '30', 'type': 'int'},\n 'ckan.user_list_limit': {'default': '20', 'type': 'int'},\n 'search.facets.default': {'default': '10', 'type': 'int',\n 'name': 'facets_default_number'},\n}\n\n\n# A place to store the origional config options of we override them\n_CONFIG_CACHE = {}\n\ndef set_main_css(css_file):\n ''' Sets the main_css. 
The css_file must be of the form file.css '''\n assert css_file.endswith('.css')\n new_css = css_file\n # FIXME we should check the css file exists\n app_globals.main_css = str(new_css)\n\n\ndef set_app_global(key, value):\n '''\n Set a new key on the app_globals (g) object\n\n It will process the value according to the options on\n app_globals_from_config_details (if any)\n '''\n key, value = process_app_global(key, value)\n setattr(app_globals, key, value)\n\n\ndef process_app_global(key, value):\n '''\n Tweak a key, value pair meant to be set on the app_globals (g) object\n\n According to the options on app_globals_from_config_details (if any)\n '''\n options = app_globals_from_config_details.get(key)\n key = get_globals_key(key)\n if options:\n if 'name' in options:\n key = options['name']\n value = value or options.get('default', '')\n\n data_type = options.get('type')\n if data_type == 'bool':\n value = asbool(value)\n elif data_type == 'int':\n value = int(value)\n elif data_type == 'split':\n value = value.split()\n\n return key, value\n\n\ndef get_globals_key(key):\n # create our globals key\n # these can be specified in mappings or else we remove\n # the `ckan.` part this is to keep the existing namings\n # set the value\n if key in mappings:\n return mappings[key]\n elif key.startswith('ckan.'):\n return key[5:]\n else:\n return key\n\n\ndef reset():\n ''' set updatable values from config '''\n def get_config_value(key, default=''):\n if model.meta.engine.has_table('system_info'):\n value = model.get_system_info(key)\n else:\n value = None\n config_value = config.get(key)\n # sort encodeings if needed\n if isinstance(config_value, str):\n try:\n config_value = config_value.decode('utf-8')\n except UnicodeDecodeError:\n config_value = config_value.decode('latin-1')\n # we want to store the config the first time we get here so we can\n # reset them if needed\n if key not in _CONFIG_CACHE:\n _CONFIG_CACHE[key] = config_value\n if value is not None:\n log.debug('config `%s` set to `%s` from db' % (key, value))\n else:\n value = _CONFIG_CACHE[key]\n if value:\n log.debug('config `%s` set to `%s` from config' % (key, value))\n else:\n value = default\n\n set_app_global(key, value)\n\n # update the config\n config[key] = value\n\n return value\n\n # update the config settings in auto update\n schema = update_configuration_schema()\n for key in schema.keys():\n get_config_value(key)\n\n # custom styling\n main_css = get_config_value('ckan.main_css', '/base/css/main.css')\n set_main_css(main_css)\n\n if app_globals.site_logo:\n app_globals.header_class = 'header-image'\n elif not app_globals.site_description:\n app_globals.header_class = 'header-text-logo'\n else:\n app_globals.header_class = 'header-text-logo-tagline'\n\n\nclass _Globals(object):\n\n ''' Globals acts as a container for objects available throughout the\n life of the application. 
'''\n\n def __init__(self):\n '''One instance of Globals is created during application\n initialization and is available during requests via the\n 'app_globals' variable\n '''\n self._init()\n self._config_update = None\n self._mutex = Lock()\n\n def _check_uptodate(self):\n ''' check the config is uptodate needed when several instances are\n running '''\n value = model.get_system_info('ckan.config_update')\n if self._config_update != value:\n if self._mutex.acquire(False):\n reset()\n self._config_update = value\n self._mutex.release()\n\n def _init(self):\n\n self.ckan_version = ckan.__version__\n self.ckan_base_version = re.sub('[^0-9\\.]', '', self.ckan_version)\n if self.ckan_base_version == self.ckan_version:\n self.ckan_doc_version = 'ckan-{0}'.format(self.ckan_version)\n else:\n self.ckan_doc_version = 'latest'\n\n # process the config details to set globals\n for key in app_globals_from_config_details.keys():\n new_key, value = process_app_global(key, config.get(key) or '')\n setattr(self, new_key, value)\n\n\napp_globals = _Globals()\ndel _Globals\n", "path": "ckan/lib/app_globals.py"}]} | 3,559 | 887 |
gh_patches_debug_48420 | rasdani/github-patches | git_diff | gammapy__gammapy-1622 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cryptic error from MapMaker / make_counts_image
I accidentally typed this:
```python
import astropy.units as u
from gammapy.maps import WcsGeom
from gammapy.cube import MapMaker
from gammapy.data import DataStore
data_store = DataStore.from_dir('$GAMMAPY_EXTRA/datasets/cta-1dc/index/gps/')
obs_id = [110380, 111140, 111159]
obs_list = data_store.obs_list(obs_id)
geom = WcsGeom.create(
skydir=(0, 0),
npix=(800, 600),
binsz=0.02,
coordsys='GAL',
)
maker = MapMaker(geom, offset_max=u.Quantity('2 deg'))
images = maker.run(obs_list)
```
and it blows up with a cryptic error message:
```
$ python temp.py
|===========================================>--------------------------------------------------------------------------------------| 1 / 3 (33.33%) ETA 0sTraceback (most recent call last):
File "temp.py", line 15, in <module>
images = maker.run(obs_list)
File "/Users/deil/work/code/gammapy/gammapy/cube/new.py", line 324, in run
self.process_obs(obs)
File "/Users/deil/work/code/gammapy/gammapy/cube/new.py", line 280, in process_obs
obs.events, cutout_geom, obs.pointing_radec, self.offset_max,
File "/Users/deil/work/code/gammapy/gammapy/cube/new.py", line 79, in make_map_counts
counts_map.data[:, offset_mask] = 0
IndexError: too many indices for array
```
The problem is in `make_map_counts` here:
https://github.com/gammapy/gammapy/blob/a013ff8ac532ab8b15cee95c5da2abb8937bde9c/gammapy/cube/new.py#L79
It doesn't work for 2D images.
There are other obvious issues one encounters when making maps, e.g. replacing `offset_max=u.Quantity('2 deg')` with `offset_max='2 deg'` above gives another cryptic error, because the mapmaker just does `self.offset_max = offset_max` but should do `self.offset_max = Angle(offset_max)` to be kind to users.
The solution is to rewrite the functions in `new.py` to take a mask instead of a max offset, and to improve their test coverage, e.g. also trying to run them on a 2D geom (and either succeed, or error out with a good error message).
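For illustration, the kind of early validation and input coercion being asked for looks roughly like this (a sketch of the direction, not the final patch):

```python
from astropy.coordinates import Angle

class MapMaker(object):
    def __init__(self, geom, offset_max, cutout_mode="trim"):
        # Fail early with a clear message for 2D geometries instead of the
        # cryptic "too many indices for array" much later in make_map_counts.
        if geom.is_image:
            raise ValueError('MapMaker only works with a geom that has an energy axis')
        self.geom = geom
        # Accept strings like '2 deg' as well as Quantity/Angle objects.
        self.offset_max = Angle(offset_max)
        self.cutout_mode = cutout_mode
```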
I consider this high priority, we should do that tomorrow.
@registerrier - you or me?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gammapy/cube/make.py`
Content:
```
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2 from __future__ import absolute_import, division, print_function, unicode_literals
3 import logging
4 from astropy.utils.console import ProgressBar
5 from astropy.nddata.utils import PartialOverlapError
6 from astropy.coordinates import Angle
7 from ..maps import WcsNDMap
8 from .counts import make_map_counts
9 from .exposure import make_map_exposure_true_energy
10 from .background import make_map_background_irf, make_map_background_fov
11
12 __all__ = [
13 'MapMaker',
14 ]
15
16 log = logging.getLogger(__name__)
17
18
19 class MapMaker(object):
20 """Make all basic maps from observations.
21
22 Parameters
23 ----------
24 geom : `~gammapy.maps.WcsGeom`
25 Reference image geometry
26 offset_max : `~astropy.coordinates.Angle`
27 Maximum offset angle
28 cutout_mode : {'trim', 'strict'}, optional
29 Options for making cutouts, see :func: `~gammapy.maps.WcsNDMap.make_cutout`
30 Should be left to the default value 'trim'
31 unless you want only fully contained observations to be added to the map
32 """
33
34 def __init__(self, geom, offset_max, cutout_mode="trim"):
35 self.geom = geom
36 self.offset_max = Angle(offset_max)
37
38 # We instantiate the end products of the MakeMaps class
39 self.counts_map = WcsNDMap(self.geom)
40
41 self.exposure_map = WcsNDMap(self.geom, unit="m2 s")
42
43 self.background_map = WcsNDMap(self.geom)
44
45 # We will need this general exclusion mask for the analysis
46 self.exclusion_map = WcsNDMap(self.geom)
47 self.exclusion_map.data += 1
48
49 self.cutout_mode = cutout_mode
50 self.maps = {}
51
52 def process_obs(self, obs):
53 """Process one observation and add it to the cutout image
54
55 Parameters
56 ----------
57 obs : `~gammapy.data.DataStoreObservation`
58 Observation
59 """
60 # First make cutout of the global image
61 try:
62 exclusion_mask_cutout, cutout_slices = self.exclusion_map.make_cutout(
63 obs.pointing_radec, 2 * self.offset_max, mode=self.cutout_mode
64 )
65 except PartialOverlapError:
66 # TODO: can we silently do the right thing here? Discuss
67 log.info("Observation {} not fully contained in target image. Skipping it.".format(obs.obs_id))
68 return
69
70 cutout_geom = exclusion_mask_cutout.geom
71
72 offset = exclusion_mask_cutout.geom.separation(obs.pointing_radec)
73 offset_mask = offset >= self.offset_max
74
75 counts_obs_map = make_map_counts(obs.events, cutout_geom)
76 counts_obs_map.data[:, offset_mask] = 0
77
78 expo_obs_map = make_map_exposure_true_energy(
79 obs.pointing_radec, obs.observation_live_time_duration,
80 obs.aeff, cutout_geom
81 )
82 expo_obs_map.data[:, offset_mask] = 0
83
84 acceptance_obs_map = make_map_background_irf(
85 obs.pointing_radec, obs.observation_live_time_duration,
86 obs.bkg, cutout_geom
87 )
88 acceptance_obs_map.data[:, offset_mask] = 0
89
90 background_obs_map = make_map_background_fov(
91 acceptance_obs_map, counts_obs_map, exclusion_mask_cutout,
92 )
93 background_obs_map.data[:, offset_mask] = 0
94
95 self._add_cutouts(cutout_slices, counts_obs_map, expo_obs_map, background_obs_map)
96
97 def _add_cutouts(self, cutout_slices, counts_obs_map, expo_obs_map, acceptance_obs_map):
98 """Add current cutout to global maps."""
99 self.counts_map.data[cutout_slices] += counts_obs_map.data
100 self.exposure_map.data[cutout_slices] += expo_obs_map.quantity.to(self.exposure_map.unit).value
101 self.background_map.data[cutout_slices] += acceptance_obs_map.data
102
103 def run(self, obs_list):
104 """
105 Run MapMaker for a list of observations to create
106 stacked counts, exposure and background maps
107
108 Parameters
109 --------------
110 obs_list: `~gammapy.data.ObservationList`
111 List of observations
112
113 Returns
114 -----------
115 maps: dict of stacked counts, background and exposure maps.
116 """
117 for obs in ProgressBar(obs_list):
118 self.process_obs(obs)
119
120 self.maps = {
121 'counts_map': self.counts_map,
122 'background_map': self.background_map,
123 'exposure_map': self.exposure_map
124 }
125 return self.maps
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gammapy/cube/make.py b/gammapy/cube/make.py
--- a/gammapy/cube/make.py
+++ b/gammapy/cube/make.py
@@ -32,6 +32,9 @@
"""
def __init__(self, geom, offset_max, cutout_mode="trim"):
+ if geom.is_image:
+ raise ValueError('MapMaker only works with geom with an energy axis')
+
self.geom = geom
self.offset_max = Angle(offset_max)
| {"golden_diff": "diff --git a/gammapy/cube/make.py b/gammapy/cube/make.py\n--- a/gammapy/cube/make.py\n+++ b/gammapy/cube/make.py\n@@ -32,6 +32,9 @@\n \"\"\"\n \n def __init__(self, geom, offset_max, cutout_mode=\"trim\"):\n+ if geom.is_image:\n+ raise ValueError('MapMaker only works with geom with an energy axis')\n+\n self.geom = geom\n self.offset_max = Angle(offset_max)\n", "issue": "Cryptic error from MapMaker / make_counts_image\nI accidentally typed this:\r\n```python\r\nimport astropy.units as u\r\nfrom gammapy.maps import WcsGeom\r\nfrom gammapy.cube import MapMaker\r\nfrom gammapy.data import DataStore\r\ndata_store = DataStore.from_dir('$GAMMAPY_EXTRA/datasets/cta-1dc/index/gps/')\r\nobs_id = [110380, 111140, 111159]\r\nobs_list = data_store.obs_list(obs_id)\r\ngeom = WcsGeom.create(\r\n skydir=(0, 0),\r\n npix=(800, 600),\r\n binsz=0.02,\r\n coordsys='GAL',\r\n)\r\nmaker = MapMaker(geom, offset_max=u.Quantity('2 deg'))\r\nimages = maker.run(obs_list)\r\n```\r\nand it blows up with a cryptic error message:\r\n```\r\n$ python temp.py \r\n|===========================================>--------------------------------------------------------------------------------------| 1 / 3 (33.33%) ETA 0sTraceback (most recent call last):\r\n File \"temp.py\", line 15, in <module>\r\n images = maker.run(obs_list)\r\n File \"/Users/deil/work/code/gammapy/gammapy/cube/new.py\", line 324, in run\r\n self.process_obs(obs)\r\n File \"/Users/deil/work/code/gammapy/gammapy/cube/new.py\", line 280, in process_obs\r\n obs.events, cutout_geom, obs.pointing_radec, self.offset_max,\r\n File \"/Users/deil/work/code/gammapy/gammapy/cube/new.py\", line 79, in make_map_counts\r\n counts_map.data[:, offset_mask] = 0\r\nIndexError: too many indices for array\r\n```\r\n\r\nThe problem is in `make_map_counts` here:\r\nhttps://github.com/gammapy/gammapy/blob/a013ff8ac532ab8b15cee95c5da2abb8937bde9c/gammapy/cube/new.py#L79\r\n\r\nIt doesn't work for 2D images.\r\n\r\nThere's other obvious issues one encounters when making maps, e.g. replacing `offset_max=u.Quantity('2 deg')` with `offset_max='2 deg'` above gives another cryptic error, because the mapmaker just does `self.offset_max = offset_max` but should do `self.offset_max = Angle(offset_max)` to be kind to users.\r\n\r\nThe solution is to rewrite the functions in `new.py` to take a mask instead of a max offset, and to improve their test coverage, e.g. 
also trying to run them on a 2D geom (and either succeed, or error out with a good error message).\r\n\r\nI consider this high priority, we should do that tomorrow.\r\n\r\n@registerrier - you or me?\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\nfrom __future__ import absolute_import, division, print_function, unicode_literals\nimport logging\nfrom astropy.utils.console import ProgressBar\nfrom astropy.nddata.utils import PartialOverlapError\nfrom astropy.coordinates import Angle\nfrom ..maps import WcsNDMap\nfrom .counts import make_map_counts\nfrom .exposure import make_map_exposure_true_energy\nfrom .background import make_map_background_irf, make_map_background_fov\n\n__all__ = [\n 'MapMaker',\n]\n\nlog = logging.getLogger(__name__)\n\n\nclass MapMaker(object):\n \"\"\"Make all basic maps from observations.\n\n Parameters\n ----------\n geom : `~gammapy.maps.WcsGeom`\n Reference image geometry\n offset_max : `~astropy.coordinates.Angle`\n Maximum offset angle\n cutout_mode : {'trim', 'strict'}, optional\n Options for making cutouts, see :func: `~gammapy.maps.WcsNDMap.make_cutout`\n Should be left to the default value 'trim'\n unless you want only fully contained observations to be added to the map\n \"\"\"\n\n def __init__(self, geom, offset_max, cutout_mode=\"trim\"):\n self.geom = geom\n self.offset_max = Angle(offset_max)\n\n # We instantiate the end products of the MakeMaps class\n self.counts_map = WcsNDMap(self.geom)\n\n self.exposure_map = WcsNDMap(self.geom, unit=\"m2 s\")\n\n self.background_map = WcsNDMap(self.geom)\n\n # We will need this general exclusion mask for the analysis\n self.exclusion_map = WcsNDMap(self.geom)\n self.exclusion_map.data += 1\n\n self.cutout_mode = cutout_mode\n self.maps = {}\n\n def process_obs(self, obs):\n \"\"\"Process one observation and add it to the cutout image\n\n Parameters\n ----------\n obs : `~gammapy.data.DataStoreObservation`\n Observation\n \"\"\"\n # First make cutout of the global image\n try:\n exclusion_mask_cutout, cutout_slices = self.exclusion_map.make_cutout(\n obs.pointing_radec, 2 * self.offset_max, mode=self.cutout_mode\n )\n except PartialOverlapError:\n # TODO: can we silently do the right thing here? Discuss\n log.info(\"Observation {} not fully contained in target image. 
Skipping it.\".format(obs.obs_id))\n return\n\n cutout_geom = exclusion_mask_cutout.geom\n\n offset = exclusion_mask_cutout.geom.separation(obs.pointing_radec)\n offset_mask = offset >= self.offset_max\n\n counts_obs_map = make_map_counts(obs.events, cutout_geom)\n counts_obs_map.data[:, offset_mask] = 0\n\n expo_obs_map = make_map_exposure_true_energy(\n obs.pointing_radec, obs.observation_live_time_duration,\n obs.aeff, cutout_geom\n )\n expo_obs_map.data[:, offset_mask] = 0\n\n acceptance_obs_map = make_map_background_irf(\n obs.pointing_radec, obs.observation_live_time_duration,\n obs.bkg, cutout_geom\n )\n acceptance_obs_map.data[:, offset_mask] = 0\n\n background_obs_map = make_map_background_fov(\n acceptance_obs_map, counts_obs_map, exclusion_mask_cutout,\n )\n background_obs_map.data[:, offset_mask] = 0\n\n self._add_cutouts(cutout_slices, counts_obs_map, expo_obs_map, background_obs_map)\n\n def _add_cutouts(self, cutout_slices, counts_obs_map, expo_obs_map, acceptance_obs_map):\n \"\"\"Add current cutout to global maps.\"\"\"\n self.counts_map.data[cutout_slices] += counts_obs_map.data\n self.exposure_map.data[cutout_slices] += expo_obs_map.quantity.to(self.exposure_map.unit).value\n self.background_map.data[cutout_slices] += acceptance_obs_map.data\n\n def run(self, obs_list):\n \"\"\"\n Run MapMaker for a list of observations to create\n stacked counts, exposure and background maps\n\n Parameters\n --------------\n obs_list: `~gammapy.data.ObservationList`\n List of observations\n\n Returns\n -----------\n maps: dict of stacked counts, background and exposure maps.\n \"\"\"\n for obs in ProgressBar(obs_list):\n self.process_obs(obs)\n\n self.maps = {\n 'counts_map': self.counts_map,\n 'background_map': self.background_map,\n 'exposure_map': self.exposure_map\n }\n return self.maps\n", "path": "gammapy/cube/make.py"}], "after_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\nfrom __future__ import absolute_import, division, print_function, unicode_literals\nimport logging\nfrom astropy.utils.console import ProgressBar\nfrom astropy.nddata.utils import PartialOverlapError\nfrom astropy.coordinates import Angle\nfrom ..maps import WcsNDMap\nfrom .counts import make_map_counts\nfrom .exposure import make_map_exposure_true_energy\nfrom .background import make_map_background_irf, make_map_background_fov\n\n__all__ = [\n 'MapMaker',\n]\n\nlog = logging.getLogger(__name__)\n\n\nclass MapMaker(object):\n \"\"\"Make all basic maps from observations.\n\n Parameters\n ----------\n geom : `~gammapy.maps.WcsGeom`\n Reference image geometry\n offset_max : `~astropy.coordinates.Angle`\n Maximum offset angle\n cutout_mode : {'trim', 'strict'}, optional\n Options for making cutouts, see :func: `~gammapy.maps.WcsNDMap.make_cutout`\n Should be left to the default value 'trim'\n unless you want only fully contained observations to be added to the map\n \"\"\"\n\n def __init__(self, geom, offset_max, cutout_mode=\"trim\"):\n if geom.is_image:\n raise ValueError('MapMaker only works with geom with an energy axis')\n\n self.geom = geom\n self.offset_max = Angle(offset_max)\n\n # We instantiate the end products of the MakeMaps class\n self.counts_map = WcsNDMap(self.geom)\n\n self.exposure_map = WcsNDMap(self.geom, unit=\"m2 s\")\n\n self.background_map = WcsNDMap(self.geom)\n\n # We will need this general exclusion mask for the analysis\n self.exclusion_map = WcsNDMap(self.geom)\n self.exclusion_map.data += 1\n\n self.cutout_mode = cutout_mode\n self.maps = {}\n\n 
def process_obs(self, obs):\n \"\"\"Process one observation and add it to the cutout image\n\n Parameters\n ----------\n obs : `~gammapy.data.DataStoreObservation`\n Observation\n \"\"\"\n # First make cutout of the global image\n try:\n exclusion_mask_cutout, cutout_slices = self.exclusion_map.make_cutout(\n obs.pointing_radec, 2 * self.offset_max, mode=self.cutout_mode\n )\n except PartialOverlapError:\n # TODO: can we silently do the right thing here? Discuss\n log.info(\"Observation {} not fully contained in target image. Skipping it.\".format(obs.obs_id))\n return\n\n cutout_geom = exclusion_mask_cutout.geom\n\n offset = exclusion_mask_cutout.geom.separation(obs.pointing_radec)\n offset_mask = offset >= self.offset_max\n\n counts_obs_map = make_map_counts(obs.events, cutout_geom)\n counts_obs_map.data[:, offset_mask] = 0\n\n expo_obs_map = make_map_exposure_true_energy(\n obs.pointing_radec, obs.observation_live_time_duration,\n obs.aeff, cutout_geom\n )\n expo_obs_map.data[:, offset_mask] = 0\n\n acceptance_obs_map = make_map_background_irf(\n obs.pointing_radec, obs.observation_live_time_duration,\n obs.bkg, cutout_geom\n )\n acceptance_obs_map.data[:, offset_mask] = 0\n\n background_obs_map = make_map_background_fov(\n acceptance_obs_map, counts_obs_map, exclusion_mask_cutout,\n )\n background_obs_map.data[:, offset_mask] = 0\n\n self._add_cutouts(cutout_slices, counts_obs_map, expo_obs_map, background_obs_map)\n\n def _add_cutouts(self, cutout_slices, counts_obs_map, expo_obs_map, acceptance_obs_map):\n \"\"\"Add current cutout to global maps.\"\"\"\n self.counts_map.data[cutout_slices] += counts_obs_map.data\n self.exposure_map.data[cutout_slices] += expo_obs_map.quantity.to(self.exposure_map.unit).value\n self.background_map.data[cutout_slices] += acceptance_obs_map.data\n\n def run(self, obs_list):\n \"\"\"\n Run MapMaker for a list of observations to create\n stacked counts, exposure and background maps\n\n Parameters\n --------------\n obs_list: `~gammapy.data.ObservationList`\n List of observations\n\n Returns\n -----------\n maps: dict of stacked counts, background and exposure maps.\n \"\"\"\n for obs in ProgressBar(obs_list):\n self.process_obs(obs)\n\n self.maps = {\n 'counts_map': self.counts_map,\n 'background_map': self.background_map,\n 'exposure_map': self.exposure_map\n }\n return self.maps\n", "path": "gammapy/cube/make.py"}]} | 2,171 | 120 |
gh_patches_debug_32068 | rasdani/github-patches | git_diff | conan-io__conan-center-index-16242 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] libudev/system: Fails build for conan 2.0
### Description
libudev/system fails to download or build with conan 2.0 installed. It needs an update to use the conan 2.0 tools API, as it is currently dependent on conan 1.x code.
### Package and Environment Details
* Package Name/Version: **libudev/system**
* Operating System+version: **Linux Ubuntu 20.04**
### Conan profile
[settings]
arch=x86_64
build_type=Release
compiler=gcc
compiler.cppstd=gnu17
compiler.libcxx=libstdc++11
compiler.version=9
os=Linux
### Steps to reproduce
conan download -r conancenter libudev/system@
### Logs
ERROR: Error loading conanfile at '/home/tbitz/.conan2/p/libudadcb0d08572c6/e/conanfile.py': Unable to load conanfile in /home/tbitz/.conan2/p/libudadcb0d08572c6/e/conanfile.py
File "<frozen importlib._bootstrap_external>", line 848, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/tbitz/.conan2/p/libudadcb0d08572c6/e/conanfile.py", line 4, in <module>
from conans import tools
ImportError: cannot import name 'tools' from 'conans' (/home/tbitz/.local/lib/python3.8/site-packages/conans/__init__.py)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/libudev/all/conanfile.py`
Content:
```
1 from conan import ConanFile
2 from conan.errors import ConanException, ConanInvalidConfiguration
3 from conan.tools.system import package_manager
4 from conans import tools
5
6 required_conan_version = ">=1.47"
7
8
9 class LibUDEVConan(ConanFile):
10 name = "libudev"
11 version = "system"
12 description = "API for enumerating and introspecting local devices"
13 topics = ("udev", "devices", "enumerating")
14 url = "https://github.com/conan-io/conan-center-index"
15 homepage = "https://www.freedesktop.org/software/systemd/man/udev.html"
16 license = "GPL-2.0-or-later", "LGPL-2.1-or-later"
17 settings = "os", "arch", "compiler", "build_type"
18
19 def validate(self):
20 if self.settings.os != "Linux":
21 raise ConanInvalidConfiguration("libudev is only supported on Linux.")
22
23 def package_id(self):
24 self.info.header_only()
25
26 def _fill_cppinfo_from_pkgconfig(self, name):
27 pkg_config = tools.PkgConfig(name)
28 if not pkg_config.provides:
29 raise ConanException("libudev development files aren't available, give up")
30 libs = [lib[2:] for lib in pkg_config.libs_only_l]
31 lib_dirs = [lib[2:] for lib in pkg_config.libs_only_L]
32 ldflags = [flag for flag in pkg_config.libs_only_other]
33 include_dirs = [include[2:] for include in pkg_config.cflags_only_I]
34 cflags = [flag for flag in pkg_config.cflags_only_other if not flag.startswith("-D")]
35 defines = [flag[2:] for flag in pkg_config.cflags_only_other if flag.startswith("-D")]
36
37 self.cpp_info.system_libs = libs
38 self.cpp_info.libdirs = lib_dirs
39 self.cpp_info.sharedlinkflags = ldflags
40 self.cpp_info.exelinkflags = ldflags
41 self.cpp_info.defines = defines
42 self.cpp_info.includedirs = include_dirs
43 self.cpp_info.cflags = cflags
44 self.cpp_info.cxxflags = cflags
45
46 def system_requirements(self):
47 dnf = package_manager.Dnf(self)
48 dnf.install(["systemd-devel"], update=True, check=True)
49
50 yum = package_manager.Yum(self)
51 yum.install(["systemd-devel"], update=True, check=True)
52
53 apt = package_manager.Apt(self)
54 apt.install(["libudev-dev"], update=True, check=True)
55
56 pacman = package_manager.PacMan(self)
57 pacman.install(["systemd-libs"], update=True, check=True)
58
59 zypper = package_manager.Zypper(self)
60 zypper.install(["libudev-devel"], update=True, check=True)
61
62 def package_info(self):
63 self.cpp_info.includedirs = []
64 self.cpp_info.libdirs = []
65 self._fill_cppinfo_from_pkgconfig("libudev")
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/libudev/all/conanfile.py b/recipes/libudev/all/conanfile.py
--- a/recipes/libudev/all/conanfile.py
+++ b/recipes/libudev/all/conanfile.py
@@ -1,7 +1,7 @@
from conan import ConanFile
-from conan.errors import ConanException, ConanInvalidConfiguration
+from conan.errors import ConanInvalidConfiguration
from conan.tools.system import package_manager
-from conans import tools
+from conan.tools.gnu import PkgConfig
required_conan_version = ">=1.47"
@@ -21,27 +21,7 @@
raise ConanInvalidConfiguration("libudev is only supported on Linux.")
def package_id(self):
- self.info.header_only()
-
- def _fill_cppinfo_from_pkgconfig(self, name):
- pkg_config = tools.PkgConfig(name)
- if not pkg_config.provides:
- raise ConanException("libudev development files aren't available, give up")
- libs = [lib[2:] for lib in pkg_config.libs_only_l]
- lib_dirs = [lib[2:] for lib in pkg_config.libs_only_L]
- ldflags = [flag for flag in pkg_config.libs_only_other]
- include_dirs = [include[2:] for include in pkg_config.cflags_only_I]
- cflags = [flag for flag in pkg_config.cflags_only_other if not flag.startswith("-D")]
- defines = [flag[2:] for flag in pkg_config.cflags_only_other if flag.startswith("-D")]
-
- self.cpp_info.system_libs = libs
- self.cpp_info.libdirs = lib_dirs
- self.cpp_info.sharedlinkflags = ldflags
- self.cpp_info.exelinkflags = ldflags
- self.cpp_info.defines = defines
- self.cpp_info.includedirs = include_dirs
- self.cpp_info.cflags = cflags
- self.cpp_info.cxxflags = cflags
+ self.info.clear()
def system_requirements(self):
dnf = package_manager.Dnf(self)
@@ -62,4 +42,5 @@
def package_info(self):
self.cpp_info.includedirs = []
self.cpp_info.libdirs = []
- self._fill_cppinfo_from_pkgconfig("libudev")
+ pkg_config = PkgConfig(self, "libudev")
+ pkg_config.fill_cpp_info(self.cpp_info)
| {"golden_diff": "diff --git a/recipes/libudev/all/conanfile.py b/recipes/libudev/all/conanfile.py\n--- a/recipes/libudev/all/conanfile.py\n+++ b/recipes/libudev/all/conanfile.py\n@@ -1,7 +1,7 @@\n from conan import ConanFile\n-from conan.errors import ConanException, ConanInvalidConfiguration\n+from conan.errors import ConanInvalidConfiguration\n from conan.tools.system import package_manager\n-from conans import tools\n+from conan.tools.gnu import PkgConfig\n \n required_conan_version = \">=1.47\"\n \n@@ -21,27 +21,7 @@\n raise ConanInvalidConfiguration(\"libudev is only supported on Linux.\")\n \n def package_id(self):\n- self.info.header_only()\n-\n- def _fill_cppinfo_from_pkgconfig(self, name):\n- pkg_config = tools.PkgConfig(name)\n- if not pkg_config.provides:\n- raise ConanException(\"libudev development files aren't available, give up\")\n- libs = [lib[2:] for lib in pkg_config.libs_only_l]\n- lib_dirs = [lib[2:] for lib in pkg_config.libs_only_L]\n- ldflags = [flag for flag in pkg_config.libs_only_other]\n- include_dirs = [include[2:] for include in pkg_config.cflags_only_I]\n- cflags = [flag for flag in pkg_config.cflags_only_other if not flag.startswith(\"-D\")]\n- defines = [flag[2:] for flag in pkg_config.cflags_only_other if flag.startswith(\"-D\")]\n-\n- self.cpp_info.system_libs = libs\n- self.cpp_info.libdirs = lib_dirs\n- self.cpp_info.sharedlinkflags = ldflags\n- self.cpp_info.exelinkflags = ldflags\n- self.cpp_info.defines = defines\n- self.cpp_info.includedirs = include_dirs\n- self.cpp_info.cflags = cflags\n- self.cpp_info.cxxflags = cflags\n+ self.info.clear()\n \n def system_requirements(self):\n dnf = package_manager.Dnf(self)\n@@ -62,4 +42,5 @@\n def package_info(self):\n self.cpp_info.includedirs = []\n self.cpp_info.libdirs = []\n- self._fill_cppinfo_from_pkgconfig(\"libudev\")\n+ pkg_config = PkgConfig(self, \"libudev\")\n+ pkg_config.fill_cpp_info(self.cpp_info)\n", "issue": "[package] libudev/system: Fails build for conan 2.0\n### Description\n\nlibudev/system fails to download or build with conan 2.0 installed. it needs an update to use conan 2.0 code for conan tools as it currently is dependent on conan 1.x code. 
\n\n### Package and Environment Details\n\n* Package Name/Version: **libudev/system**\r\n* Operating System+version: **Linux Ubuntu 20.04**\r\n\n\n### Conan profile\n\n[settings]\r\narch=x86_64\r\nbuild_type=Release\r\ncompiler=gcc\r\ncompiler.cppstd=gnu17\r\ncompiler.libcxx=libstdc++11\r\ncompiler.version=9\r\nos=Linux\r\n\n\n### Steps to reproduce\n\nconan download -r conancenter libudev/system@\n\n### Logs\n\nERROR: Error loading conanfile at '/home/tbitz/.conan2/p/libudadcb0d08572c6/e/conanfile.py': Unable to load conanfile in /home/tbitz/.conan2/p/libudadcb0d08572c6/e/conanfile.py\r\n File \"<frozen importlib._bootstrap_external>\", line 848, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/home/tbitz/.conan2/p/libudadcb0d08572c6/e/conanfile.py\", line 4, in <module>\r\n from conans import tools\r\nImportError: cannot import name 'tools' from 'conans' (/home/tbitz/.local/lib/python3.8/site-packages/conans/__init__.py)\r\n\r\n\n", "before_files": [{"content": "from conan import ConanFile\nfrom conan.errors import ConanException, ConanInvalidConfiguration\nfrom conan.tools.system import package_manager\nfrom conans import tools\n\nrequired_conan_version = \">=1.47\"\n\n\nclass LibUDEVConan(ConanFile):\n name = \"libudev\"\n version = \"system\"\n description = \"API for enumerating and introspecting local devices\"\n topics = (\"udev\", \"devices\", \"enumerating\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://www.freedesktop.org/software/systemd/man/udev.html\"\n license = \"GPL-2.0-or-later\", \"LGPL-2.1-or-later\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n\n def validate(self):\n if self.settings.os != \"Linux\":\n raise ConanInvalidConfiguration(\"libudev is only supported on Linux.\")\n\n def package_id(self):\n self.info.header_only()\n\n def _fill_cppinfo_from_pkgconfig(self, name):\n pkg_config = tools.PkgConfig(name)\n if not pkg_config.provides:\n raise ConanException(\"libudev development files aren't available, give up\")\n libs = [lib[2:] for lib in pkg_config.libs_only_l]\n lib_dirs = [lib[2:] for lib in pkg_config.libs_only_L]\n ldflags = [flag for flag in pkg_config.libs_only_other]\n include_dirs = [include[2:] for include in pkg_config.cflags_only_I]\n cflags = [flag for flag in pkg_config.cflags_only_other if not flag.startswith(\"-D\")]\n defines = [flag[2:] for flag in pkg_config.cflags_only_other if flag.startswith(\"-D\")]\n\n self.cpp_info.system_libs = libs\n self.cpp_info.libdirs = lib_dirs\n self.cpp_info.sharedlinkflags = ldflags\n self.cpp_info.exelinkflags = ldflags\n self.cpp_info.defines = defines\n self.cpp_info.includedirs = include_dirs\n self.cpp_info.cflags = cflags\n self.cpp_info.cxxflags = cflags\n\n def system_requirements(self):\n dnf = package_manager.Dnf(self)\n dnf.install([\"systemd-devel\"], update=True, check=True)\n\n yum = package_manager.Yum(self)\n yum.install([\"systemd-devel\"], update=True, check=True)\n\n apt = package_manager.Apt(self)\n apt.install([\"libudev-dev\"], update=True, check=True)\n\n pacman = package_manager.PacMan(self)\n pacman.install([\"systemd-libs\"], update=True, check=True)\n\n zypper = package_manager.Zypper(self)\n zypper.install([\"libudev-devel\"], update=True, check=True)\n\n def package_info(self):\n self.cpp_info.includedirs = []\n self.cpp_info.libdirs = []\n self._fill_cppinfo_from_pkgconfig(\"libudev\")\n", "path": "recipes/libudev/all/conanfile.py"}], "after_files": [{"content": "from 
conan import ConanFile\nfrom conan.errors import ConanInvalidConfiguration\nfrom conan.tools.system import package_manager\nfrom conan.tools.gnu import PkgConfig\n\nrequired_conan_version = \">=1.47\"\n\n\nclass LibUDEVConan(ConanFile):\n name = \"libudev\"\n version = \"system\"\n description = \"API for enumerating and introspecting local devices\"\n topics = (\"udev\", \"devices\", \"enumerating\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://www.freedesktop.org/software/systemd/man/udev.html\"\n license = \"GPL-2.0-or-later\", \"LGPL-2.1-or-later\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n\n def validate(self):\n if self.settings.os != \"Linux\":\n raise ConanInvalidConfiguration(\"libudev is only supported on Linux.\")\n\n def package_id(self):\n self.info.clear()\n\n def system_requirements(self):\n dnf = package_manager.Dnf(self)\n dnf.install([\"systemd-devel\"], update=True, check=True)\n\n yum = package_manager.Yum(self)\n yum.install([\"systemd-devel\"], update=True, check=True)\n\n apt = package_manager.Apt(self)\n apt.install([\"libudev-dev\"], update=True, check=True)\n\n pacman = package_manager.PacMan(self)\n pacman.install([\"systemd-libs\"], update=True, check=True)\n\n zypper = package_manager.Zypper(self)\n zypper.install([\"libudev-devel\"], update=True, check=True)\n\n def package_info(self):\n self.cpp_info.includedirs = []\n self.cpp_info.libdirs = []\n pkg_config = PkgConfig(self, \"libudev\")\n pkg_config.fill_cpp_info(self.cpp_info)\n", "path": "recipes/libudev/all/conanfile.py"}]} | 1,396 | 529 |
gh_patches_debug_29919 | rasdani/github-patches | git_diff | uccser__cs-unplugged-737 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support RTL/BiDi website layout
Currently, the HTML attribute `dir` is set to RTL when required, which gets us part of the way there. However, more changes are needed to truly mirror the layout. This essentially boils down to switching 'left' with 'right' in CSS rules and HTML classes in all but a few exceptional cases.
Some good suggestions for how to use `sass`/`scss` features to achieve this are included in this blog: http://matanich.com/2013/09/06/rtl-css-with-sass
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/config/settings/base.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Base Django settings for CS Unplugged project.
4
5 For more information on this file, see
6 https://docs.djangoproject.com/en/dev/topics/settings/
7
8 For the full list of settings and their values, see
9 https://docs.djangoproject.com/en/dev/ref/settings/
10 """
11
12 import environ
13 import os.path
14
15 # Add custom languages not provided by Django
16 import django.conf.locale
17 from django.conf import global_settings
18 from django.utils.translation import ugettext_lazy as _
19
20 # cs-unplugged/csunplugged/config/settings/base.py - 3 = csunplugged/
21 ROOT_DIR = environ.Path(__file__) - 3
22
23 # Load operating system environment variables and then prepare to use them
24 env = environ.Env()
25
26 # APP CONFIGURATION
27 # ----------------------------------------------------------------------------
28 DJANGO_APPS = [
29 # Default Django apps:
30 "django.contrib.auth",
31 "django.contrib.contenttypes",
32 "django.contrib.sessions",
33 "django.contrib.messages",
34 "django.contrib.staticfiles",
35 "django.contrib.postgres",
36
37 # Useful template tags
38 "django.contrib.humanize",
39
40 # Admin
41 "django.contrib.admin",
42 ]
43 THIRD_PARTY_APPS = [
44 "django_bootstrap_breadcrumbs",
45 "modeltranslation",
46 ]
47
48 # Apps specific for this project go here.
49 LOCAL_APPS = [
50 "general.apps.GeneralConfig",
51 "topics.apps.TopicsConfig",
52 "resources.apps.ResourcesConfig",
53 ]
54
55 # See: https://docs.djangoproject.com/en/dev/ref/settings/#installed-apps
56 INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS
57
58 # MIDDLEWARE CONFIGURATION
59 # ----------------------------------------------------------------------------
60 MIDDLEWARE = [
61 "django.middleware.security.SecurityMiddleware",
62 "django.contrib.sessions.middleware.SessionMiddleware",
63 "django.middleware.locale.LocaleMiddleware",
64 "django.middleware.common.CommonMiddleware",
65 "django.middleware.csrf.CsrfViewMiddleware",
66 "django.contrib.auth.middleware.AuthenticationMiddleware",
67 "django.contrib.messages.middleware.MessageMiddleware",
68 "django.middleware.clickjacking.XFrameOptionsMiddleware",
69 ]
70
71 # DEBUG
72 # ----------------------------------------------------------------------------
73 # See: https://docs.djangoproject.com/en/dev/ref/settings/#debug
74 DEBUG = env.bool("DJANGO_DEBUG", False)
75
76 # FIXTURE CONFIGURATION
77 # ----------------------------------------------------------------------------
78 # See: https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-FIXTURE_DIRS
79 FIXTURE_DIRS = (
80 str(ROOT_DIR.path("fixtures")),
81 )
82
83 # EMAIL CONFIGURATION
84 # -----------------------------------------------------------------------------
85 # EMAIL_BACKEND = env("DJANGO_EMAIL_BACKEND",
86 # default="django.core.mail.backends.smtp.EmailBackend")
87
88 # MANAGER CONFIGURATION
89 # ----------------------------------------------------------------------------
90 # See: https://docs.djangoproject.com/en/dev/ref/settings/#admins
91 # ADMINS = [
92 # ("University of Canterbury Computer Science Research Group",
93 # "[email protected]"),
94 # ]
95
96 # See: https://docs.djangoproject.com/en/dev/ref/settings/#managers
97 # MANAGERS = ADMINS
98
99 # GENERAL CONFIGURATION
100 # ----------------------------------------------------------------------------
101 # Local time zone for this installation. Choices can be found here:
102 # http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
103 # although not all choices may be available on all operating systems.
104 # In a Windows environment this must be set to your system time zone.
105 TIME_ZONE = "UTC"
106
107 # See: https://docs.djangoproject.com/en/dev/ref/settings/#language-code
108 LANGUAGE_CODE = "en"
109
110 INCONTEXT_L10N_PSEUDOLANGUAGE = "xx-lr"
111 INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI = "yy-rl"
112 INCONTEXT_L10N_PSEUDOLANGUAGES = (
113 INCONTEXT_L10N_PSEUDOLANGUAGE,
114 INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI
115 )
116
117 LANGUAGES = (
118 ("en", "English"),
119 )
120
121 if env.bool("INCLUDE_INCONTEXT_L10N", False):
122 EXTRA_LANGUAGES = [
123 (INCONTEXT_L10N_PSEUDOLANGUAGE, "Translation mode"),
124 (INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI, "Translation mode (Bi-directional)"),
125 ]
126
127 EXTRA_LANG_INFO = {
128 INCONTEXT_L10N_PSEUDOLANGUAGE: {
129 'bidi': False,
130 'code': INCONTEXT_L10N_PSEUDOLANGUAGE,
131 'name': "Translation mode",
132 'name_local': _("Translation mode"),
133 },
134 INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI: {
135 'bidi': True,
136 'code': INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI,
137 'name': "Translation mode (Bi-directional)",
138 'name_local': _("Translation mode (Bi-directional)"),
139 }
140 }
141
142 django.conf.locale.LANG_INFO.update(EXTRA_LANG_INFO)
143 # Add new languages to the list of all django languages
144 global_settings.LANGUAGES = global_settings.LANGUAGES + EXTRA_LANGUAGES
145 # Add new languages to the list of languages used for this project
146 LANGUAGES += tuple(EXTRA_LANGUAGES)
147
148
149 # See: https://docs.djangoproject.com/en/dev/ref/settings/#site-id
150 SITE_ID = 1
151
152 # See: https://docs.djangoproject.com/en/dev/ref/settings/#use-i18n
153 USE_I18N = True
154
155 # See: https://docs.djangoproject.com/en/dev/ref/settings/#use-l10n
156 USE_L10N = True
157
158 # See: https://docs.djangoproject.com/en/dev/ref/settings/#use-tz
159 USE_TZ = True
160
161 # See: https://docs.djangoproject.com/en/dev/ref/settings/#locale-paths
162 LOCALE_PATHS = ["locale"]
163
164 # TEMPLATE CONFIGURATION
165 # ----------------------------------------------------------------------------
166 # See: https://docs.djangoproject.com/en/dev/ref/settings/#templates
167 TEMPLATES = [
168 {
169 # See: https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-TEMPLATES-BACKEND
170 "BACKEND": "django.template.backends.django.DjangoTemplates",
171 # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-dirs
172 "DIRS": [
173 str(ROOT_DIR.path("templates")),
174 ],
175 "OPTIONS": {
176 # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-debug
177 "debug": DEBUG,
178 # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-loaders
179 # https://docs.djangoproject.com/en/dev/ref/templates/api/#loader-types
180 "loaders": [
181 "django.template.loaders.filesystem.Loader",
182 "django.template.loaders.app_directories.Loader",
183 ],
184 # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-context-processors
185 "context_processors": [
186 "django.template.context_processors.debug",
187 "django.template.context_processors.request",
188 "django.contrib.auth.context_processors.auth",
189 "django.template.context_processors.i18n",
190 "django.template.context_processors.media",
191 "django.template.context_processors.static",
192 "django.template.context_processors.tz",
193 "django.contrib.messages.context_processors.messages",
194 "config.context_processors.version_number.version_number",
195 "config.context_processors.deployed.deployed",
196 ],
197 "libraries": {
198 "render_html_field": "config.templatetags.render_html_field",
199 "translate_url": "config.templatetags.translate_url",
200 },
201 },
202 },
203 ]
204
205 # STATIC FILE CONFIGURATION
206 # ------------------------------------------------------------------------------
207 # See: https://docs.djangoproject.com/en/dev/ref/settings/#static-root
208 STATIC_ROOT = os.path.join(str(ROOT_DIR.path("staticfiles")), "")
209
210 # See: https://docs.djangoproject.com/en/dev/ref/settings/#static-url
211 BUILD_ROOT = os.path.join(str(ROOT_DIR.path("build")), "")
212 STATIC_URL = "/staticfiles/"
213
214 # See: https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#std:setting-STATICFILES_DIRS
215 STATICFILES_DIRS = [
216 BUILD_ROOT,
217 ]
218
219 # See: https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#staticfiles-finders
220 STATICFILES_FINDERS = [
221 "django.contrib.staticfiles.finders.FileSystemFinder",
222 "django.contrib.staticfiles.finders.AppDirectoriesFinder",
223 ]
224
225 # MEDIA CONFIGURATION
226 # ------------------------------------------------------------------------------
227 # See: https://docs.djangoproject.com/en/dev/ref/settings/#media-root
228 MEDIA_ROOT = str(ROOT_DIR("media"))
229
230 # See: https://docs.djangoproject.com/en/dev/ref/settings/#media-url
231 MEDIA_URL = "/media/"
232
233 # URL Configuration
234 # ------------------------------------------------------------------------------
235 ROOT_URLCONF = "config.urls"
236
237 # See: https://docs.djangoproject.com/en/dev/ref/settings/#wsgi-application
238 WSGI_APPLICATION = "config.wsgi.application"
239
240 # PASSWORD VALIDATION
241 # https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators
242 # ------------------------------------------------------------------------------
243
244 AUTH_PASSWORD_VALIDATORS = [
245 {
246 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
247 },
248 {
249 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
250 },
251 {
252 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
253 },
254 {
255 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
256 },
257 ]
258 # OTHER SETTINGS
259 # ------------------------------------------------------------------------------
260 DJANGO_PRODUCTION = env.bool("DJANGO_PRODUCTION")
261 TOPICS_CONTENT_BASE_PATH = os.path.join(str(ROOT_DIR.path("topics")), "content")
262 RESOURCES_CONTENT_BASE_PATH = os.path.join(str(ROOT_DIR.path("resources")), "content")
263 RESOURCE_GENERATION_LOCATION = os.path.join(str(ROOT_DIR.path("staticfiles")), "resources")
264 RESOURCE_GENERATORS_PACKAGE = "resources.generators"
265 RESOURCE_COPY_AMOUNT = 20
266 SCRATCH_GENERATION_LOCATION = str(ROOT_DIR.path("temp"))
267 CUSTOM_VERTO_TEMPLATES = os.path.join(str(ROOT_DIR.path("utils")), "custom_converter_templates", "")
268 MODELTRANSLATION_CUSTOM_FIELDS = ("JSONField",)
269
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/csunplugged/config/settings/base.py b/csunplugged/config/settings/base.py
--- a/csunplugged/config/settings/base.py
+++ b/csunplugged/config/settings/base.py
@@ -43,6 +43,7 @@
THIRD_PARTY_APPS = [
"django_bootstrap_breadcrumbs",
"modeltranslation",
+ "bidiutils",
]
# Apps specific for this project go here.
@@ -142,8 +143,11 @@
django.conf.locale.LANG_INFO.update(EXTRA_LANG_INFO)
# Add new languages to the list of all django languages
global_settings.LANGUAGES = global_settings.LANGUAGES + EXTRA_LANGUAGES
+ global_settings.LANGUAGES_BIDI = (global_settings.LANGUAGES_BIDI +
+ [INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI.split('-')[0]])
# Add new languages to the list of languages used for this project
LANGUAGES += tuple(EXTRA_LANGUAGES)
+ LANGUAGES_BIDI = global_settings.LANGUAGES_BIDI
# See: https://docs.djangoproject.com/en/dev/ref/settings/#site-id
@@ -193,6 +197,7 @@
"django.contrib.messages.context_processors.messages",
"config.context_processors.version_number.version_number",
"config.context_processors.deployed.deployed",
+ "bidiutils.context_processors.bidi",
],
"libraries": {
"render_html_field": "config.templatetags.render_html_field",
| {"golden_diff": "diff --git a/csunplugged/config/settings/base.py b/csunplugged/config/settings/base.py\n--- a/csunplugged/config/settings/base.py\n+++ b/csunplugged/config/settings/base.py\n@@ -43,6 +43,7 @@\n THIRD_PARTY_APPS = [\n \"django_bootstrap_breadcrumbs\",\n \"modeltranslation\",\n+ \"bidiutils\",\n ]\n \n # Apps specific for this project go here.\n@@ -142,8 +143,11 @@\n django.conf.locale.LANG_INFO.update(EXTRA_LANG_INFO)\n # Add new languages to the list of all django languages\n global_settings.LANGUAGES = global_settings.LANGUAGES + EXTRA_LANGUAGES\n+ global_settings.LANGUAGES_BIDI = (global_settings.LANGUAGES_BIDI +\n+ [INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI.split('-')[0]])\n # Add new languages to the list of languages used for this project\n LANGUAGES += tuple(EXTRA_LANGUAGES)\n+ LANGUAGES_BIDI = global_settings.LANGUAGES_BIDI\n \n \n # See: https://docs.djangoproject.com/en/dev/ref/settings/#site-id\n@@ -193,6 +197,7 @@\n \"django.contrib.messages.context_processors.messages\",\n \"config.context_processors.version_number.version_number\",\n \"config.context_processors.deployed.deployed\",\n+ \"bidiutils.context_processors.bidi\",\n ],\n \"libraries\": {\n \"render_html_field\": \"config.templatetags.render_html_field\",\n", "issue": "Support RTL/BiDi website layout\nCurrently, the html attribute `dir` is set to RTL when required, which gets us part of the way there. However more changes are needed to truly mirror the layout. This essentially boils down to switching 'left' with 'right' in css rules and html classes in all but a few exceptional cases. \r\n\r\nSome good suggestions for how to use `sass`/`scss` features to achieve this are included in this blog: http://matanich.com/2013/09/06/rtl-css-with-sass\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nBase Django settings for CS Unplugged project.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/dev/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/dev/ref/settings/\n\"\"\"\n\nimport environ\nimport os.path\n\n# Add custom languages not provided by Django\nimport django.conf.locale\nfrom django.conf import global_settings\nfrom django.utils.translation import ugettext_lazy as _\n\n# cs-unplugged/csunplugged/config/settings/base.py - 3 = csunplugged/\nROOT_DIR = environ.Path(__file__) - 3\n\n# Load operating system environment variables and then prepare to use them\nenv = environ.Env()\n\n# APP CONFIGURATION\n# ----------------------------------------------------------------------------\nDJANGO_APPS = [\n # Default Django apps:\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.postgres\",\n\n # Useful template tags\n \"django.contrib.humanize\",\n\n # Admin\n \"django.contrib.admin\",\n]\nTHIRD_PARTY_APPS = [\n \"django_bootstrap_breadcrumbs\",\n \"modeltranslation\",\n]\n\n# Apps specific for this project go here.\nLOCAL_APPS = [\n \"general.apps.GeneralConfig\",\n \"topics.apps.TopicsConfig\",\n \"resources.apps.ResourcesConfig\",\n]\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#installed-apps\nINSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS\n\n# MIDDLEWARE CONFIGURATION\n# ----------------------------------------------------------------------------\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n 
\"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\n# DEBUG\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#debug\nDEBUG = env.bool(\"DJANGO_DEBUG\", False)\n\n# FIXTURE CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-FIXTURE_DIRS\nFIXTURE_DIRS = (\n str(ROOT_DIR.path(\"fixtures\")),\n)\n\n# EMAIL CONFIGURATION\n# -----------------------------------------------------------------------------\n# EMAIL_BACKEND = env(\"DJANGO_EMAIL_BACKEND\",\n# default=\"django.core.mail.backends.smtp.EmailBackend\")\n\n# MANAGER CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#admins\n# ADMINS = [\n# (\"University of Canterbury Computer Science Research Group\",\n# \"[email protected]\"),\n# ]\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#managers\n# MANAGERS = ADMINS\n\n# GENERAL CONFIGURATION\n# ----------------------------------------------------------------------------\n# Local time zone for this installation. Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# In a Windows environment this must be set to your system time zone.\nTIME_ZONE = \"UTC\"\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#language-code\nLANGUAGE_CODE = \"en\"\n\nINCONTEXT_L10N_PSEUDOLANGUAGE = \"xx-lr\"\nINCONTEXT_L10N_PSEUDOLANGUAGE_BIDI = \"yy-rl\"\nINCONTEXT_L10N_PSEUDOLANGUAGES = (\n INCONTEXT_L10N_PSEUDOLANGUAGE,\n INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI\n)\n\nLANGUAGES = (\n (\"en\", \"English\"),\n)\n\nif env.bool(\"INCLUDE_INCONTEXT_L10N\", False):\n EXTRA_LANGUAGES = [\n (INCONTEXT_L10N_PSEUDOLANGUAGE, \"Translation mode\"),\n (INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI, \"Translation mode (Bi-directional)\"),\n ]\n\n EXTRA_LANG_INFO = {\n INCONTEXT_L10N_PSEUDOLANGUAGE: {\n 'bidi': False,\n 'code': INCONTEXT_L10N_PSEUDOLANGUAGE,\n 'name': \"Translation mode\",\n 'name_local': _(\"Translation mode\"),\n },\n INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI: {\n 'bidi': True,\n 'code': INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI,\n 'name': \"Translation mode (Bi-directional)\",\n 'name_local': _(\"Translation mode (Bi-directional)\"),\n }\n }\n\n django.conf.locale.LANG_INFO.update(EXTRA_LANG_INFO)\n # Add new languages to the list of all django languages\n global_settings.LANGUAGES = global_settings.LANGUAGES + EXTRA_LANGUAGES\n # Add new languages to the list of languages used for this project\n LANGUAGES += tuple(EXTRA_LANGUAGES)\n\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#site-id\nSITE_ID = 1\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#use-i18n\nUSE_I18N = True\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#use-l10n\nUSE_L10N = True\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#use-tz\nUSE_TZ = True\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#locale-paths\nLOCALE_PATHS = [\"locale\"]\n\n# 
TEMPLATE CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#templates\nTEMPLATES = [\n {\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-TEMPLATES-BACKEND\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-dirs\n \"DIRS\": [\n str(ROOT_DIR.path(\"templates\")),\n ],\n \"OPTIONS\": {\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-debug\n \"debug\": DEBUG,\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-loaders\n # https://docs.djangoproject.com/en/dev/ref/templates/api/#loader-types\n \"loaders\": [\n \"django.template.loaders.filesystem.Loader\",\n \"django.template.loaders.app_directories.Loader\",\n ],\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-context-processors\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.template.context_processors.i18n\",\n \"django.template.context_processors.media\",\n \"django.template.context_processors.static\",\n \"django.template.context_processors.tz\",\n \"django.contrib.messages.context_processors.messages\",\n \"config.context_processors.version_number.version_number\",\n \"config.context_processors.deployed.deployed\",\n ],\n \"libraries\": {\n \"render_html_field\": \"config.templatetags.render_html_field\",\n \"translate_url\": \"config.templatetags.translate_url\",\n },\n },\n },\n]\n\n# STATIC FILE CONFIGURATION\n# ------------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#static-root\nSTATIC_ROOT = os.path.join(str(ROOT_DIR.path(\"staticfiles\")), \"\")\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#static-url\nBUILD_ROOT = os.path.join(str(ROOT_DIR.path(\"build\")), \"\")\nSTATIC_URL = \"/staticfiles/\"\n\n# See: https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#std:setting-STATICFILES_DIRS\nSTATICFILES_DIRS = [\n BUILD_ROOT,\n]\n\n# See: https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#staticfiles-finders\nSTATICFILES_FINDERS = [\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n]\n\n# MEDIA CONFIGURATION\n# ------------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#media-root\nMEDIA_ROOT = str(ROOT_DIR(\"media\"))\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#media-url\nMEDIA_URL = \"/media/\"\n\n# URL Configuration\n# ------------------------------------------------------------------------------\nROOT_URLCONF = \"config.urls\"\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#wsgi-application\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# PASSWORD VALIDATION\n# https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators\n# ------------------------------------------------------------------------------\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n 
},\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n# OTHER SETTINGS\n# ------------------------------------------------------------------------------\nDJANGO_PRODUCTION = env.bool(\"DJANGO_PRODUCTION\")\nTOPICS_CONTENT_BASE_PATH = os.path.join(str(ROOT_DIR.path(\"topics\")), \"content\")\nRESOURCES_CONTENT_BASE_PATH = os.path.join(str(ROOT_DIR.path(\"resources\")), \"content\")\nRESOURCE_GENERATION_LOCATION = os.path.join(str(ROOT_DIR.path(\"staticfiles\")), \"resources\")\nRESOURCE_GENERATORS_PACKAGE = \"resources.generators\"\nRESOURCE_COPY_AMOUNT = 20\nSCRATCH_GENERATION_LOCATION = str(ROOT_DIR.path(\"temp\"))\nCUSTOM_VERTO_TEMPLATES = os.path.join(str(ROOT_DIR.path(\"utils\")), \"custom_converter_templates\", \"\")\nMODELTRANSLATION_CUSTOM_FIELDS = (\"JSONField\",)\n", "path": "csunplugged/config/settings/base.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nBase Django settings for CS Unplugged project.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/dev/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/dev/ref/settings/\n\"\"\"\n\nimport environ\nimport os.path\n\n# Add custom languages not provided by Django\nimport django.conf.locale\nfrom django.conf import global_settings\nfrom django.utils.translation import ugettext_lazy as _\n\n# cs-unplugged/csunplugged/config/settings/base.py - 3 = csunplugged/\nROOT_DIR = environ.Path(__file__) - 3\n\n# Load operating system environment variables and then prepare to use them\nenv = environ.Env()\n\n# APP CONFIGURATION\n# ----------------------------------------------------------------------------\nDJANGO_APPS = [\n # Default Django apps:\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.postgres\",\n\n # Useful template tags\n \"django.contrib.humanize\",\n\n # Admin\n \"django.contrib.admin\",\n]\nTHIRD_PARTY_APPS = [\n \"django_bootstrap_breadcrumbs\",\n \"modeltranslation\",\n \"bidiutils\",\n]\n\n# Apps specific for this project go here.\nLOCAL_APPS = [\n \"general.apps.GeneralConfig\",\n \"topics.apps.TopicsConfig\",\n \"resources.apps.ResourcesConfig\",\n]\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#installed-apps\nINSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS\n\n# MIDDLEWARE CONFIGURATION\n# ----------------------------------------------------------------------------\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\n# DEBUG\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#debug\nDEBUG = env.bool(\"DJANGO_DEBUG\", False)\n\n# FIXTURE CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-FIXTURE_DIRS\nFIXTURE_DIRS = (\n str(ROOT_DIR.path(\"fixtures\")),\n)\n\n# EMAIL CONFIGURATION\n# 
-----------------------------------------------------------------------------\n# EMAIL_BACKEND = env(\"DJANGO_EMAIL_BACKEND\",\n# default=\"django.core.mail.backends.smtp.EmailBackend\")\n\n# MANAGER CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#admins\n# ADMINS = [\n# (\"University of Canterbury Computer Science Research Group\",\n# \"[email protected]\"),\n# ]\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#managers\n# MANAGERS = ADMINS\n\n# GENERAL CONFIGURATION\n# ----------------------------------------------------------------------------\n# Local time zone for this installation. Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# In a Windows environment this must be set to your system time zone.\nTIME_ZONE = \"UTC\"\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#language-code\nLANGUAGE_CODE = \"en\"\n\nINCONTEXT_L10N_PSEUDOLANGUAGE = \"xx-lr\"\nINCONTEXT_L10N_PSEUDOLANGUAGE_BIDI = \"yy-rl\"\nINCONTEXT_L10N_PSEUDOLANGUAGES = (\n INCONTEXT_L10N_PSEUDOLANGUAGE,\n INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI\n)\n\nLANGUAGES = (\n (\"en\", \"English\"),\n)\n\nif env.bool(\"INCLUDE_INCONTEXT_L10N\", False):\n EXTRA_LANGUAGES = [\n (INCONTEXT_L10N_PSEUDOLANGUAGE, \"Translation mode\"),\n (INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI, \"Translation mode (Bi-directional)\"),\n ]\n\n EXTRA_LANG_INFO = {\n INCONTEXT_L10N_PSEUDOLANGUAGE: {\n 'bidi': False,\n 'code': INCONTEXT_L10N_PSEUDOLANGUAGE,\n 'name': \"Translation mode\",\n 'name_local': _(\"Translation mode\"),\n },\n INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI: {\n 'bidi': True,\n 'code': INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI,\n 'name': \"Translation mode (Bi-directional)\",\n 'name_local': _(\"Translation mode (Bi-directional)\"),\n }\n }\n\n django.conf.locale.LANG_INFO.update(EXTRA_LANG_INFO)\n # Add new languages to the list of all django languages\n global_settings.LANGUAGES = global_settings.LANGUAGES + EXTRA_LANGUAGES\n global_settings.LANGUAGES_BIDI = (global_settings.LANGUAGES_BIDI +\n [INCONTEXT_L10N_PSEUDOLANGUAGE_BIDI.split('-')[0]])\n # Add new languages to the list of languages used for this project\n LANGUAGES += tuple(EXTRA_LANGUAGES)\n LANGUAGES_BIDI = global_settings.LANGUAGES_BIDI\n\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#site-id\nSITE_ID = 1\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#use-i18n\nUSE_I18N = True\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#use-l10n\nUSE_L10N = True\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#use-tz\nUSE_TZ = True\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#locale-paths\nLOCALE_PATHS = [\"locale\"]\n\n# TEMPLATE CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#templates\nTEMPLATES = [\n {\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-TEMPLATES-BACKEND\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-dirs\n \"DIRS\": [\n str(ROOT_DIR.path(\"templates\")),\n ],\n \"OPTIONS\": {\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-debug\n \"debug\": DEBUG,\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-loaders\n # 
https://docs.djangoproject.com/en/dev/ref/templates/api/#loader-types\n \"loaders\": [\n \"django.template.loaders.filesystem.Loader\",\n \"django.template.loaders.app_directories.Loader\",\n ],\n # See: https://docs.djangoproject.com/en/dev/ref/settings/#template-context-processors\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.template.context_processors.i18n\",\n \"django.template.context_processors.media\",\n \"django.template.context_processors.static\",\n \"django.template.context_processors.tz\",\n \"django.contrib.messages.context_processors.messages\",\n \"config.context_processors.version_number.version_number\",\n \"config.context_processors.deployed.deployed\",\n \"bidiutils.context_processors.bidi\",\n ],\n \"libraries\": {\n \"render_html_field\": \"config.templatetags.render_html_field\",\n \"translate_url\": \"config.templatetags.translate_url\",\n },\n },\n },\n]\n\n# STATIC FILE CONFIGURATION\n# ------------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#static-root\nSTATIC_ROOT = os.path.join(str(ROOT_DIR.path(\"staticfiles\")), \"\")\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#static-url\nBUILD_ROOT = os.path.join(str(ROOT_DIR.path(\"build\")), \"\")\nSTATIC_URL = \"/staticfiles/\"\n\n# See: https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#std:setting-STATICFILES_DIRS\nSTATICFILES_DIRS = [\n BUILD_ROOT,\n]\n\n# See: https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#staticfiles-finders\nSTATICFILES_FINDERS = [\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n]\n\n# MEDIA CONFIGURATION\n# ------------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#media-root\nMEDIA_ROOT = str(ROOT_DIR(\"media\"))\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#media-url\nMEDIA_URL = \"/media/\"\n\n# URL Configuration\n# ------------------------------------------------------------------------------\nROOT_URLCONF = \"config.urls\"\n\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#wsgi-application\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# PASSWORD VALIDATION\n# https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators\n# ------------------------------------------------------------------------------\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n# OTHER SETTINGS\n# ------------------------------------------------------------------------------\nDJANGO_PRODUCTION = env.bool(\"DJANGO_PRODUCTION\")\nTOPICS_CONTENT_BASE_PATH = os.path.join(str(ROOT_DIR.path(\"topics\")), \"content\")\nRESOURCES_CONTENT_BASE_PATH = os.path.join(str(ROOT_DIR.path(\"resources\")), \"content\")\nRESOURCE_GENERATION_LOCATION = os.path.join(str(ROOT_DIR.path(\"staticfiles\")), \"resources\")\nRESOURCE_GENERATORS_PACKAGE = \"resources.generators\"\nRESOURCE_COPY_AMOUNT = 20\nSCRATCH_GENERATION_LOCATION = 
str(ROOT_DIR.path(\"temp\"))\nCUSTOM_VERTO_TEMPLATES = os.path.join(str(ROOT_DIR.path(\"utils\")), \"custom_converter_templates\", \"\")\nMODELTRANSLATION_CUSTOM_FIELDS = (\"JSONField\",)\n", "path": "csunplugged/config/settings/base.py"}]} | 3,161 | 334 |
gh_patches_debug_40910 | rasdani/github-patches | git_diff | DDMAL__CantusDB-200 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Currently missing a user list page.
Implement it similarly to the indexer list page.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/urls.py`
Content:
```
1 from django.urls import path, include
2 from main_app.views import *
3 from main_app.views import views
4 from main_app.views.sequence import SequenceEditView
5 from main_app.views.source import SourceCreateView, SourceEditView
6 from main_app.views.chant import ChantEditVolpianoView
7 from django.contrib.auth import views as auth_views
8 from main_app.views.user import UserDetailView, UserSourceListView, CustomLogoutView
9
10 urlpatterns = [
11 # static pages
12 path("index/", FullIndexView.as_view(), name="chant-index"),
13 path("contact/", views.contact_us, name="contact"),
14 # login/logout/user
15 path('login/', auth_views.LoginView.as_view(redirect_authenticated_user=True), name="login"),
16 path('logout/', CustomLogoutView.as_view(), name="logout"),
17 path("my-sources/", UserSourceListView.as_view(), name="my-sources"),
18 path("users/<int:user_id>", UserDetailView.as_view(), name="user-detail"),
19
20 # chant
21 path("chants/", ChantListView.as_view(), name="chant-list"),
22 path("chant/<int:pk>", ChantDetailView.as_view(), name="chant-detail"),
23 path("chant-search/", ChantSearchView.as_view(), name="chant-search"),
24 path(
25 "chant-create/<int:source_pk>", ChantCreateView.as_view(), name="chant-create"
26 ),
27 path("chant-update/<int:pk>", ChantUpdateView.as_view(), name="chant-update"),
28 path(
29 "id/<str:cantus_id>", ChantByCantusIDView.as_view(), name="chant-by-cantus-id"
30 ),
31 path("chant-delete/<int:pk>", ChantDeleteView.as_view(), name="chant-delete"),
32 path(
33 "edit-volpiano/<int:source_id>",
34 ChantEditVolpianoView.as_view(),
35 name="source-edit-volpiano"
36 ),
37 # feast
38 path("feasts/", FeastListView.as_view(), name="feast-list"),
39 path("feast/<int:pk>", FeastDetailView.as_view(), name="feast-detail"),
40 # genre
41 path("genres/", GenreListView.as_view(), name="genre-list"),
42 path("genre/<int:pk>", GenreDetailView.as_view(), name="genre-detail"),
43 # indexer
44 path("indexers/", IndexerListView.as_view(), name="indexer-list"),
45 path("indexer/<int:pk>", IndexerDetailView.as_view(), name="indexer-detail"),
46 # office
47 path("offices/", OfficeListView.as_view(), name="office-list"),
48 path("office/<int:pk>", OfficeDetailView.as_view(), name="office-detail"),
49 # sequence
50 path("sequences/", SequenceListView.as_view(), name="sequence-list"),
51 path("sequence/<int:pk>", SequenceDetailView.as_view(), name="sequence-detail",),
52 path("edit-sequence/<int:sequence_id>", SequenceEditView.as_view(), name="sequence-edit",),
53 # source
54 path("sources/", SourceListView.as_view(), name="source-list"),
55 path("source/<int:pk>", SourceDetailView.as_view(), name="source-detail"),
56 path(
57 "source-create/",
58 SourceCreateView.as_view(),
59 name="source-create"
60 ),
61 path(
62 "edit-source/<int:source_id>",
63 SourceEditView.as_view(),
64 name="source-edit"
65 ),
66 # melody
67 path("melody/", MelodySearchView.as_view(), name="melody-search"),
68 path("ajax/melody/<str:cantus_id>", views.ajax_melody_list, name="ajax-melody"),
69 path("ajax/melody-search/", views.ajax_melody_search, name="ajax-melody-search",),
70 # json api
71 path("json-sources/", views.json_sources_export, name="json-sources-export"),
72 path("json-node/<str:id>", views.json_node_export, name="json-node-export"),
73 path("json-nextchants/<str:cantus_id>", views.json_nextchants, name="json-nextchants"),
74 path(
75 "json-melody/<str:cantus_id>",
76 views.json_melody_export,
77 name="json-melody-export",
78 ),
79 # misc search
80 path(
81 "chant-search-ms/<int:source_pk>",
82 ChantSearchMSView.as_view(),
83 name="chant-search-ms",
84 ),
85 path("ci-search/<str:search_term>", CISearchView.as_view(), name="ci-search"),
86 path(
87 "ajax/search-bar/<str:search_term>",
88 views.ajax_search_bar,
89 name="ajax-search-bar",
90 ),
91 # misc
92 path("content-statistics", views.items_count, name="items-count"),
93 path("csv/<str:source_id>", views.csv_export, name="csv-export"),
94 path(
95 "ajax/concordance/<str:cantus_id>",
96 views.ajax_concordance_list,
97 name="ajax-concordance",
98 ),
99 ]
100
101 handler404 = 'main_app.views.views.handle404'
102
```
Path: `django/cantusdb_project/main_app/views/user.py`
Content:
```
1 from urllib import request
2 from django.views.generic import DetailView
3 from django.contrib.auth import get_user_model
4 from main_app.models import Source
5 from django.views.generic import ListView
6 from django.contrib.auth.mixins import LoginRequiredMixin
7 from django.db.models import Q
8 from django.core.paginator import Paginator
9 from django.contrib.auth.views import LogoutView
10 from django.contrib import messages
11
12 class UserDetailView(DetailView):
13 """Detail view for User model
14
15 Accessed by /users/<user_id>
16 """
17
18 model = get_user_model()
19 context_object_name = "user"
20 template_name = "user_detail.html"
21 pk_url_kwarg = 'user_id'
22
23 class UserSourceListView(LoginRequiredMixin, ListView):
24 model = Source
25 context_object_name = "sources"
26 template_name = "user_source_list.html"
27 paginate_by = 100
28
29 def get_queryset(self):
30 return Source.objects.filter(
31 Q(current_editors=self.request.user)
32 | Q(created_by=self.request.user)
33 # | Q(inventoried_by=self.request.user)
34 # | Q(full_text_entered_by=self.request.user)
35 # | Q(melodies_entered_by=self.request.user)
36 # | Q(proofreaders=self.request.user)
37 # | Q(other_editors=self.request.user)
38 ).order_by("title")
39
40 def get_context_data(self, **kwargs):
41 context = super().get_context_data(**kwargs)
42
43 user_created_sources = Source.objects.filter(created_by=self.request.user)
44 paginator = Paginator(user_created_sources, 10)
45 page_number = self.request.GET.get('page2')
46 page_obj = paginator.get_page(page_number)
47
48 context["user_created_sources_page_obj"] = page_obj
49 return context
50
51 class CustomLogoutView(LogoutView):
52 def get_next_page(self):
53 next_page = super().get_next_page()
54 messages.success(
55 self.request,
56 'You have successfully logged out!'
57 )
58 return next_page
59
```
Path: `django/cantusdb_project/users/models.py`
Content:
```
1 from django.db import models
2 from django.contrib.auth.models import AbstractUser
3
4
5 class User(AbstractUser):
6 institution = models.CharField(max_length=255, blank=True, null=True)
7 city = models.CharField(max_length=255, blank=True, null=True)
8 country = models.CharField(max_length=255, blank=True, null=True)
9 website = models.URLField(blank=True, null=True)
10 sources_user_can_edit = models.ManyToManyField("main_app.Source", related_name="users_who_can_edit_this_source", blank=True)
11 full_name = models.CharField(max_length=255, blank=True, null=True)
12
13 @property
14 def name(self):
15 if self.full_name:
16 return self.full_name
17 elif self.first_name and self.last_name:
18 return f'{self.first_name} {self.last_name}'
19
20 def __str__(self):
21 if self.name:
22 return self.name
23 else:
24 return self.username
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django/cantusdb_project/main_app/urls.py b/django/cantusdb_project/main_app/urls.py
--- a/django/cantusdb_project/main_app/urls.py
+++ b/django/cantusdb_project/main_app/urls.py
@@ -5,7 +5,7 @@
from main_app.views.source import SourceCreateView, SourceEditView
from main_app.views.chant import ChantEditVolpianoView
from django.contrib.auth import views as auth_views
-from main_app.views.user import UserDetailView, UserSourceListView, CustomLogoutView
+from main_app.views.user import UserDetailView, UserSourceListView, CustomLogoutView, UserListView
urlpatterns = [
# static pages
@@ -15,7 +15,8 @@
path('login/', auth_views.LoginView.as_view(redirect_authenticated_user=True), name="login"),
path('logout/', CustomLogoutView.as_view(), name="logout"),
path("my-sources/", UserSourceListView.as_view(), name="my-sources"),
- path("users/<int:user_id>", UserDetailView.as_view(), name="user-detail"),
+ path("users/<int:pk>", UserDetailView.as_view(), name="user-detail"),
+ path("users/", UserListView.as_view(), name="user-list"),
# chant
path("chants/", ChantListView.as_view(), name="chant-list"),
diff --git a/django/cantusdb_project/main_app/views/user.py b/django/cantusdb_project/main_app/views/user.py
--- a/django/cantusdb_project/main_app/views/user.py
+++ b/django/cantusdb_project/main_app/views/user.py
@@ -8,17 +8,17 @@
from django.core.paginator import Paginator
from django.contrib.auth.views import LogoutView
from django.contrib import messages
+from extra_views import SearchableListMixin
class UserDetailView(DetailView):
"""Detail view for User model
- Accessed by /users/<user_id>
+ Accessed by /users/<pk>
"""
model = get_user_model()
context_object_name = "user"
template_name = "user_detail.html"
- pk_url_kwarg = 'user_id'
class UserSourceListView(LoginRequiredMixin, ListView):
model = Source
@@ -56,3 +56,18 @@
'You have successfully logged out!'
)
return next_page
+
+class UserListView(LoginRequiredMixin, SearchableListMixin, ListView):
+ """Searchable List view for User model
+
+ Accessed by /users/
+
+ When passed a ``?q=<query>`` argument in the GET request, it will filter users
+ based on the fields defined in ``search_fields`` with the ``icontains`` lookup
+ """
+
+ model = get_user_model()
+ search_fields = ["first_name", "last_name", "institution", "city", "country"]
+ paginate_by = 100
+ template_name = "user_list.html"
+ context_object_name = "users"
diff --git a/django/cantusdb_project/users/models.py b/django/cantusdb_project/users/models.py
--- a/django/cantusdb_project/users/models.py
+++ b/django/cantusdb_project/users/models.py
@@ -1,5 +1,6 @@
from django.db import models
from django.contrib.auth.models import AbstractUser
+from django.urls.base import reverse
class User(AbstractUser):
@@ -21,4 +22,9 @@
if self.name:
return self.name
else:
- return self.username
\ No newline at end of file
+ return self.username
+
+ def get_absolute_url(self) -> str:
+ """Get the absolute URL for an instance of a model."""
+ detail_name = self.__class__.__name__.lower() + "-detail"
+ return reverse(detail_name, kwargs={"pk": self.pk})
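
As a side note on the patch above, the `get_absolute_url()` helper derives the URL route name from the model's class name, which is also why the `urls.py` change renames the detail route's keyword argument to `pk`. A minimal, self-contained sketch of that naming trick (the `User` stand-in class and the printed values are illustrative, not taken from the repository):

```python
# Illustrative sketch only: how get_absolute_url() builds the route name.
class User:          # stand-in for the real Django model class
    pk = 42

detail_name = User.__name__.lower() + "-detail"
print(detail_name)   # -> "user-detail"
# In the real code, reverse("user-detail", kwargs={"pk": 42}) would then
# resolve against the path("users/<int:pk>", ...) route added in urls.py,
# giving a URL such as "/users/42".
```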
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/urls.py b/django/cantusdb_project/main_app/urls.py\n--- a/django/cantusdb_project/main_app/urls.py\n+++ b/django/cantusdb_project/main_app/urls.py\n@@ -5,7 +5,7 @@\n from main_app.views.source import SourceCreateView, SourceEditView\n from main_app.views.chant import ChantEditVolpianoView\n from django.contrib.auth import views as auth_views\n-from main_app.views.user import UserDetailView, UserSourceListView, CustomLogoutView\n+from main_app.views.user import UserDetailView, UserSourceListView, CustomLogoutView, UserListView\n \n urlpatterns = [\n # static pages\n@@ -15,7 +15,8 @@\n path('login/', auth_views.LoginView.as_view(redirect_authenticated_user=True), name=\"login\"),\n path('logout/', CustomLogoutView.as_view(), name=\"logout\"),\n path(\"my-sources/\", UserSourceListView.as_view(), name=\"my-sources\"),\n- path(\"users/<int:user_id>\", UserDetailView.as_view(), name=\"user-detail\"),\n+ path(\"users/<int:pk>\", UserDetailView.as_view(), name=\"user-detail\"),\n+ path(\"users/\", UserListView.as_view(), name=\"user-list\"),\n \n # chant\n path(\"chants/\", ChantListView.as_view(), name=\"chant-list\"),\ndiff --git a/django/cantusdb_project/main_app/views/user.py b/django/cantusdb_project/main_app/views/user.py\n--- a/django/cantusdb_project/main_app/views/user.py\n+++ b/django/cantusdb_project/main_app/views/user.py\n@@ -8,17 +8,17 @@\n from django.core.paginator import Paginator\n from django.contrib.auth.views import LogoutView\n from django.contrib import messages\n+from extra_views import SearchableListMixin\n \n class UserDetailView(DetailView):\n \"\"\"Detail view for User model\n \n- Accessed by /users/<user_id>\n+ Accessed by /users/<pk>\n \"\"\"\n \n model = get_user_model()\n context_object_name = \"user\"\n template_name = \"user_detail.html\"\n- pk_url_kwarg = 'user_id'\t\n \n class UserSourceListView(LoginRequiredMixin, ListView):\n model = Source\n@@ -56,3 +56,18 @@\n 'You have successfully logged out!'\n )\n return next_page\n+\n+class UserListView(LoginRequiredMixin, SearchableListMixin, ListView):\n+ \"\"\"Searchable List view for User model\n+\n+ Accessed by /users/\n+\n+ When passed a ``?q=<query>`` argument in the GET request, it will filter users\n+ based on the fields defined in ``search_fields`` with the ``icontains`` lookup\n+ \"\"\"\n+\n+ model = get_user_model()\n+ search_fields = [\"first_name\", \"last_name\", \"institution\", \"city\", \"country\"]\n+ paginate_by = 100\n+ template_name = \"user_list.html\"\n+ context_object_name = \"users\"\ndiff --git a/django/cantusdb_project/users/models.py b/django/cantusdb_project/users/models.py\n--- a/django/cantusdb_project/users/models.py\n+++ b/django/cantusdb_project/users/models.py\n@@ -1,5 +1,6 @@\n from django.db import models\n from django.contrib.auth.models import AbstractUser\n+from django.urls.base import reverse\n \n \n class User(AbstractUser):\n@@ -21,4 +22,9 @@\n if self.name:\n return self.name\n else:\n- return self.username\n\\ No newline at end of file\n+ return self.username\n+\n+ def get_absolute_url(self) -> str:\n+ \"\"\"Get the absolute URL for an instance of a model.\"\"\"\n+ detail_name = self.__class__.__name__.lower() + \"-detail\"\n+ return reverse(detail_name, kwargs={\"pk\": self.pk})\n", "issue": "currently missing a user list page\nimplement it similar to indexer list page\n", "before_files": [{"content": "from django.urls import path, include\nfrom main_app.views import *\nfrom main_app.views import views\nfrom 
main_app.views.sequence import SequenceEditView\nfrom main_app.views.source import SourceCreateView, SourceEditView\nfrom main_app.views.chant import ChantEditVolpianoView\nfrom django.contrib.auth import views as auth_views\nfrom main_app.views.user import UserDetailView, UserSourceListView, CustomLogoutView\n\nurlpatterns = [\n # static pages\n path(\"index/\", FullIndexView.as_view(), name=\"chant-index\"),\n path(\"contact/\", views.contact_us, name=\"contact\"),\n # login/logout/user\n path('login/', auth_views.LoginView.as_view(redirect_authenticated_user=True), name=\"login\"),\n path('logout/', CustomLogoutView.as_view(), name=\"logout\"),\n path(\"my-sources/\", UserSourceListView.as_view(), name=\"my-sources\"),\n path(\"users/<int:user_id>\", UserDetailView.as_view(), name=\"user-detail\"),\n\n # chant\n path(\"chants/\", ChantListView.as_view(), name=\"chant-list\"),\n path(\"chant/<int:pk>\", ChantDetailView.as_view(), name=\"chant-detail\"),\n path(\"chant-search/\", ChantSearchView.as_view(), name=\"chant-search\"),\n path(\n \"chant-create/<int:source_pk>\", ChantCreateView.as_view(), name=\"chant-create\"\n ),\n path(\"chant-update/<int:pk>\", ChantUpdateView.as_view(), name=\"chant-update\"),\n path(\n \"id/<str:cantus_id>\", ChantByCantusIDView.as_view(), name=\"chant-by-cantus-id\"\n ),\n path(\"chant-delete/<int:pk>\", ChantDeleteView.as_view(), name=\"chant-delete\"),\n path(\n \"edit-volpiano/<int:source_id>\", \n ChantEditVolpianoView.as_view(), \n name=\"source-edit-volpiano\"\n ),\n # feast\n path(\"feasts/\", FeastListView.as_view(), name=\"feast-list\"),\n path(\"feast/<int:pk>\", FeastDetailView.as_view(), name=\"feast-detail\"),\n # genre\n path(\"genres/\", GenreListView.as_view(), name=\"genre-list\"),\n path(\"genre/<int:pk>\", GenreDetailView.as_view(), name=\"genre-detail\"),\n # indexer\n path(\"indexers/\", IndexerListView.as_view(), name=\"indexer-list\"),\n path(\"indexer/<int:pk>\", IndexerDetailView.as_view(), name=\"indexer-detail\"),\n # office\n path(\"offices/\", OfficeListView.as_view(), name=\"office-list\"),\n path(\"office/<int:pk>\", OfficeDetailView.as_view(), name=\"office-detail\"),\n # sequence\n path(\"sequences/\", SequenceListView.as_view(), name=\"sequence-list\"),\n path(\"sequence/<int:pk>\", SequenceDetailView.as_view(), name=\"sequence-detail\",),\n path(\"edit-sequence/<int:sequence_id>\", SequenceEditView.as_view(), name=\"sequence-edit\",),\n # source\n path(\"sources/\", SourceListView.as_view(), name=\"source-list\"),\n path(\"source/<int:pk>\", SourceDetailView.as_view(), name=\"source-detail\"),\n path(\n \"source-create/\", \n SourceCreateView.as_view(), \n name=\"source-create\"\n ),\n path(\n \"edit-source/<int:source_id>\", \n SourceEditView.as_view(), \n name=\"source-edit\"\n ),\n # melody\n path(\"melody/\", MelodySearchView.as_view(), name=\"melody-search\"),\n path(\"ajax/melody/<str:cantus_id>\", views.ajax_melody_list, name=\"ajax-melody\"),\n path(\"ajax/melody-search/\", views.ajax_melody_search, name=\"ajax-melody-search\",),\n # json api\n path(\"json-sources/\", views.json_sources_export, name=\"json-sources-export\"),\n path(\"json-node/<str:id>\", views.json_node_export, name=\"json-node-export\"),\n path(\"json-nextchants/<str:cantus_id>\", views.json_nextchants, name=\"json-nextchants\"),\n path(\n \"json-melody/<str:cantus_id>\",\n views.json_melody_export,\n name=\"json-melody-export\",\n ),\n # misc search\n path(\n \"chant-search-ms/<int:source_pk>\",\n ChantSearchMSView.as_view(),\n 
name=\"chant-search-ms\",\n ),\n path(\"ci-search/<str:search_term>\", CISearchView.as_view(), name=\"ci-search\"),\n path(\n \"ajax/search-bar/<str:search_term>\",\n views.ajax_search_bar,\n name=\"ajax-search-bar\",\n ),\n # misc\n path(\"content-statistics\", views.items_count, name=\"items-count\"),\n path(\"csv/<str:source_id>\", views.csv_export, name=\"csv-export\"),\n path(\n \"ajax/concordance/<str:cantus_id>\",\n views.ajax_concordance_list,\n name=\"ajax-concordance\",\n ),\n]\n\nhandler404 = 'main_app.views.views.handle404'\n", "path": "django/cantusdb_project/main_app/urls.py"}, {"content": "from urllib import request\nfrom django.views.generic import DetailView\nfrom django.contrib.auth import get_user_model\nfrom main_app.models import Source\nfrom django.views.generic import ListView\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.db.models import Q\nfrom django.core.paginator import Paginator\nfrom django.contrib.auth.views import LogoutView\nfrom django.contrib import messages\n\nclass UserDetailView(DetailView):\n \"\"\"Detail view for User model\n\n Accessed by /users/<user_id>\n \"\"\"\n\n model = get_user_model()\n context_object_name = \"user\"\n template_name = \"user_detail.html\"\n pk_url_kwarg = 'user_id'\t\n\nclass UserSourceListView(LoginRequiredMixin, ListView):\n model = Source\n context_object_name = \"sources\"\n template_name = \"user_source_list.html\"\n paginate_by = 100\n\n def get_queryset(self):\n return Source.objects.filter(\n Q(current_editors=self.request.user)\n | Q(created_by=self.request.user)\n # | Q(inventoried_by=self.request.user)\n # | Q(full_text_entered_by=self.request.user)\n # | Q(melodies_entered_by=self.request.user)\n # | Q(proofreaders=self.request.user)\n # | Q(other_editors=self.request.user) \n ).order_by(\"title\")\n \n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n user_created_sources = Source.objects.filter(created_by=self.request.user)\n paginator = Paginator(user_created_sources, 10)\n page_number = self.request.GET.get('page2')\n page_obj = paginator.get_page(page_number)\n\n context[\"user_created_sources_page_obj\"] = page_obj\n return context\n\nclass CustomLogoutView(LogoutView):\n def get_next_page(self):\n next_page = super().get_next_page()\n messages.success(\n self.request, \n 'You have successfully logged out!'\n )\n return next_page\n", "path": "django/cantusdb_project/main_app/views/user.py"}, {"content": "from django.db import models\nfrom django.contrib.auth.models import AbstractUser\n\n\nclass User(AbstractUser):\n institution = models.CharField(max_length=255, blank=True, null=True)\n city = models.CharField(max_length=255, blank=True, null=True)\n country = models.CharField(max_length=255, blank=True, null=True)\n website = models.URLField(blank=True, null=True)\n sources_user_can_edit = models.ManyToManyField(\"main_app.Source\", related_name=\"users_who_can_edit_this_source\", blank=True)\n full_name = models.CharField(max_length=255, blank=True, null=True)\n\n @property\n def name(self):\n if self.full_name:\n return self.full_name\n elif self.first_name and self.last_name:\n return f'{self.first_name} {self.last_name}'\n\n def __str__(self):\n if self.name:\n return self.name\n else:\n return self.username", "path": "django/cantusdb_project/users/models.py"}], "after_files": [{"content": "from django.urls import path, include\nfrom main_app.views import *\nfrom main_app.views import views\nfrom main_app.views.sequence import 
SequenceEditView\nfrom main_app.views.source import SourceCreateView, SourceEditView\nfrom main_app.views.chant import ChantEditVolpianoView\nfrom django.contrib.auth import views as auth_views\nfrom main_app.views.user import UserDetailView, UserSourceListView, CustomLogoutView, UserListView\n\nurlpatterns = [\n # static pages\n path(\"index/\", FullIndexView.as_view(), name=\"chant-index\"),\n path(\"contact/\", views.contact_us, name=\"contact\"),\n # login/logout/user\n path('login/', auth_views.LoginView.as_view(redirect_authenticated_user=True), name=\"login\"),\n path('logout/', CustomLogoutView.as_view(), name=\"logout\"),\n path(\"my-sources/\", UserSourceListView.as_view(), name=\"my-sources\"),\n path(\"users/<int:pk>\", UserDetailView.as_view(), name=\"user-detail\"),\n path(\"users/\", UserListView.as_view(), name=\"user-list\"),\n\n # chant\n path(\"chants/\", ChantListView.as_view(), name=\"chant-list\"),\n path(\"chant/<int:pk>\", ChantDetailView.as_view(), name=\"chant-detail\"),\n path(\"chant-search/\", ChantSearchView.as_view(), name=\"chant-search\"),\n path(\n \"chant-create/<int:source_pk>\", ChantCreateView.as_view(), name=\"chant-create\"\n ),\n path(\"chant-update/<int:pk>\", ChantUpdateView.as_view(), name=\"chant-update\"),\n path(\n \"id/<str:cantus_id>\", ChantByCantusIDView.as_view(), name=\"chant-by-cantus-id\"\n ),\n path(\"chant-delete/<int:pk>\", ChantDeleteView.as_view(), name=\"chant-delete\"),\n path(\n \"edit-volpiano/<int:source_id>\", \n ChantEditVolpianoView.as_view(), \n name=\"source-edit-volpiano\"\n ),\n # feast\n path(\"feasts/\", FeastListView.as_view(), name=\"feast-list\"),\n path(\"feast/<int:pk>\", FeastDetailView.as_view(), name=\"feast-detail\"),\n # genre\n path(\"genres/\", GenreListView.as_view(), name=\"genre-list\"),\n path(\"genre/<int:pk>\", GenreDetailView.as_view(), name=\"genre-detail\"),\n # indexer\n path(\"indexers/\", IndexerListView.as_view(), name=\"indexer-list\"),\n path(\"indexer/<int:pk>\", IndexerDetailView.as_view(), name=\"indexer-detail\"),\n # office\n path(\"offices/\", OfficeListView.as_view(), name=\"office-list\"),\n path(\"office/<int:pk>\", OfficeDetailView.as_view(), name=\"office-detail\"),\n # sequence\n path(\"sequences/\", SequenceListView.as_view(), name=\"sequence-list\"),\n path(\"sequence/<int:pk>\", SequenceDetailView.as_view(), name=\"sequence-detail\",),\n path(\"edit-sequence/<int:sequence_id>\", SequenceEditView.as_view(), name=\"sequence-edit\",),\n # source\n path(\"sources/\", SourceListView.as_view(), name=\"source-list\"),\n path(\"source/<int:pk>\", SourceDetailView.as_view(), name=\"source-detail\"),\n path(\n \"source-create/\", \n SourceCreateView.as_view(), \n name=\"source-create\"\n ),\n path(\n \"edit-source/<int:source_id>\", \n SourceEditView.as_view(), \n name=\"source-edit\"\n ),\n # melody\n path(\"melody/\", MelodySearchView.as_view(), name=\"melody-search\"),\n path(\"ajax/melody/<str:cantus_id>\", views.ajax_melody_list, name=\"ajax-melody\"),\n path(\"ajax/melody-search/\", views.ajax_melody_search, name=\"ajax-melody-search\",),\n # json api\n path(\"json-sources/\", views.json_sources_export, name=\"json-sources-export\"),\n path(\"json-node/<str:id>\", views.json_node_export, name=\"json-node-export\"),\n path(\"json-nextchants/<str:cantus_id>\", views.json_nextchants, name=\"json-nextchants\"),\n path(\n \"json-melody/<str:cantus_id>\",\n views.json_melody_export,\n name=\"json-melody-export\",\n ),\n # misc search\n path(\n \"chant-search-ms/<int:source_pk>\",\n 
ChantSearchMSView.as_view(),\n name=\"chant-search-ms\",\n ),\n path(\"ci-search/<str:search_term>\", CISearchView.as_view(), name=\"ci-search\"),\n path(\n \"ajax/search-bar/<str:search_term>\",\n views.ajax_search_bar,\n name=\"ajax-search-bar\",\n ),\n # misc\n path(\"content-statistics\", views.items_count, name=\"items-count\"),\n path(\"csv/<str:source_id>\", views.csv_export, name=\"csv-export\"),\n path(\n \"ajax/concordance/<str:cantus_id>\",\n views.ajax_concordance_list,\n name=\"ajax-concordance\",\n ),\n]\n\nhandler404 = 'main_app.views.views.handle404'\n", "path": "django/cantusdb_project/main_app/urls.py"}, {"content": "from urllib import request\nfrom django.views.generic import DetailView\nfrom django.contrib.auth import get_user_model\nfrom main_app.models import Source\nfrom django.views.generic import ListView\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.db.models import Q\nfrom django.core.paginator import Paginator\nfrom django.contrib.auth.views import LogoutView\nfrom django.contrib import messages\nfrom extra_views import SearchableListMixin\n\nclass UserDetailView(DetailView):\n \"\"\"Detail view for User model\n\n Accessed by /users/<pk>\n \"\"\"\n\n model = get_user_model()\n context_object_name = \"user\"\n template_name = \"user_detail.html\"\n\nclass UserSourceListView(LoginRequiredMixin, ListView):\n model = Source\n context_object_name = \"sources\"\n template_name = \"user_source_list.html\"\n paginate_by = 100\n\n def get_queryset(self):\n return Source.objects.filter(\n Q(current_editors=self.request.user)\n | Q(created_by=self.request.user)\n # | Q(inventoried_by=self.request.user)\n # | Q(full_text_entered_by=self.request.user)\n # | Q(melodies_entered_by=self.request.user)\n # | Q(proofreaders=self.request.user)\n # | Q(other_editors=self.request.user) \n ).order_by(\"title\")\n \n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n user_created_sources = Source.objects.filter(created_by=self.request.user)\n paginator = Paginator(user_created_sources, 10)\n page_number = self.request.GET.get('page2')\n page_obj = paginator.get_page(page_number)\n\n context[\"user_created_sources_page_obj\"] = page_obj\n return context\n\nclass CustomLogoutView(LogoutView):\n def get_next_page(self):\n next_page = super().get_next_page()\n messages.success(\n self.request, \n 'You have successfully logged out!'\n )\n return next_page\n\nclass UserListView(LoginRequiredMixin, SearchableListMixin, ListView):\n \"\"\"Searchable List view for User model\n\n Accessed by /users/\n\n When passed a ``?q=<query>`` argument in the GET request, it will filter users\n based on the fields defined in ``search_fields`` with the ``icontains`` lookup\n \"\"\"\n\n model = get_user_model()\n search_fields = [\"first_name\", \"last_name\", \"institution\", \"city\", \"country\"]\n paginate_by = 100\n template_name = \"user_list.html\"\n context_object_name = \"users\"\n", "path": "django/cantusdb_project/main_app/views/user.py"}, {"content": "from django.db import models\nfrom django.contrib.auth.models import AbstractUser\nfrom django.urls.base import reverse\n\n\nclass User(AbstractUser):\n institution = models.CharField(max_length=255, blank=True, null=True)\n city = models.CharField(max_length=255, blank=True, null=True)\n country = models.CharField(max_length=255, blank=True, null=True)\n website = models.URLField(blank=True, null=True)\n sources_user_can_edit = models.ManyToManyField(\"main_app.Source\", 
related_name=\"users_who_can_edit_this_source\", blank=True)\n full_name = models.CharField(max_length=255, blank=True, null=True)\n\n @property\n def name(self):\n if self.full_name:\n return self.full_name\n elif self.first_name and self.last_name:\n return f'{self.first_name} {self.last_name}'\n\n def __str__(self):\n if self.name:\n return self.name\n else:\n return self.username\n\n def get_absolute_url(self) -> str:\n \"\"\"Get the absolute URL for an instance of a model.\"\"\"\n detail_name = self.__class__.__name__.lower() + \"-detail\"\n return reverse(detail_name, kwargs={\"pk\": self.pk})\n", "path": "django/cantusdb_project/users/models.py"}]} | 2,372 | 851 |
gh_patches_debug_5468 | rasdani/github-patches | git_diff | freedomofpress__securedrop-1901 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wordlists are not being parsed correctly
# Bug
## Description
`crypto_util.{words,nouns,adjectives}` all contain an empty string as their last element.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/crypto_util.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from base64 import b32encode
5 import os
6 import subprocess
7
8 from Crypto.Random import random
9 import gnupg
10 from gnupg._util import _is_stream, _make_binary_stream
11 import scrypt
12
13 import config
14 import store
15
16 # to fix gpg error #78 on production
17 os.environ['USERNAME'] = 'www-data'
18
19 GPG_KEY_TYPE = "RSA"
20 if os.environ.get('SECUREDROP_ENV') == 'test':
21 # Optimize crypto to speed up tests (at the expense of security - DO NOT
22 # use these settings in production)
23 GPG_KEY_LENGTH = 1024
24 SCRYPT_PARAMS = dict(N=2**1, r=1, p=1)
25 else: # pragma: no cover
26 GPG_KEY_LENGTH = 4096
27 SCRYPT_PARAMS = config.SCRYPT_PARAMS
28
29 SCRYPT_ID_PEPPER = config.SCRYPT_ID_PEPPER
30 SCRYPT_GPG_PEPPER = config.SCRYPT_GPG_PEPPER
31
32 DEFAULT_WORDS_IN_RANDOM_ID = 8
33
34
35 # Make sure these pass before the app can run
36 # TODO: Add more tests
37 def do_runtime_tests():
38 assert(config.SCRYPT_ID_PEPPER != config.SCRYPT_GPG_PEPPER)
39 # crash if we don't have srm:
40 try:
41 subprocess.check_call(['srm'], stdout=subprocess.PIPE)
42 except subprocess.CalledProcessError:
43 pass
44
45 do_runtime_tests()
46
47 gpg = gnupg.GPG(binary='gpg2', homedir=config.GPG_KEY_DIR)
48
49 words = open(config.WORD_LIST).read().split('\n')
50 nouns = open(config.NOUNS).read().split('\n')
51 adjectives = open(config.ADJECTIVES).read().split('\n')
52
53
54 class CryptoException(Exception):
55 pass
56
57
58 def clean(s, also=''):
59 """
60 >>> clean("Hello, world!")
61 Traceback (most recent call last):
62 ...
63 CryptoException: invalid input: Hello, world!
64 >>> clean("Helloworld")
65 'Helloworld'
66 """
67 # safe characters for every possible word in the wordlist includes capital
68 # letters because codename hashes are base32-encoded with capital letters
69 ok = (' !#%$&)(+*-1032547698;:=?@acbedgfihkjmlonqpsrutwvyxzABCDEFGHIJ'
70 'KLMNOPQRSTUVWXYZ')
71 for c in s:
72 if c not in ok and c not in also:
73 raise CryptoException("invalid input: {0}".format(s))
74 # scrypt.hash requires input of type str. Since the wordlist is all ASCII
75 # characters, this conversion is not problematic
76 return str(s)
77
78
79 def genrandomid(words_in_random_id=DEFAULT_WORDS_IN_RANDOM_ID):
80 return ' '.join(random.choice(words) for x in range(words_in_random_id))
81
82
83 def display_id():
84 return ' '.join([random.choice(adjectives), random.choice(nouns)])
85
86
87 def hash_codename(codename, salt=SCRYPT_ID_PEPPER):
88 """Salts and hashes a codename using scrypt.
89
90 :param str codename: A source's codename.
91 :param str salt: The salt to mix with the codename when hashing.
92 :returns: A base32 encoded string; the salted codename hash.
93 """
94 return b32encode(scrypt.hash(clean(codename), salt, **SCRYPT_PARAMS))
95
96
97 def genkeypair(name, secret):
98 """Generate a GPG key through batch file key generation. A source's
99 codename is salted with SCRYPT_GPG_PEPPER and hashed with scrypt to
100 provide the passphrase used to encrypt their private key. Their name
101 should be their filesystem id.
102
103 >>> if not gpg.list_keys(hash_codename('randomid')):
104 ... genkeypair(hash_codename('randomid'), 'randomid').type
105 ... else:
106 ... u'P'
107 u'P'
108
109 :param str name: The source's filesystem id (their codename, salted
110 with SCRYPT_ID_PEPPER, and hashed with scrypt).
111 :param str secret: The source's codename.
112 :returns: a :class:`GenKey <gnupg._parser.GenKey>` object, on which
113 the ``__str__()`` method may be called to return the
114 generated key's fingeprint.
115
116 """
117 name = clean(name)
118 secret = hash_codename(secret, salt=SCRYPT_GPG_PEPPER)
119 return gpg.gen_key(gpg.gen_key_input(
120 key_type=GPG_KEY_TYPE, key_length=GPG_KEY_LENGTH,
121 passphrase=secret,
122 name_email=name
123 ))
124
125
126 def delete_reply_keypair(source_filesystem_id):
127 key = getkey(source_filesystem_id)
128 # If this source was never flagged for review, they won't have a reply
129 # keypair
130 if not key:
131 return
132 # The private key needs to be deleted before the public key can be deleted
133 # http://pythonhosted.org/python-gnupg/#deleting-keys
134 gpg.delete_keys(key, True) # private key
135 gpg.delete_keys(key) # public key
136 # TODO: srm?
137
138
139 def getkey(name):
140 for key in gpg.list_keys():
141 for uid in key['uids']:
142 if name in uid:
143 return key['fingerprint']
144 return None
145
146
147 def encrypt(plaintext, fingerprints, output=None):
148 # Verify the output path
149 if output:
150 store.verify(output)
151
152 if not isinstance(fingerprints, (list, tuple)):
153 fingerprints = [fingerprints, ]
154 # Remove any spaces from provided fingerprints GPG outputs fingerprints
155 # with spaces for readability, but requires the spaces to be removed when
156 # using fingerprints to specify recipients.
157 fingerprints = [fpr.replace(' ', '') for fpr in fingerprints]
158
159 if not _is_stream(plaintext):
160 plaintext = _make_binary_stream(plaintext, "utf_8")
161
162 out = gpg.encrypt(plaintext,
163 *fingerprints,
164 output=output,
165 always_trust=True,
166 armor=False)
167 if out.ok:
168 return out.data
169 else:
170 raise CryptoException(out.stderr)
171
172
173 def decrypt(secret, ciphertext):
174 """
175 >>> key = genkeypair('randomid', 'randomid')
176 >>> decrypt('randomid',
177 ... encrypt('Goodbye, cruel world!', str(key))
178 ... )
179 'Goodbye, cruel world!'
180 """
181 hashed_codename = hash_codename(secret, salt=SCRYPT_GPG_PEPPER)
182 return gpg.decrypt(ciphertext, passphrase=hashed_codename).data
183
184 if __name__ == "__main__": # pragma: no cover
185 import doctest
186 doctest.testmod()
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/crypto_util.py b/securedrop/crypto_util.py
--- a/securedrop/crypto_util.py
+++ b/securedrop/crypto_util.py
@@ -46,9 +46,9 @@
gpg = gnupg.GPG(binary='gpg2', homedir=config.GPG_KEY_DIR)
-words = open(config.WORD_LIST).read().split('\n')
-nouns = open(config.NOUNS).read().split('\n')
-adjectives = open(config.ADJECTIVES).read().split('\n')
+words = open(config.WORD_LIST).read().rstrip('\n').split('\n')
+nouns = open(config.NOUNS).read().rstrip('\n').split('\n')
+adjectives = open(config.ADJECTIVES).read().rstrip('\n').split('\n')
class CryptoException(Exception):
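
The fix works because wordlist files conventionally end with a trailing newline, so splitting the raw file contents on `'\n'` always leaves an empty string at the end; stripping the trailing newline first removes that phantom entry, which could otherwise surface as an empty word in generated codenames. A quick illustrative snippet (the sample words are made up):

```python
# Illustrative only: why rstrip('\n') is needed before split('\n').
text = "alpha\nbravo\ncharlie\n"        # file contents ending with a newline, as wordlists do
print(text.split('\n'))                 # ['alpha', 'bravo', 'charlie', ''] -- empty last element
print(text.rstrip('\n').split('\n'))    # ['alpha', 'bravo', 'charlie']
```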
| {"golden_diff": "diff --git a/securedrop/crypto_util.py b/securedrop/crypto_util.py\n--- a/securedrop/crypto_util.py\n+++ b/securedrop/crypto_util.py\n@@ -46,9 +46,9 @@\n \n gpg = gnupg.GPG(binary='gpg2', homedir=config.GPG_KEY_DIR)\n \n-words = open(config.WORD_LIST).read().split('\\n')\n-nouns = open(config.NOUNS).read().split('\\n')\n-adjectives = open(config.ADJECTIVES).read().split('\\n')\n+words = open(config.WORD_LIST).read().rstrip('\\n').split('\\n')\n+nouns = open(config.NOUNS).read().rstrip('\\n').split('\\n')\n+adjectives = open(config.ADJECTIVES).read().rstrip('\\n').split('\\n')\n \n \n class CryptoException(Exception):\n", "issue": "Wordlists are not being parsed correctly\n# Bug\r\n\r\n## Description\r\n\r\n`crypo_util.{words,nouns,adjectives}` all contain an empty string as their last element.\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom base64 import b32encode\nimport os\nimport subprocess\n\nfrom Crypto.Random import random\nimport gnupg\nfrom gnupg._util import _is_stream, _make_binary_stream\nimport scrypt\n\nimport config\nimport store\n\n# to fix gpg error #78 on production\nos.environ['USERNAME'] = 'www-data'\n\nGPG_KEY_TYPE = \"RSA\"\nif os.environ.get('SECUREDROP_ENV') == 'test':\n # Optimize crypto to speed up tests (at the expense of security - DO NOT\n # use these settings in production)\n GPG_KEY_LENGTH = 1024\n SCRYPT_PARAMS = dict(N=2**1, r=1, p=1)\nelse: # pragma: no cover\n GPG_KEY_LENGTH = 4096\n SCRYPT_PARAMS = config.SCRYPT_PARAMS\n\nSCRYPT_ID_PEPPER = config.SCRYPT_ID_PEPPER\nSCRYPT_GPG_PEPPER = config.SCRYPT_GPG_PEPPER\n\nDEFAULT_WORDS_IN_RANDOM_ID = 8\n\n\n# Make sure these pass before the app can run\n# TODO: Add more tests\ndef do_runtime_tests():\n assert(config.SCRYPT_ID_PEPPER != config.SCRYPT_GPG_PEPPER)\n # crash if we don't have srm:\n try:\n subprocess.check_call(['srm'], stdout=subprocess.PIPE)\n except subprocess.CalledProcessError:\n pass\n\ndo_runtime_tests()\n\ngpg = gnupg.GPG(binary='gpg2', homedir=config.GPG_KEY_DIR)\n\nwords = open(config.WORD_LIST).read().split('\\n')\nnouns = open(config.NOUNS).read().split('\\n')\nadjectives = open(config.ADJECTIVES).read().split('\\n')\n\n\nclass CryptoException(Exception):\n pass\n\n\ndef clean(s, also=''):\n \"\"\"\n >>> clean(\"Hello, world!\")\n Traceback (most recent call last):\n ...\n CryptoException: invalid input: Hello, world!\n >>> clean(\"Helloworld\")\n 'Helloworld'\n \"\"\"\n # safe characters for every possible word in the wordlist includes capital\n # letters because codename hashes are base32-encoded with capital letters\n ok = (' !#%$&)(+*-1032547698;:=?@acbedgfihkjmlonqpsrutwvyxzABCDEFGHIJ'\n 'KLMNOPQRSTUVWXYZ')\n for c in s:\n if c not in ok and c not in also:\n raise CryptoException(\"invalid input: {0}\".format(s))\n # scrypt.hash requires input of type str. 
Since the wordlist is all ASCII\n # characters, this conversion is not problematic\n return str(s)\n\n\ndef genrandomid(words_in_random_id=DEFAULT_WORDS_IN_RANDOM_ID):\n return ' '.join(random.choice(words) for x in range(words_in_random_id))\n\n\ndef display_id():\n return ' '.join([random.choice(adjectives), random.choice(nouns)])\n\n\ndef hash_codename(codename, salt=SCRYPT_ID_PEPPER):\n \"\"\"Salts and hashes a codename using scrypt.\n\n :param str codename: A source's codename.\n :param str salt: The salt to mix with the codename when hashing.\n :returns: A base32 encoded string; the salted codename hash.\n \"\"\"\n return b32encode(scrypt.hash(clean(codename), salt, **SCRYPT_PARAMS))\n\n\ndef genkeypair(name, secret):\n \"\"\"Generate a GPG key through batch file key generation. A source's\n codename is salted with SCRYPT_GPG_PEPPER and hashed with scrypt to\n provide the passphrase used to encrypt their private key. Their name\n should be their filesystem id.\n\n >>> if not gpg.list_keys(hash_codename('randomid')):\n ... genkeypair(hash_codename('randomid'), 'randomid').type\n ... else:\n ... u'P'\n u'P'\n\n :param str name: The source's filesystem id (their codename, salted\n with SCRYPT_ID_PEPPER, and hashed with scrypt).\n :param str secret: The source's codename.\n :returns: a :class:`GenKey <gnupg._parser.GenKey>` object, on which\n the ``__str__()`` method may be called to return the\n generated key's fingeprint.\n\n \"\"\"\n name = clean(name)\n secret = hash_codename(secret, salt=SCRYPT_GPG_PEPPER)\n return gpg.gen_key(gpg.gen_key_input(\n key_type=GPG_KEY_TYPE, key_length=GPG_KEY_LENGTH,\n passphrase=secret,\n name_email=name\n ))\n\n\ndef delete_reply_keypair(source_filesystem_id):\n key = getkey(source_filesystem_id)\n # If this source was never flagged for review, they won't have a reply\n # keypair\n if not key:\n return\n # The private key needs to be deleted before the public key can be deleted\n # http://pythonhosted.org/python-gnupg/#deleting-keys\n gpg.delete_keys(key, True) # private key\n gpg.delete_keys(key) # public key\n # TODO: srm?\n\n\ndef getkey(name):\n for key in gpg.list_keys():\n for uid in key['uids']:\n if name in uid:\n return key['fingerprint']\n return None\n\n\ndef encrypt(plaintext, fingerprints, output=None):\n # Verify the output path\n if output:\n store.verify(output)\n\n if not isinstance(fingerprints, (list, tuple)):\n fingerprints = [fingerprints, ]\n # Remove any spaces from provided fingerprints GPG outputs fingerprints\n # with spaces for readability, but requires the spaces to be removed when\n # using fingerprints to specify recipients.\n fingerprints = [fpr.replace(' ', '') for fpr in fingerprints]\n\n if not _is_stream(plaintext):\n plaintext = _make_binary_stream(plaintext, \"utf_8\")\n\n out = gpg.encrypt(plaintext,\n *fingerprints,\n output=output,\n always_trust=True,\n armor=False)\n if out.ok:\n return out.data\n else:\n raise CryptoException(out.stderr)\n\n\ndef decrypt(secret, ciphertext):\n \"\"\"\n >>> key = genkeypair('randomid', 'randomid')\n >>> decrypt('randomid',\n ... encrypt('Goodbye, cruel world!', str(key))\n ... 
)\n 'Goodbye, cruel world!'\n \"\"\"\n hashed_codename = hash_codename(secret, salt=SCRYPT_GPG_PEPPER)\n return gpg.decrypt(ciphertext, passphrase=hashed_codename).data\n\nif __name__ == \"__main__\": # pragma: no cover\n import doctest\n doctest.testmod()\n", "path": "securedrop/crypto_util.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom base64 import b32encode\nimport os\nimport subprocess\n\nfrom Crypto.Random import random\nimport gnupg\nfrom gnupg._util import _is_stream, _make_binary_stream\nimport scrypt\n\nimport config\nimport store\n\n# to fix gpg error #78 on production\nos.environ['USERNAME'] = 'www-data'\n\nGPG_KEY_TYPE = \"RSA\"\nif os.environ.get('SECUREDROP_ENV') == 'test':\n # Optimize crypto to speed up tests (at the expense of security - DO NOT\n # use these settings in production)\n GPG_KEY_LENGTH = 1024\n SCRYPT_PARAMS = dict(N=2**1, r=1, p=1)\nelse: # pragma: no cover\n GPG_KEY_LENGTH = 4096\n SCRYPT_PARAMS = config.SCRYPT_PARAMS\n\nSCRYPT_ID_PEPPER = config.SCRYPT_ID_PEPPER\nSCRYPT_GPG_PEPPER = config.SCRYPT_GPG_PEPPER\n\nDEFAULT_WORDS_IN_RANDOM_ID = 8\n\n\n# Make sure these pass before the app can run\n# TODO: Add more tests\ndef do_runtime_tests():\n assert(config.SCRYPT_ID_PEPPER != config.SCRYPT_GPG_PEPPER)\n # crash if we don't have srm:\n try:\n subprocess.check_call(['srm'], stdout=subprocess.PIPE)\n except subprocess.CalledProcessError:\n pass\n\ndo_runtime_tests()\n\ngpg = gnupg.GPG(binary='gpg2', homedir=config.GPG_KEY_DIR)\n\nwords = open(config.WORD_LIST).read().rstrip('\\n').split('\\n')\nnouns = open(config.NOUNS).read().rstrip('\\n').split('\\n')\nadjectives = open(config.ADJECTIVES).read().rstrip('\\n').split('\\n')\n\n\nclass CryptoException(Exception):\n pass\n\n\ndef clean(s, also=''):\n \"\"\"\n >>> clean(\"Hello, world!\")\n Traceback (most recent call last):\n ...\n CryptoException: invalid input: Hello, world!\n >>> clean(\"Helloworld\")\n 'Helloworld'\n \"\"\"\n # safe characters for every possible word in the wordlist includes capital\n # letters because codename hashes are base32-encoded with capital letters\n ok = (' !#%$&)(+*-1032547698;:=?@acbedgfihkjmlonqpsrutwvyxzABCDEFGHIJ'\n 'KLMNOPQRSTUVWXYZ')\n for c in s:\n if c not in ok and c not in also:\n raise CryptoException(\"invalid input: {0}\".format(s))\n # scrypt.hash requires input of type str. Since the wordlist is all ASCII\n # characters, this conversion is not problematic\n return str(s)\n\n\ndef genrandomid(words_in_random_id=DEFAULT_WORDS_IN_RANDOM_ID):\n return ' '.join(random.choice(words) for x in range(words_in_random_id))\n\n\ndef display_id():\n return ' '.join([random.choice(adjectives), random.choice(nouns)])\n\n\ndef hash_codename(codename, salt=SCRYPT_ID_PEPPER):\n \"\"\"Salts and hashes a codename using scrypt.\n\n :param str codename: A source's codename.\n :param str salt: The salt to mix with the codename when hashing.\n :returns: A base32 encoded string; the salted codename hash.\n \"\"\"\n return b32encode(scrypt.hash(clean(codename), salt, **SCRYPT_PARAMS))\n\n\ndef genkeypair(name, secret):\n \"\"\"Generate a GPG key through batch file key generation. A source's\n codename is salted with SCRYPT_GPG_PEPPER and hashed with scrypt to\n provide the passphrase used to encrypt their private key. Their name\n should be their filesystem id.\n\n >>> if not gpg.list_keys(hash_codename('randomid')):\n ... genkeypair(hash_codename('randomid'), 'randomid').type\n ... else:\n ... 
u'P'\n u'P'\n\n :param str name: The source's filesystem id (their codename, salted\n with SCRYPT_ID_PEPPER, and hashed with scrypt).\n :param str secret: The source's codename.\n :returns: a :class:`GenKey <gnupg._parser.GenKey>` object, on which\n the ``__str__()`` method may be called to return the\n generated key's fingeprint.\n\n \"\"\"\n name = clean(name)\n secret = hash_codename(secret, salt=SCRYPT_GPG_PEPPER)\n return gpg.gen_key(gpg.gen_key_input(\n key_type=GPG_KEY_TYPE, key_length=GPG_KEY_LENGTH,\n passphrase=secret,\n name_email=name\n ))\n\n\ndef delete_reply_keypair(source_filesystem_id):\n key = getkey(source_filesystem_id)\n # If this source was never flagged for review, they won't have a reply\n # keypair\n if not key:\n return\n # The private key needs to be deleted before the public key can be deleted\n # http://pythonhosted.org/python-gnupg/#deleting-keys\n gpg.delete_keys(key, True) # private key\n gpg.delete_keys(key) # public key\n # TODO: srm?\n\n\ndef getkey(name):\n for key in gpg.list_keys():\n for uid in key['uids']:\n if name in uid:\n return key['fingerprint']\n return None\n\n\ndef encrypt(plaintext, fingerprints, output=None):\n # Verify the output path\n if output:\n store.verify(output)\n\n if not isinstance(fingerprints, (list, tuple)):\n fingerprints = [fingerprints, ]\n # Remove any spaces from provided fingerprints GPG outputs fingerprints\n # with spaces for readability, but requires the spaces to be removed when\n # using fingerprints to specify recipients.\n fingerprints = [fpr.replace(' ', '') for fpr in fingerprints]\n\n if not _is_stream(plaintext):\n plaintext = _make_binary_stream(plaintext, \"utf_8\")\n\n out = gpg.encrypt(plaintext,\n *fingerprints,\n output=output,\n always_trust=True,\n armor=False)\n if out.ok:\n return out.data\n else:\n raise CryptoException(out.stderr)\n\n\ndef decrypt(secret, ciphertext):\n \"\"\"\n >>> key = genkeypair('randomid', 'randomid')\n >>> decrypt('randomid',\n ... encrypt('Goodbye, cruel world!', str(key))\n ... )\n 'Goodbye, cruel world!'\n \"\"\"\n hashed_codename = hash_codename(secret, salt=SCRYPT_GPG_PEPPER)\n return gpg.decrypt(ciphertext, passphrase=hashed_codename).data\n\nif __name__ == \"__main__\": # pragma: no cover\n import doctest\n doctest.testmod()\n", "path": "securedrop/crypto_util.py"}]} | 2,254 | 182 |
gh_patches_debug_36567 | rasdani/github-patches | git_diff | Slicer__ExtensionsIndex-1759 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bad dependencies kill entire extension build
[SlicerVideoCamera name change](https://github.com/Slicer/ExtensionsIndex/commit/93d1942ed51a5c576f477dab77df9529ce788754) introduced this [bug](https://github.com/Slicer/ExtensionsIndex/commit/4181b49933cca4bf1420d1b8f7b54017bbfe131c) where an extension had a non-existent dependency.
The resulting [CMake Error](https://slicer.cdash.org/build/2225046/configure) terminated the whole build process.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/check_description_files.py`
Content:
```
1 #!/usr/bin/env python
2
3 """
4 Python 3.x CLI for validating extension description files.
5 """
6
7 import argparse
8 import os
9 import sys
10 import textwrap
11 import urllib.parse as urlparse
12
13 from functools import wraps
14
15
16 class ExtensionCheckError(RuntimeError):
17 """Exception raised when a particular extension check failed.
18 """
19 def __init__(self, extension_name, check_name, details):
20 self.extension_name = extension_name
21 self.check_name = check_name
22 self.details = details
23
24 def __str__(self):
25 return self.details
26
27
28 def require_metadata_key(metadata_key):
29 check_name = "require_metadata_key"
30
31 def dec(fun):
32 @wraps(fun)
33 def wrapped(*args, **kwargs):
34 extension_name = args[0]
35 metadata = args[1]
36 if metadata_key not in metadata.keys():
37 raise ExtensionCheckError(extension_name, check_name, "%s key is missing" % metadata_key)
38 return fun(*args, **kwargs)
39 return wrapped
40 return dec
41
42
43 def parse_s4ext(ext_file_path):
44 """Parse a Slicer extension description file.
45 :param ext_file_path: Path to a Slicer extension description file.
46 :return: Dictionary of extension metadata.
47 """
48 ext_metadata = {}
49 with open(ext_file_path) as ext_file:
50 for line in ext_file:
51 if not line.strip() or line.startswith("#"):
52 continue
53 fields = [field.strip() for field in line.split(' ', 1)]
54 assert(len(fields) <= 2)
55 ext_metadata[fields[0]] = fields[1] if len(fields) == 2 else None
56 return ext_metadata
57
58
59 @require_metadata_key("scmurl")
60 def check_scmurl_syntax(extension_name, metadata):
61 check_name = "check_scmurl_syntax"
62
63 if "://" not in metadata["scmurl"]:
64 raise ExtensionCheckError(extension_name, check_name, "scmurl do not match scheme://host/path")
65
66 supported_schemes = ["git", "https", "svn"]
67 scheme = urlparse.urlsplit(metadata["scmurl"]).scheme
68 if scheme not in supported_schemes:
69 raise ExtensionCheckError(
70 extension_name, check_name,
71 "scmurl scheme is '%s' but it should by any of %s" % (scheme, supported_schemes))
72
73
74 @require_metadata_key("scmurl")
75 @require_metadata_key("scm")
76 def check_git_repository_name(extension_name, metadata):
77 """See https://www.slicer.org/wiki/Documentation/Nightly/Developers/FAQ#Should_the_name_of_the_source_repository_match_the_name_of_the_extension_.3F
78 """
79 check_name = "check_git_repository_name"
80
81 if metadata["scm"] != "git":
82 return
83
84 repo_name = os.path.splitext(urlparse.urlsplit(metadata["scmurl"]).path.split("/")[-1])[0]
85
86 if not repo_name.startswith("Slicer"):
87
88 variations = [prefix + repo_name for prefix in ["Slicer-", "Slicer_", "SlicerExtension-", "SlicerExtension_"]]
89
90 raise ExtensionCheckError(
91 extension_name, check_name,
92 textwrap.dedent("""
93 extension repository name is '%s'. Please, consider changing it to 'Slicer%s' or any of
94 these variations %s.
95 """ % (
96 repo_name, repo_name, variations)))
97
98
99 def main():
100 parser = argparse.ArgumentParser(
101 description='Validate extension description files.')
102 parser.add_argument(
103 "--check-git-repository-name", action="store_true",
104 help="Check extension git repository name. Disabled by default.")
105 parser.add_argument("/path/to/description.s4ext", nargs='*')
106 args = parser.parse_args()
107
108 checks = []
109
110 if args.check_git_repository_name:
111 checks.append(check_git_repository_name)
112
113 if not checks:
114 checks = [
115 check_scmurl_syntax,
116 ]
117
118 total_failure_count = 0
119
120 file_paths = getattr(args, "/path/to/description.s4ext")
121 for file_path in file_paths:
122 extension_name = os.path.splitext(os.path.basename(file_path))[0]
123
124 failures = []
125
126 metadata = parse_s4ext(file_path)
127 for check in checks:
128 try:
129 check(extension_name, metadata)
130 except ExtensionCheckError as exc:
131 failures.append(str(exc))
132
133 if failures:
134 total_failure_count += len(failures)
135 print("%s.s4ext" % extension_name)
136 for failure in set(failures):
137 print(" %s" % failure)
138
139 print("Checked %d description files: Found %d errors" % (len(file_paths), total_failure_count))
140 sys.exit(total_failure_count)
141
142
143 if __name__ == "__main__":
144 main()
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/check_description_files.py b/scripts/check_description_files.py
--- a/scripts/check_description_files.py
+++ b/scripts/check_description_files.py
@@ -95,6 +95,38 @@
""" % (
repo_name, repo_name, variations)))
+def check_dependencies(directory):
+ import os
+ required_extensions = {} # for each extension it contains a list of extensions that require it
+ available_extensions = []
+ for filename in os.listdir(directory):
+ f = os.path.join(directory, filename)
+ if not os.path.isfile(f):
+ continue
+ extension_name = os.path.splitext(os.path.basename(filename))[0]
+ available_extensions.append(extension_name)
+ extension_description = parse_s4ext(f)
+ if 'depends' not in extension_description:
+ continue
+ dependencies = extension_description['depends'].split(' ')
+ for dependency in dependencies:
+ if dependency == 'NA':
+ # special value, just a placeholder that must be ignored
+ continue
+ if dependency in required_extensions:
+ required_extensions[dependency].append(extension_name)
+ else:
+ required_extensions[dependency] = [extension_name]
+ print(f"Checked dependency between {len(available_extensions)} extensions.")
+ error_count = 0
+ for extension in required_extensions:
+ if extension in available_extensions:
+ # required extension is found
+ continue
+ required_by_extensions = ', '.join(required_extensions[extension])
+ print(f"{extension} extension is not found. It is required by extension: {required_by_extensions}.")
+ error_count += 1
+ return error_count
def main():
parser = argparse.ArgumentParser(
@@ -102,6 +134,7 @@
parser.add_argument(
"--check-git-repository-name", action="store_true",
help="Check extension git repository name. Disabled by default.")
+ parser.add_argument("-d", "--check-dependencies", help="Check all extension dsecription files in the provided folder.")
parser.add_argument("/path/to/description.s4ext", nargs='*')
args = parser.parse_args()
@@ -136,7 +169,13 @@
for failure in set(failures):
print(" %s" % failure)
- print("Checked %d description files: Found %d errors" % (len(file_paths), total_failure_count))
+ print(f"Checked content of {len(file_paths)} description files.")
+
+
+ if args.check_dependencies:
+ total_failure_count += check_dependencies(args.check_dependencies)
+
+ print(f"Total errors found in extension descriptions: {total_failure_count}")
sys.exit(total_failure_count)
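
The new `check_dependencies()` pass inverts the `depends` fields into a map from each required extension to the extensions that declare it, then flags any requirement with no matching description file, so a stale name such as the renamed SlicerVideoCamera is reported as a validation error instead of aborting the whole CMake superbuild. A toy illustration of that inversion (the extension names and `depends` values below are hypothetical):

```python
# Illustrative sketch of the dependency inversion performed by the new check.
descriptions = {
    "SlicerIGT": {"depends": "NA"},                              # "NA" is the "no dependencies" placeholder
    "SlicerVideoCameras": {"depends": "SlicerIGT SlicerOpenCV"},
}
required_by = {}
for name, desc in descriptions.items():
    for dep in desc["depends"].split(" "):
        if dep != "NA":
            required_by.setdefault(dep, []).append(name)
print(required_by)  # {'SlicerIGT': ['SlicerVideoCameras'], 'SlicerOpenCV': ['SlicerVideoCameras']}
# Any key here without a corresponding .s4ext file would be reported as an error.
```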
| {"golden_diff": "diff --git a/scripts/check_description_files.py b/scripts/check_description_files.py\n--- a/scripts/check_description_files.py\n+++ b/scripts/check_description_files.py\n@@ -95,6 +95,38 @@\n \"\"\" % (\n repo_name, repo_name, variations)))\n \n+def check_dependencies(directory):\n+ import os\n+ required_extensions = {} # for each extension it contains a list of extensions that require it\n+ available_extensions = []\n+ for filename in os.listdir(directory):\n+ f = os.path.join(directory, filename)\n+ if not os.path.isfile(f):\n+ continue\n+ extension_name = os.path.splitext(os.path.basename(filename))[0]\n+ available_extensions.append(extension_name)\n+ extension_description = parse_s4ext(f)\n+ if 'depends' not in extension_description:\n+ continue\n+ dependencies = extension_description['depends'].split(' ')\n+ for dependency in dependencies:\n+ if dependency == 'NA':\n+ # special value, just a placeholder that must be ignored\n+ continue\n+ if dependency in required_extensions:\n+ required_extensions[dependency].append(extension_name)\n+ else:\n+ required_extensions[dependency] = [extension_name]\n+ print(f\"Checked dependency between {len(available_extensions)} extensions.\")\n+ error_count = 0\n+ for extension in required_extensions:\n+ if extension in available_extensions:\n+ # required extension is found\n+ continue\n+ required_by_extensions = ', '.join(required_extensions[extension])\n+ print(f\"{extension} extension is not found. It is required by extension: {required_by_extensions}.\")\n+ error_count += 1\n+ return error_count\n \n def main():\n parser = argparse.ArgumentParser(\n@@ -102,6 +134,7 @@\n parser.add_argument(\n \"--check-git-repository-name\", action=\"store_true\",\n help=\"Check extension git repository name. 
Disabled by default.\")\n+ parser.add_argument(\"-d\", \"--check-dependencies\", help=\"Check all extension dsecription files in the provided folder.\")\n parser.add_argument(\"/path/to/description.s4ext\", nargs='*')\n args = parser.parse_args()\n \n@@ -136,7 +169,13 @@\n for failure in set(failures):\n print(\" %s\" % failure)\n \n- print(\"Checked %d description files: Found %d errors\" % (len(file_paths), total_failure_count))\n+ print(f\"Checked content of {len(file_paths)} description files.\")\n+\n+\n+ if args.check_dependencies:\n+ total_failure_count += check_dependencies(args.check_dependencies)\n+\n+ print(f\"Total errors found in extension descriptions: {total_failure_count}\")\n sys.exit(total_failure_count)\n", "issue": "Bad dependencies kill entire extension build\n[SlicerVideoCamera name change](https://github.com/Slicer/ExtensionsIndex/commit/93d1942ed51a5c576f477dab77df9529ce788754) introduced this [bug](https://github.com/Slicer/ExtensionsIndex/commit/4181b49933cca4bf1420d1b8f7b54017bbfe131c) where an extension had a non-existent dependency.\r\n\r\nResulting [CMake Error](https://slicer.cdash.org/build/2225046/configure) terminated the whole build process.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nPython 3.x CLI for validating extension description files.\n\"\"\"\n\nimport argparse\nimport os\nimport sys\nimport textwrap\nimport urllib.parse as urlparse\n\nfrom functools import wraps\n\n\nclass ExtensionCheckError(RuntimeError):\n \"\"\"Exception raised when a particular extension check failed.\n \"\"\"\n def __init__(self, extension_name, check_name, details):\n self.extension_name = extension_name\n self.check_name = check_name\n self.details = details\n\n def __str__(self):\n return self.details\n\n\ndef require_metadata_key(metadata_key):\n check_name = \"require_metadata_key\"\n\n def dec(fun):\n @wraps(fun)\n def wrapped(*args, **kwargs):\n extension_name = args[0]\n metadata = args[1]\n if metadata_key not in metadata.keys():\n raise ExtensionCheckError(extension_name, check_name, \"%s key is missing\" % metadata_key)\n return fun(*args, **kwargs)\n return wrapped\n return dec\n\n\ndef parse_s4ext(ext_file_path):\n \"\"\"Parse a Slicer extension description file.\n :param ext_file_path: Path to a Slicer extension description file.\n :return: Dictionary of extension metadata.\n \"\"\"\n ext_metadata = {}\n with open(ext_file_path) as ext_file:\n for line in ext_file:\n if not line.strip() or line.startswith(\"#\"):\n continue\n fields = [field.strip() for field in line.split(' ', 1)]\n assert(len(fields) <= 2)\n ext_metadata[fields[0]] = fields[1] if len(fields) == 2 else None\n return ext_metadata\n\n\n@require_metadata_key(\"scmurl\")\ndef check_scmurl_syntax(extension_name, metadata):\n check_name = \"check_scmurl_syntax\"\n\n if \"://\" not in metadata[\"scmurl\"]:\n raise ExtensionCheckError(extension_name, check_name, \"scmurl do not match scheme://host/path\")\n\n supported_schemes = [\"git\", \"https\", \"svn\"]\n scheme = urlparse.urlsplit(metadata[\"scmurl\"]).scheme\n if scheme not in supported_schemes:\n raise ExtensionCheckError(\n extension_name, check_name,\n \"scmurl scheme is '%s' but it should by any of %s\" % (scheme, supported_schemes))\n\n\n@require_metadata_key(\"scmurl\")\n@require_metadata_key(\"scm\")\ndef check_git_repository_name(extension_name, metadata):\n \"\"\"See https://www.slicer.org/wiki/Documentation/Nightly/Developers/FAQ#Should_the_name_of_the_source_repository_match_the_name_of_the_extension_.3F\n 
\"\"\"\n check_name = \"check_git_repository_name\"\n\n if metadata[\"scm\"] != \"git\":\n return\n\n repo_name = os.path.splitext(urlparse.urlsplit(metadata[\"scmurl\"]).path.split(\"/\")[-1])[0]\n\n if not repo_name.startswith(\"Slicer\"):\n\n variations = [prefix + repo_name for prefix in [\"Slicer-\", \"Slicer_\", \"SlicerExtension-\", \"SlicerExtension_\"]]\n\n raise ExtensionCheckError(\n extension_name, check_name,\n textwrap.dedent(\"\"\"\n extension repository name is '%s'. Please, consider changing it to 'Slicer%s' or any of\n these variations %s.\n \"\"\" % (\n repo_name, repo_name, variations)))\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description='Validate extension description files.')\n parser.add_argument(\n \"--check-git-repository-name\", action=\"store_true\",\n help=\"Check extension git repository name. Disabled by default.\")\n parser.add_argument(\"/path/to/description.s4ext\", nargs='*')\n args = parser.parse_args()\n\n checks = []\n\n if args.check_git_repository_name:\n checks.append(check_git_repository_name)\n\n if not checks:\n checks = [\n check_scmurl_syntax,\n ]\n\n total_failure_count = 0\n\n file_paths = getattr(args, \"/path/to/description.s4ext\")\n for file_path in file_paths:\n extension_name = os.path.splitext(os.path.basename(file_path))[0]\n\n failures = []\n \n metadata = parse_s4ext(file_path)\n for check in checks:\n try:\n check(extension_name, metadata)\n except ExtensionCheckError as exc:\n failures.append(str(exc))\n\n if failures:\n total_failure_count += len(failures)\n print(\"%s.s4ext\" % extension_name)\n for failure in set(failures):\n print(\" %s\" % failure)\n\n print(\"Checked %d description files: Found %d errors\" % (len(file_paths), total_failure_count))\n sys.exit(total_failure_count)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "scripts/check_description_files.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\nPython 3.x CLI for validating extension description files.\n\"\"\"\n\nimport argparse\nimport os\nimport sys\nimport textwrap\nimport urllib.parse as urlparse\n\nfrom functools import wraps\n\n\nclass ExtensionCheckError(RuntimeError):\n \"\"\"Exception raised when a particular extension check failed.\n \"\"\"\n def __init__(self, extension_name, check_name, details):\n self.extension_name = extension_name\n self.check_name = check_name\n self.details = details\n\n def __str__(self):\n return self.details\n\n\ndef require_metadata_key(metadata_key):\n check_name = \"require_metadata_key\"\n\n def dec(fun):\n @wraps(fun)\n def wrapped(*args, **kwargs):\n extension_name = args[0]\n metadata = args[1]\n if metadata_key not in metadata.keys():\n raise ExtensionCheckError(extension_name, check_name, \"%s key is missing\" % metadata_key)\n return fun(*args, **kwargs)\n return wrapped\n return dec\n\n\ndef parse_s4ext(ext_file_path):\n \"\"\"Parse a Slicer extension description file.\n :param ext_file_path: Path to a Slicer extension description file.\n :return: Dictionary of extension metadata.\n \"\"\"\n ext_metadata = {}\n with open(ext_file_path) as ext_file:\n for line in ext_file:\n if not line.strip() or line.startswith(\"#\"):\n continue\n fields = [field.strip() for field in line.split(' ', 1)]\n assert(len(fields) <= 2)\n ext_metadata[fields[0]] = fields[1] if len(fields) == 2 else None\n return ext_metadata\n\n\n@require_metadata_key(\"scmurl\")\ndef check_scmurl_syntax(extension_name, metadata):\n check_name = \"check_scmurl_syntax\"\n\n if \"://\" not in 
metadata[\"scmurl\"]:\n raise ExtensionCheckError(extension_name, check_name, \"scmurl do not match scheme://host/path\")\n\n supported_schemes = [\"git\", \"https\", \"svn\"]\n scheme = urlparse.urlsplit(metadata[\"scmurl\"]).scheme\n if scheme not in supported_schemes:\n raise ExtensionCheckError(\n extension_name, check_name,\n \"scmurl scheme is '%s' but it should by any of %s\" % (scheme, supported_schemes))\n\n\n@require_metadata_key(\"scmurl\")\n@require_metadata_key(\"scm\")\ndef check_git_repository_name(extension_name, metadata):\n \"\"\"See https://www.slicer.org/wiki/Documentation/Nightly/Developers/FAQ#Should_the_name_of_the_source_repository_match_the_name_of_the_extension_.3F\n \"\"\"\n check_name = \"check_git_repository_name\"\n\n if metadata[\"scm\"] != \"git\":\n return\n\n repo_name = os.path.splitext(urlparse.urlsplit(metadata[\"scmurl\"]).path.split(\"/\")[-1])[0]\n\n if not repo_name.startswith(\"Slicer\"):\n\n variations = [prefix + repo_name for prefix in [\"Slicer-\", \"Slicer_\", \"SlicerExtension-\", \"SlicerExtension_\"]]\n\n raise ExtensionCheckError(\n extension_name, check_name,\n textwrap.dedent(\"\"\"\n extension repository name is '%s'. Please, consider changing it to 'Slicer%s' or any of\n these variations %s.\n \"\"\" % (\n repo_name, repo_name, variations)))\n\ndef check_dependencies(directory):\n import os\n required_extensions = {} # for each extension it contains a list of extensions that require it\n available_extensions = []\n for filename in os.listdir(directory):\n f = os.path.join(directory, filename)\n if not os.path.isfile(f):\n continue\n extension_name = os.path.splitext(os.path.basename(filename))[0]\n available_extensions.append(extension_name)\n extension_description = parse_s4ext(f)\n if 'depends' not in extension_description:\n continue\n dependencies = extension_description['depends'].split(' ')\n for dependency in dependencies:\n if dependency == 'NA':\n # special value, just a placeholder that must be ignored\n continue\n if dependency in required_extensions:\n required_extensions[dependency].append(extension_name)\n else:\n required_extensions[dependency] = [extension_name]\n print(f\"Checked dependency between {len(available_extensions)} extensions.\")\n error_count = 0\n for extension in required_extensions:\n if extension in available_extensions:\n # required extension is found\n continue\n required_by_extensions = ', '.join(required_extensions[extension])\n print(f\"{extension} extension is not found. It is required by extension: {required_by_extensions}.\")\n error_count += 1\n return error_count\n\ndef main():\n parser = argparse.ArgumentParser(\n description='Validate extension description files.')\n parser.add_argument(\n \"--check-git-repository-name\", action=\"store_true\",\n help=\"Check extension git repository name. 
Disabled by default.\")\n parser.add_argument(\"-d\", \"--check-dependencies\", help=\"Check all extension dsecription files in the provided folder.\")\n parser.add_argument(\"/path/to/description.s4ext\", nargs='*')\n args = parser.parse_args()\n\n checks = []\n\n if args.check_git_repository_name:\n checks.append(check_git_repository_name)\n\n if not checks:\n checks = [\n check_scmurl_syntax,\n ]\n\n total_failure_count = 0\n\n file_paths = getattr(args, \"/path/to/description.s4ext\")\n for file_path in file_paths:\n extension_name = os.path.splitext(os.path.basename(file_path))[0]\n\n failures = []\n \n metadata = parse_s4ext(file_path)\n for check in checks:\n try:\n check(extension_name, metadata)\n except ExtensionCheckError as exc:\n failures.append(str(exc))\n\n if failures:\n total_failure_count += len(failures)\n print(\"%s.s4ext\" % extension_name)\n for failure in set(failures):\n print(\" %s\" % failure)\n\n print(f\"Checked content of {len(file_paths)} description files.\")\n\n\n if args.check_dependencies:\n total_failure_count += check_dependencies(args.check_dependencies)\n\n print(f\"Total errors found in extension descriptions: {total_failure_count}\")\n sys.exit(total_failure_count)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "scripts/check_description_files.py"}]} | 1,774 | 591 |
gh_patches_debug_32378 | rasdani/github-patches | git_diff | optuna__optuna-4684 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove experimental label from `_ProgressBar`
### Motivation
Several issues related to `_ProgressBar` have already been addressed (ref: https://github.com/optuna/optuna/issues/2892, https://github.com/optuna/optuna/issues/2957, https://github.com/optuna/optuna/issues/2958), so we can now remove the experimental label from `_ProgressBar`.
### Suggestion
Remove the `@experimental_func` decorator from `_ProgressBar`. Also, the `_init_valid` method can be removed, as explained in the [TODO comment](https://github.com/optuna/optuna/blob/806448420863606c113aeb2e33457acf022be066/optuna/progress_bar.py#L57C28-L58).
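A minimal sketch of what the inlined initialization could look like (method body only; it simply moves the existing `_init_valid` body into `__init__` and drops the decorator, with names and imports taken from the `optuna/progress_bar.py` listing below):

```python
def __init__(
    self,
    is_valid: bool,
    n_trials: Optional[int] = None,
    timeout: Optional[float] = None,
) -> None:
    self._is_valid = is_valid and (n_trials or timeout) is not None
    self._n_trials = n_trials
    self._timeout = timeout
    self._last_elapsed_seconds = 0.0

    if self._is_valid:
        # Former _init_valid body, inlined now that the feature is no longer experimental.
        if self._n_trials is not None:
            self._progress_bar = tqdm(total=self._n_trials)
        elif self._timeout is not None:
            total = tqdm.format_interval(self._timeout)
            fmt = "{desc} {percentage:3.0f}%|{bar}| {elapsed}/" + total
            self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)
        else:
            assert False
        # The tqdm logging handler setup from _init_valid moves here unchanged as well.
```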
### Additional context (optional)
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/progress_bar.py`
Content:
```
1 import logging
2 from typing import Any
3 from typing import Optional
4 from typing import TYPE_CHECKING
5
6 from tqdm.auto import tqdm
7
8 from optuna import logging as optuna_logging
9 from optuna._experimental import experimental_func
10
11
12 if TYPE_CHECKING:
13 from optuna.study import Study
14
15 _tqdm_handler: Optional["_TqdmLoggingHandler"] = None
16
17
18 # Reference: https://gist.github.com/hvy/8b80c2cedf02b15c24f85d1fa17ebe02
19 class _TqdmLoggingHandler(logging.StreamHandler):
20 def emit(self, record: Any) -> None:
21 try:
22 msg = self.format(record)
23 tqdm.write(msg)
24 self.flush()
25 except (KeyboardInterrupt, SystemExit):
26 raise
27 except Exception:
28 self.handleError(record)
29
30
31 class _ProgressBar:
32 """Progress Bar implementation for :func:`~optuna.study.Study.optimize` on the top of `tqdm`.
33
34 Args:
35 is_valid:
36 Whether to show progress bars in :func:`~optuna.study.Study.optimize`.
37 n_trials:
38 The number of trials.
39 timeout:
40 Stop study after the given number of second(s).
41 """
42
43 def __init__(
44 self,
45 is_valid: bool,
46 n_trials: Optional[int] = None,
47 timeout: Optional[float] = None,
48 ) -> None:
49 self._is_valid = is_valid and (n_trials or timeout) is not None
50 self._n_trials = n_trials
51 self._timeout = timeout
52 self._last_elapsed_seconds = 0.0
53
54 if self._is_valid:
55 self._init_valid()
56
57 # TODO(hvy): Remove initialization indirection via this method when the progress bar is no
58 # longer experimental.
59 @experimental_func("1.2.0", name="Progress bar")
60 def _init_valid(self) -> None:
61 if self._n_trials is not None:
62 self._progress_bar = tqdm(total=self._n_trials)
63
64 elif self._timeout is not None:
65 total = tqdm.format_interval(self._timeout)
66 fmt = "{desc} {percentage:3.0f}%|{bar}| {elapsed}/" + total
67 self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)
68 else:
69 assert False
70
71 global _tqdm_handler
72
73 _tqdm_handler = _TqdmLoggingHandler()
74 _tqdm_handler.setLevel(logging.INFO)
75 _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())
76 optuna_logging.disable_default_handler()
77 optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)
78
79 def update(self, elapsed_seconds: float, study: "Study") -> None:
80 """Update the progress bars if ``is_valid`` is :obj:`True`.
81
82 Args:
83 elapsed_seconds:
84 The time past since :func:`~optuna.study.Study.optimize` started.
85 study:
86 The current study object.
87 """
88
89 if self._is_valid:
90 if not study._is_multi_objective():
91 # Not updating the progress bar when there are no complete trial.
92 try:
93 msg = (
94 f"Best trial: {study.best_trial.number}. "
95 f"Best value: {study.best_value:.6g}"
96 )
97
98 self._progress_bar.set_description(msg)
99 except ValueError:
100 pass
101
102 if self._n_trials is not None:
103 self._progress_bar.update(1)
104 if self._timeout is not None:
105 self._progress_bar.set_postfix_str(
106 "{:.02f}/{} seconds".format(elapsed_seconds, self._timeout)
107 )
108
109 elif self._timeout is not None:
110 time_diff = elapsed_seconds - self._last_elapsed_seconds
111 if elapsed_seconds > self._timeout:
112 # Clip elapsed time to avoid tqdm warnings.
113 time_diff -= elapsed_seconds - self._timeout
114
115 self._progress_bar.update(time_diff)
116 self._last_elapsed_seconds = elapsed_seconds
117
118 else:
119 assert False
120
121 def close(self) -> None:
122 """Close progress bars."""
123
124 if self._is_valid:
125 self._progress_bar.close()
126 assert _tqdm_handler is not None
127 optuna_logging._get_library_root_logger().removeHandler(_tqdm_handler)
128 optuna_logging.enable_default_handler()
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/optuna/progress_bar.py b/optuna/progress_bar.py
--- a/optuna/progress_bar.py
+++ b/optuna/progress_bar.py
@@ -6,7 +6,6 @@
from tqdm.auto import tqdm
from optuna import logging as optuna_logging
-from optuna._experimental import experimental_func
if TYPE_CHECKING:
@@ -52,29 +51,22 @@
self._last_elapsed_seconds = 0.0
if self._is_valid:
- self._init_valid()
-
- # TODO(hvy): Remove initialization indirection via this method when the progress bar is no
- # longer experimental.
- @experimental_func("1.2.0", name="Progress bar")
- def _init_valid(self) -> None:
- if self._n_trials is not None:
- self._progress_bar = tqdm(total=self._n_trials)
-
- elif self._timeout is not None:
- total = tqdm.format_interval(self._timeout)
- fmt = "{desc} {percentage:3.0f}%|{bar}| {elapsed}/" + total
- self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)
- else:
- assert False
-
- global _tqdm_handler
-
- _tqdm_handler = _TqdmLoggingHandler()
- _tqdm_handler.setLevel(logging.INFO)
- _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())
- optuna_logging.disable_default_handler()
- optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)
+ if self._n_trials is not None:
+ self._progress_bar = tqdm(total=self._n_trials)
+ elif self._timeout is not None:
+ total = tqdm.format_interval(self._timeout)
+ fmt = "{desc} {percentage:3.0f}%|{bar}| {elapsed}/" + total
+ self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)
+ else:
+ assert False
+
+ global _tqdm_handler
+
+ _tqdm_handler = _TqdmLoggingHandler()
+ _tqdm_handler.setLevel(logging.INFO)
+ _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())
+ optuna_logging.disable_default_handler()
+ optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)
def update(self, elapsed_seconds: float, study: "Study") -> None:
"""Update the progress bars if ``is_valid`` is :obj:`True`.
| {"golden_diff": "diff --git a/optuna/progress_bar.py b/optuna/progress_bar.py\n--- a/optuna/progress_bar.py\n+++ b/optuna/progress_bar.py\n@@ -6,7 +6,6 @@\n from tqdm.auto import tqdm\n \n from optuna import logging as optuna_logging\n-from optuna._experimental import experimental_func\n \n \n if TYPE_CHECKING:\n@@ -52,29 +51,22 @@\n self._last_elapsed_seconds = 0.0\n \n if self._is_valid:\n- self._init_valid()\n-\n- # TODO(hvy): Remove initialization indirection via this method when the progress bar is no\n- # longer experimental.\n- @experimental_func(\"1.2.0\", name=\"Progress bar\")\n- def _init_valid(self) -> None:\n- if self._n_trials is not None:\n- self._progress_bar = tqdm(total=self._n_trials)\n-\n- elif self._timeout is not None:\n- total = tqdm.format_interval(self._timeout)\n- fmt = \"{desc} {percentage:3.0f}%|{bar}| {elapsed}/\" + total\n- self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)\n- else:\n- assert False\n-\n- global _tqdm_handler\n-\n- _tqdm_handler = _TqdmLoggingHandler()\n- _tqdm_handler.setLevel(logging.INFO)\n- _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())\n- optuna_logging.disable_default_handler()\n- optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)\n+ if self._n_trials is not None:\n+ self._progress_bar = tqdm(total=self._n_trials)\n+ elif self._timeout is not None:\n+ total = tqdm.format_interval(self._timeout)\n+ fmt = \"{desc} {percentage:3.0f}%|{bar}| {elapsed}/\" + total\n+ self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)\n+ else:\n+ assert False\n+\n+ global _tqdm_handler\n+\n+ _tqdm_handler = _TqdmLoggingHandler()\n+ _tqdm_handler.setLevel(logging.INFO)\n+ _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())\n+ optuna_logging.disable_default_handler()\n+ optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)\n \n def update(self, elapsed_seconds: float, study: \"Study\") -> None:\n \"\"\"Update the progress bars if ``is_valid`` is :obj:`True`.\n", "issue": "Remove experimental label from `_ProgressBar`\n### Motivation\n\nSeveral issues related to `_ProgressBar` have been already addressed (ref: https://github.com/optuna/optuna/issues/2892, https://github.com/optuna/optuna/issues/2957, https://github.com/optuna/optuna/issues/2958). Now we can remove the experimental label from `_ProgressBar`.\n\n### Suggestion\n\nRemove the `@experimental_func` decorator from `_ProgressBar`. 
Also, `_init_valid` method can be removed as explained in [TODO comment](https://github.com/optuna/optuna/blob/806448420863606c113aeb2e33457acf022be066/optuna/progress_bar.py#L57C28-L58).\n\n### Additional context (optional)\n\n_No response_\n", "before_files": [{"content": "import logging\nfrom typing import Any\nfrom typing import Optional\nfrom typing import TYPE_CHECKING\n\nfrom tqdm.auto import tqdm\n\nfrom optuna import logging as optuna_logging\nfrom optuna._experimental import experimental_func\n\n\nif TYPE_CHECKING:\n from optuna.study import Study\n\n_tqdm_handler: Optional[\"_TqdmLoggingHandler\"] = None\n\n\n# Reference: https://gist.github.com/hvy/8b80c2cedf02b15c24f85d1fa17ebe02\nclass _TqdmLoggingHandler(logging.StreamHandler):\n def emit(self, record: Any) -> None:\n try:\n msg = self.format(record)\n tqdm.write(msg)\n self.flush()\n except (KeyboardInterrupt, SystemExit):\n raise\n except Exception:\n self.handleError(record)\n\n\nclass _ProgressBar:\n \"\"\"Progress Bar implementation for :func:`~optuna.study.Study.optimize` on the top of `tqdm`.\n\n Args:\n is_valid:\n Whether to show progress bars in :func:`~optuna.study.Study.optimize`.\n n_trials:\n The number of trials.\n timeout:\n Stop study after the given number of second(s).\n \"\"\"\n\n def __init__(\n self,\n is_valid: bool,\n n_trials: Optional[int] = None,\n timeout: Optional[float] = None,\n ) -> None:\n self._is_valid = is_valid and (n_trials or timeout) is not None\n self._n_trials = n_trials\n self._timeout = timeout\n self._last_elapsed_seconds = 0.0\n\n if self._is_valid:\n self._init_valid()\n\n # TODO(hvy): Remove initialization indirection via this method when the progress bar is no\n # longer experimental.\n @experimental_func(\"1.2.0\", name=\"Progress bar\")\n def _init_valid(self) -> None:\n if self._n_trials is not None:\n self._progress_bar = tqdm(total=self._n_trials)\n\n elif self._timeout is not None:\n total = tqdm.format_interval(self._timeout)\n fmt = \"{desc} {percentage:3.0f}%|{bar}| {elapsed}/\" + total\n self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)\n else:\n assert False\n\n global _tqdm_handler\n\n _tqdm_handler = _TqdmLoggingHandler()\n _tqdm_handler.setLevel(logging.INFO)\n _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())\n optuna_logging.disable_default_handler()\n optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)\n\n def update(self, elapsed_seconds: float, study: \"Study\") -> None:\n \"\"\"Update the progress bars if ``is_valid`` is :obj:`True`.\n\n Args:\n elapsed_seconds:\n The time past since :func:`~optuna.study.Study.optimize` started.\n study:\n The current study object.\n \"\"\"\n\n if self._is_valid:\n if not study._is_multi_objective():\n # Not updating the progress bar when there are no complete trial.\n try:\n msg = (\n f\"Best trial: {study.best_trial.number}. 
\"\n f\"Best value: {study.best_value:.6g}\"\n )\n\n self._progress_bar.set_description(msg)\n except ValueError:\n pass\n\n if self._n_trials is not None:\n self._progress_bar.update(1)\n if self._timeout is not None:\n self._progress_bar.set_postfix_str(\n \"{:.02f}/{} seconds\".format(elapsed_seconds, self._timeout)\n )\n\n elif self._timeout is not None:\n time_diff = elapsed_seconds - self._last_elapsed_seconds\n if elapsed_seconds > self._timeout:\n # Clip elapsed time to avoid tqdm warnings.\n time_diff -= elapsed_seconds - self._timeout\n\n self._progress_bar.update(time_diff)\n self._last_elapsed_seconds = elapsed_seconds\n\n else:\n assert False\n\n def close(self) -> None:\n \"\"\"Close progress bars.\"\"\"\n\n if self._is_valid:\n self._progress_bar.close()\n assert _tqdm_handler is not None\n optuna_logging._get_library_root_logger().removeHandler(_tqdm_handler)\n optuna_logging.enable_default_handler()\n", "path": "optuna/progress_bar.py"}], "after_files": [{"content": "import logging\nfrom typing import Any\nfrom typing import Optional\nfrom typing import TYPE_CHECKING\n\nfrom tqdm.auto import tqdm\n\nfrom optuna import logging as optuna_logging\n\n\nif TYPE_CHECKING:\n from optuna.study import Study\n\n_tqdm_handler: Optional[\"_TqdmLoggingHandler\"] = None\n\n\n# Reference: https://gist.github.com/hvy/8b80c2cedf02b15c24f85d1fa17ebe02\nclass _TqdmLoggingHandler(logging.StreamHandler):\n def emit(self, record: Any) -> None:\n try:\n msg = self.format(record)\n tqdm.write(msg)\n self.flush()\n except (KeyboardInterrupt, SystemExit):\n raise\n except Exception:\n self.handleError(record)\n\n\nclass _ProgressBar:\n \"\"\"Progress Bar implementation for :func:`~optuna.study.Study.optimize` on the top of `tqdm`.\n\n Args:\n is_valid:\n Whether to show progress bars in :func:`~optuna.study.Study.optimize`.\n n_trials:\n The number of trials.\n timeout:\n Stop study after the given number of second(s).\n \"\"\"\n\n def __init__(\n self,\n is_valid: bool,\n n_trials: Optional[int] = None,\n timeout: Optional[float] = None,\n ) -> None:\n self._is_valid = is_valid and (n_trials or timeout) is not None\n self._n_trials = n_trials\n self._timeout = timeout\n self._last_elapsed_seconds = 0.0\n\n if self._is_valid:\n if self._n_trials is not None:\n self._progress_bar = tqdm(total=self._n_trials)\n elif self._timeout is not None:\n total = tqdm.format_interval(self._timeout)\n fmt = \"{desc} {percentage:3.0f}%|{bar}| {elapsed}/\" + total\n self._progress_bar = tqdm(total=self._timeout, bar_format=fmt)\n else:\n assert False\n\n global _tqdm_handler\n\n _tqdm_handler = _TqdmLoggingHandler()\n _tqdm_handler.setLevel(logging.INFO)\n _tqdm_handler.setFormatter(optuna_logging.create_default_formatter())\n optuna_logging.disable_default_handler()\n optuna_logging._get_library_root_logger().addHandler(_tqdm_handler)\n\n def update(self, elapsed_seconds: float, study: \"Study\") -> None:\n \"\"\"Update the progress bars if ``is_valid`` is :obj:`True`.\n\n Args:\n elapsed_seconds:\n The time past since :func:`~optuna.study.Study.optimize` started.\n study:\n The current study object.\n \"\"\"\n\n if self._is_valid:\n if not study._is_multi_objective():\n # Not updating the progress bar when there are no complete trial.\n try:\n msg = (\n f\"Best trial: {study.best_trial.number}. 
\"\n f\"Best value: {study.best_value:.6g}\"\n )\n\n self._progress_bar.set_description(msg)\n except ValueError:\n pass\n\n if self._n_trials is not None:\n self._progress_bar.update(1)\n if self._timeout is not None:\n self._progress_bar.set_postfix_str(\n \"{:.02f}/{} seconds\".format(elapsed_seconds, self._timeout)\n )\n\n elif self._timeout is not None:\n time_diff = elapsed_seconds - self._last_elapsed_seconds\n if elapsed_seconds > self._timeout:\n # Clip elapsed time to avoid tqdm warnings.\n time_diff -= elapsed_seconds - self._timeout\n\n self._progress_bar.update(time_diff)\n self._last_elapsed_seconds = elapsed_seconds\n\n else:\n assert False\n\n def close(self) -> None:\n \"\"\"Close progress bars.\"\"\"\n\n if self._is_valid:\n self._progress_bar.close()\n assert _tqdm_handler is not None\n optuna_logging._get_library_root_logger().removeHandler(_tqdm_handler)\n optuna_logging.enable_default_handler()\n", "path": "optuna/progress_bar.py"}]} | 1,676 | 570 |
gh_patches_debug_21275 | rasdani/github-patches | git_diff | goauthentik__authentik-5657 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SCIM provider automatic sync not triggering since 2023.5.0
**Describe the bug**
SCIM provider automatic sync is not triggering (hourly & on user/group change).
I just upgraded to 2023.5.0 yesterday and (I think) the sync has been broken since the upgrade. I was previously using 2023.3.1 and 2023.4.1 with the SCIM provider to provision AWS SSO (IAM Identity Center), and those triggers worked (but without the PATCH support added in 2023.5.0, which was the main reason for this upgrade).
**To Reproduce**
Configure a SCIM provider and wait for the hourly full sync, or try to add/remove a member in a group or create a new user.
**Expected behavior**
From [documentation](https://goauthentik.io/docs/providers/scim/#syncing):
```
Data is synchronized in multiple ways:
When a user/group is created/modified/deleted, that action is sent to all SCIM providers
Periodically (once an hour), all SCIM providers are fully synchronized
```
**Screenshots**
No screenshots for this case.
**Logs**
There are no sync events in the logs. Logging is configured at trace level.
The only SCIM events I see are:
```
2023-05-17 09:24:58 | {"event": "Task started", "level": "info", "logger": "authentik.root.celery", "pid": 5476, "task_id": "d2f7357b-caf3-40b9-8750-93134c36badf", "task_name": "scim_signal_direct", "timestamp": "2023-05-17T12:24:58.174683"}
2023-05-17 09:24:58 | {"event": "Task finished", "level": "info", "logger": "authentik.root.celery", "pid": 5476, "state": "SUCCESS", "task_id": "d2f7357b-caf3-40b9-8750-93134c36badf", "task_name": "scim_signal_direct", "timestamp": "2023-05-17T12:24:58.210445"}
```
**Version and Deployment (please complete the following information):**
- authentik version:2023.5.0
- Deployment: Helm chart
**Additional context**
If I run the sync manually it works, but the full sync only adds/replaces objects; it doesn't remove users from a group, since I think it's only incremental and the removal of a member should happen when the group is modified.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/providers/scim/tasks.py`
Content:
```
1 """SCIM Provider tasks"""
2 from typing import Any, Optional
3
4 from celery.result import allow_join_result
5 from django.core.paginator import Paginator
6 from django.db.models import Model, QuerySet
7 from django.utils.text import slugify
8 from django.utils.translation import gettext_lazy as _
9 from pydanticscim.responses import PatchOp
10 from structlog.stdlib import get_logger
11
12 from authentik.core.models import Group, User
13 from authentik.events.monitored_tasks import MonitoredTask, TaskResult, TaskResultStatus
14 from authentik.lib.utils.reflection import path_to_class
15 from authentik.providers.scim.clients import PAGE_SIZE
16 from authentik.providers.scim.clients.base import SCIMClient
17 from authentik.providers.scim.clients.exceptions import SCIMRequestException, StopSync
18 from authentik.providers.scim.clients.group import SCIMGroupClient
19 from authentik.providers.scim.clients.user import SCIMUserClient
20 from authentik.providers.scim.models import SCIMProvider
21 from authentik.root.celery import CELERY_APP
22
23 LOGGER = get_logger(__name__)
24
25
26 def client_for_model(provider: SCIMProvider, model: Model) -> SCIMClient:
27 """Get SCIM client for model"""
28 if isinstance(model, User):
29 return SCIMUserClient(provider)
30 if isinstance(model, Group):
31 return SCIMGroupClient(provider)
32 raise ValueError(f"Invalid model {model}")
33
34
35 @CELERY_APP.task()
36 def scim_sync_all():
37 """Run sync for all providers"""
38 for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):
39 scim_sync.delay(provider.pk)
40
41
42 @CELERY_APP.task(bind=True, base=MonitoredTask)
43 def scim_sync(self: MonitoredTask, provider_pk: int) -> None:
44 """Run SCIM full sync for provider"""
45 provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()
46 if not provider:
47 return
48 self.set_uid(slugify(provider.name))
49 result = TaskResult(TaskResultStatus.SUCCESSFUL, [])
50 result.messages.append(_("Starting full SCIM sync"))
51 LOGGER.debug("Starting SCIM sync")
52 users_paginator = Paginator(provider.get_user_qs(), PAGE_SIZE)
53 groups_paginator = Paginator(provider.get_group_qs(), PAGE_SIZE)
54 with allow_join_result():
55 try:
56 for page in users_paginator.page_range:
57 result.messages.append(_("Syncing page %(page)d of users" % {"page": page}))
58 for msg in scim_sync_users.delay(page, provider_pk).get():
59 result.messages.append(msg)
60 for page in groups_paginator.page_range:
61 result.messages.append(_("Syncing page %(page)d of groups" % {"page": page}))
62 for msg in scim_sync_group.delay(page, provider_pk).get():
63 result.messages.append(msg)
64 except StopSync as exc:
65 self.set_status(TaskResult(TaskResultStatus.ERROR).with_error(exc))
66 return
67 self.set_status(result)
68
69
70 @CELERY_APP.task()
71 def scim_sync_users(page: int, provider_pk: int):
72 """Sync single or multiple users to SCIM"""
73 messages = []
74 provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()
75 if not provider:
76 return messages
77 try:
78 client = SCIMUserClient(provider)
79 except SCIMRequestException:
80 return messages
81 paginator = Paginator(provider.get_user_qs(), PAGE_SIZE)
82 LOGGER.debug("starting user sync for page", page=page)
83 for user in paginator.page(page).object_list:
84 try:
85 client.write(user)
86 except SCIMRequestException as exc:
87 LOGGER.warning("failed to sync user", exc=exc, user=user)
88 messages.append(
89 _(
90 "Failed to sync user %(user_name)s due to remote error: %(error)s"
91 % {
92 "user_name": user.username,
93 "error": exc.detail(),
94 }
95 )
96 )
97 except StopSync as exc:
98 LOGGER.warning("Stopping sync", exc=exc)
99 messages.append(
100 _(
101 "Stopping sync due to error: %(error)s"
102 % {
103 "error": exc.detail(),
104 }
105 )
106 )
107 break
108 return messages
109
110
111 @CELERY_APP.task()
112 def scim_sync_group(page: int, provider_pk: int):
113 """Sync single or multiple groups to SCIM"""
114 messages = []
115 provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()
116 if not provider:
117 return messages
118 try:
119 client = SCIMGroupClient(provider)
120 except SCIMRequestException:
121 return messages
122 paginator = Paginator(provider.get_group_qs(), PAGE_SIZE)
123 LOGGER.debug("starting group sync for page", page=page)
124 for group in paginator.page(page).object_list:
125 try:
126 client.write(group)
127 except SCIMRequestException as exc:
128 LOGGER.warning("failed to sync group", exc=exc, group=group)
129 messages.append(
130 _(
131 "Failed to sync group %(group_name)s due to remote error: %(error)s"
132 % {
133 "group_name": group.name,
134 "error": exc.detail(),
135 }
136 )
137 )
138 except StopSync as exc:
139 LOGGER.warning("Stopping sync", exc=exc)
140 messages.append(
141 _(
142 "Stopping sync due to error: %(error)s"
143 % {
144 "error": exc.detail(),
145 }
146 )
147 )
148 break
149 return messages
150
151
152 @CELERY_APP.task()
153 def scim_signal_direct(model: str, pk: Any, raw_op: str):
154 """Handler for post_save and pre_delete signal"""
155 model_class: type[Model] = path_to_class(model)
156 instance = model_class.objects.filter(pk=pk).first()
157 if not instance:
158 return
159 operation = PatchOp(raw_op)
160 for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):
161 client = client_for_model(provider, instance)
162 # Check if the object is allowed within the provider's restrictions
163 queryset: Optional[QuerySet] = None
164 if isinstance(instance, User):
165 queryset = provider.get_user_qs()
166 if isinstance(instance, Group):
167 queryset = provider.get_group_qs()
168 if not queryset:
169 continue
170
171 # The queryset we get from the provider must include the instance we've got given
172 # otherwise ignore this provider
173 if not queryset.filter(pk=instance.pk).exists():
174 continue
175
176 try:
177 if operation == PatchOp.add:
178 client.write(instance)
179 if operation == PatchOp.remove:
180 client.delete(instance)
181 except (StopSync, SCIMRequestException) as exc:
182 LOGGER.warning(exc)
183
184
185 @CELERY_APP.task()
186 def scim_signal_m2m(group_pk: str, action: str, pk_set: list[int]):
187 """Update m2m (group membership)"""
188 group = Group.objects.filter(pk=group_pk).first()
189 if not group:
190 return
191 for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):
192 # Check if the object is allowed within the provider's restrictions
193 queryset: QuerySet = provider.get_group_qs()
194 # The queryset we get from the provider must include the instance we've got given
195 # otherwise ignore this provider
196 if not queryset.filter(pk=group_pk).exists():
197 continue
198
199 client = SCIMGroupClient(provider)
200 try:
201 operation = None
202 if action == "post_add":
203 operation = PatchOp.add
204 if action == "post_remove":
205 operation = PatchOp.remove
206 client.update_group(group, operation, pk_set)
207 except (StopSync, SCIMRequestException) as exc:
208 LOGGER.warning(exc)
209
```
Path: `authentik/providers/scim/api/providers.py`
Content:
```
1 """SCIM Provider API Views"""
2 from django.utils.text import slugify
3 from drf_spectacular.utils import OpenApiResponse, extend_schema
4 from rest_framework.decorators import action
5 from rest_framework.request import Request
6 from rest_framework.response import Response
7 from rest_framework.viewsets import ModelViewSet
8
9 from authentik.admin.api.tasks import TaskSerializer
10 from authentik.core.api.providers import ProviderSerializer
11 from authentik.core.api.used_by import UsedByMixin
12 from authentik.events.monitored_tasks import TaskInfo
13 from authentik.providers.scim.models import SCIMProvider
14
15
16 class SCIMProviderSerializer(ProviderSerializer):
17 """SCIMProvider Serializer"""
18
19 class Meta:
20 model = SCIMProvider
21 fields = [
22 "pk",
23 "name",
24 "property_mappings",
25 "property_mappings_group",
26 "component",
27 "assigned_application_slug",
28 "assigned_application_name",
29 "verbose_name",
30 "verbose_name_plural",
31 "meta_model_name",
32 "url",
33 "token",
34 "exclude_users_service_account",
35 "filter_group",
36 ]
37 extra_kwargs = {}
38
39
40 class SCIMProviderViewSet(UsedByMixin, ModelViewSet):
41 """SCIMProvider Viewset"""
42
43 queryset = SCIMProvider.objects.all()
44 serializer_class = SCIMProviderSerializer
45 filterset_fields = ["name", "exclude_users_service_account", "url", "filter_group"]
46 search_fields = ["name", "url"]
47 ordering = ["name", "url"]
48
49 @extend_schema(
50 responses={
51 200: TaskSerializer(),
52 404: OpenApiResponse(description="Task not found"),
53 }
54 )
55 @action(methods=["GET"], detail=True, pagination_class=None, filter_backends=[])
56 def sync_status(self, request: Request, pk: int) -> Response:
57 """Get provider's sync status"""
58 provider = self.get_object()
59 task = TaskInfo.by_name(f"scim_sync:{slugify(provider.name)}")
60 if not task:
61 return Response(status=404)
62 return Response(TaskSerializer(task).data)
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/providers/scim/api/providers.py b/authentik/providers/scim/api/providers.py
--- a/authentik/providers/scim/api/providers.py
+++ b/authentik/providers/scim/api/providers.py
@@ -24,8 +24,8 @@
"property_mappings",
"property_mappings_group",
"component",
- "assigned_application_slug",
- "assigned_application_name",
+ "assigned_backchannel_application_slug",
+ "assigned_backchannel_application_name",
"verbose_name",
"verbose_name_plural",
"meta_model_name",
diff --git a/authentik/providers/scim/tasks.py b/authentik/providers/scim/tasks.py
--- a/authentik/providers/scim/tasks.py
+++ b/authentik/providers/scim/tasks.py
@@ -42,7 +42,9 @@
@CELERY_APP.task(bind=True, base=MonitoredTask)
def scim_sync(self: MonitoredTask, provider_pk: int) -> None:
"""Run SCIM full sync for provider"""
- provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()
+ provider: SCIMProvider = SCIMProvider.objects.filter(
+ pk=provider_pk, backchannel_application__isnull=False
+ ).first()
if not provider:
return
self.set_uid(slugify(provider.name))
| {"golden_diff": "diff --git a/authentik/providers/scim/api/providers.py b/authentik/providers/scim/api/providers.py\n--- a/authentik/providers/scim/api/providers.py\n+++ b/authentik/providers/scim/api/providers.py\n@@ -24,8 +24,8 @@\n \"property_mappings\",\n \"property_mappings_group\",\n \"component\",\n- \"assigned_application_slug\",\n- \"assigned_application_name\",\n+ \"assigned_backchannel_application_slug\",\n+ \"assigned_backchannel_application_name\",\n \"verbose_name\",\n \"verbose_name_plural\",\n \"meta_model_name\",\ndiff --git a/authentik/providers/scim/tasks.py b/authentik/providers/scim/tasks.py\n--- a/authentik/providers/scim/tasks.py\n+++ b/authentik/providers/scim/tasks.py\n@@ -42,7 +42,9 @@\n @CELERY_APP.task(bind=True, base=MonitoredTask)\n def scim_sync(self: MonitoredTask, provider_pk: int) -> None:\n \"\"\"Run SCIM full sync for provider\"\"\"\n- provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()\n+ provider: SCIMProvider = SCIMProvider.objects.filter(\n+ pk=provider_pk, backchannel_application__isnull=False\n+ ).first()\n if not provider:\n return\n self.set_uid(slugify(provider.name))\n", "issue": "SCIM provider automatic sync not triggering since 2023.5.0\n**Describe the bug**\r\nSCIM provider automatic sync is not triggering (hourly & on user/group change).\r\nJust upgraded to 2023.5.0 yesterday and (I think) that sync is broken since upgrade. I was using 2023.3.1 and 2023.4.1 previously with the SCIM provider to provision AWS SSO (IAM Identity Center) and those triggers worked (but without PATCH added in 2023.5.0 so that was the main reason for this upgrade).\r\n\r\n**To Reproduce**\r\nConfigure SCIM provider and wait for the full sync hourly, try to add/remove a member in a group or create a new user.\r\n\r\n**Expected behavior**\r\nFrom [documentation](https://goauthentik.io/docs/providers/scim/#syncing):\r\n\r\n```\r\nData is synchronized in multiple ways:\r\n\r\nWhen a user/group is created/modified/deleted, that action is sent to all SCIM providers\r\nPeriodically (once an hour), all SCIM providers are fully synchronized\r\n```\r\n\r\n**Screenshots**\r\nNo screenshots for this case.\r\n\r\n**Logs**\r\nThere's no sync events events in logs. 
Logs are configured in trace level.\r\nThe only SCIM events I see are:\r\n\r\n```\r\n2023-05-17 09:24:58 | {\"event\": \"Task started\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 5476, \"task_id\": \"d2f7357b-caf3-40b9-8750-93134c36badf\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-05-17T12:24:58.174683\"}\r\n2023-05-17 09:24:58 | {\"event\": \"Task finished\", \"level\": \"info\", \"logger\": \"authentik.root.celery\", \"pid\": 5476, \"state\": \"SUCCESS\", \"task_id\": \"d2f7357b-caf3-40b9-8750-93134c36badf\", \"task_name\": \"scim_signal_direct\", \"timestamp\": \"2023-05-17T12:24:58.210445\"}\r\n```\r\n\r\n**Version and Deployment (please complete the following information):**\r\n\r\n- authentik version:2023.5.0\r\n- Deployment: Helm chart\r\n\r\n**Additional context**\r\nIf I run the sync manually it works, but the full sync adds/replace objects it doesn't remove users from a group since I think it's only incremental and the removal of a member should be done whent the group is modified.\n", "before_files": [{"content": "\"\"\"SCIM Provider tasks\"\"\"\nfrom typing import Any, Optional\n\nfrom celery.result import allow_join_result\nfrom django.core.paginator import Paginator\nfrom django.db.models import Model, QuerySet\nfrom django.utils.text import slugify\nfrom django.utils.translation import gettext_lazy as _\nfrom pydanticscim.responses import PatchOp\nfrom structlog.stdlib import get_logger\n\nfrom authentik.core.models import Group, User\nfrom authentik.events.monitored_tasks import MonitoredTask, TaskResult, TaskResultStatus\nfrom authentik.lib.utils.reflection import path_to_class\nfrom authentik.providers.scim.clients import PAGE_SIZE\nfrom authentik.providers.scim.clients.base import SCIMClient\nfrom authentik.providers.scim.clients.exceptions import SCIMRequestException, StopSync\nfrom authentik.providers.scim.clients.group import SCIMGroupClient\nfrom authentik.providers.scim.clients.user import SCIMUserClient\nfrom authentik.providers.scim.models import SCIMProvider\nfrom authentik.root.celery import CELERY_APP\n\nLOGGER = get_logger(__name__)\n\n\ndef client_for_model(provider: SCIMProvider, model: Model) -> SCIMClient:\n \"\"\"Get SCIM client for model\"\"\"\n if isinstance(model, User):\n return SCIMUserClient(provider)\n if isinstance(model, Group):\n return SCIMGroupClient(provider)\n raise ValueError(f\"Invalid model {model}\")\n\n\n@CELERY_APP.task()\ndef scim_sync_all():\n \"\"\"Run sync for all providers\"\"\"\n for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):\n scim_sync.delay(provider.pk)\n\n\n@CELERY_APP.task(bind=True, base=MonitoredTask)\ndef scim_sync(self: MonitoredTask, provider_pk: int) -> None:\n \"\"\"Run SCIM full sync for provider\"\"\"\n provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()\n if not provider:\n return\n self.set_uid(slugify(provider.name))\n result = TaskResult(TaskResultStatus.SUCCESSFUL, [])\n result.messages.append(_(\"Starting full SCIM sync\"))\n LOGGER.debug(\"Starting SCIM sync\")\n users_paginator = Paginator(provider.get_user_qs(), PAGE_SIZE)\n groups_paginator = Paginator(provider.get_group_qs(), PAGE_SIZE)\n with allow_join_result():\n try:\n for page in users_paginator.page_range:\n result.messages.append(_(\"Syncing page %(page)d of users\" % {\"page\": page}))\n for msg in scim_sync_users.delay(page, provider_pk).get():\n result.messages.append(msg)\n for page in groups_paginator.page_range:\n 
result.messages.append(_(\"Syncing page %(page)d of groups\" % {\"page\": page}))\n for msg in scim_sync_group.delay(page, provider_pk).get():\n result.messages.append(msg)\n except StopSync as exc:\n self.set_status(TaskResult(TaskResultStatus.ERROR).with_error(exc))\n return\n self.set_status(result)\n\n\n@CELERY_APP.task()\ndef scim_sync_users(page: int, provider_pk: int):\n \"\"\"Sync single or multiple users to SCIM\"\"\"\n messages = []\n provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()\n if not provider:\n return messages\n try:\n client = SCIMUserClient(provider)\n except SCIMRequestException:\n return messages\n paginator = Paginator(provider.get_user_qs(), PAGE_SIZE)\n LOGGER.debug(\"starting user sync for page\", page=page)\n for user in paginator.page(page).object_list:\n try:\n client.write(user)\n except SCIMRequestException as exc:\n LOGGER.warning(\"failed to sync user\", exc=exc, user=user)\n messages.append(\n _(\n \"Failed to sync user %(user_name)s due to remote error: %(error)s\"\n % {\n \"user_name\": user.username,\n \"error\": exc.detail(),\n }\n )\n )\n except StopSync as exc:\n LOGGER.warning(\"Stopping sync\", exc=exc)\n messages.append(\n _(\n \"Stopping sync due to error: %(error)s\"\n % {\n \"error\": exc.detail(),\n }\n )\n )\n break\n return messages\n\n\n@CELERY_APP.task()\ndef scim_sync_group(page: int, provider_pk: int):\n \"\"\"Sync single or multiple groups to SCIM\"\"\"\n messages = []\n provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()\n if not provider:\n return messages\n try:\n client = SCIMGroupClient(provider)\n except SCIMRequestException:\n return messages\n paginator = Paginator(provider.get_group_qs(), PAGE_SIZE)\n LOGGER.debug(\"starting group sync for page\", page=page)\n for group in paginator.page(page).object_list:\n try:\n client.write(group)\n except SCIMRequestException as exc:\n LOGGER.warning(\"failed to sync group\", exc=exc, group=group)\n messages.append(\n _(\n \"Failed to sync group %(group_name)s due to remote error: %(error)s\"\n % {\n \"group_name\": group.name,\n \"error\": exc.detail(),\n }\n )\n )\n except StopSync as exc:\n LOGGER.warning(\"Stopping sync\", exc=exc)\n messages.append(\n _(\n \"Stopping sync due to error: %(error)s\"\n % {\n \"error\": exc.detail(),\n }\n )\n )\n break\n return messages\n\n\n@CELERY_APP.task()\ndef scim_signal_direct(model: str, pk: Any, raw_op: str):\n \"\"\"Handler for post_save and pre_delete signal\"\"\"\n model_class: type[Model] = path_to_class(model)\n instance = model_class.objects.filter(pk=pk).first()\n if not instance:\n return\n operation = PatchOp(raw_op)\n for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):\n client = client_for_model(provider, instance)\n # Check if the object is allowed within the provider's restrictions\n queryset: Optional[QuerySet] = None\n if isinstance(instance, User):\n queryset = provider.get_user_qs()\n if isinstance(instance, Group):\n queryset = provider.get_group_qs()\n if not queryset:\n continue\n\n # The queryset we get from the provider must include the instance we've got given\n # otherwise ignore this provider\n if not queryset.filter(pk=instance.pk).exists():\n continue\n\n try:\n if operation == PatchOp.add:\n client.write(instance)\n if operation == PatchOp.remove:\n client.delete(instance)\n except (StopSync, SCIMRequestException) as exc:\n LOGGER.warning(exc)\n\n\n@CELERY_APP.task()\ndef scim_signal_m2m(group_pk: str, action: str, pk_set: 
list[int]):\n \"\"\"Update m2m (group membership)\"\"\"\n group = Group.objects.filter(pk=group_pk).first()\n if not group:\n return\n for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):\n # Check if the object is allowed within the provider's restrictions\n queryset: QuerySet = provider.get_group_qs()\n # The queryset we get from the provider must include the instance we've got given\n # otherwise ignore this provider\n if not queryset.filter(pk=group_pk).exists():\n continue\n\n client = SCIMGroupClient(provider)\n try:\n operation = None\n if action == \"post_add\":\n operation = PatchOp.add\n if action == \"post_remove\":\n operation = PatchOp.remove\n client.update_group(group, operation, pk_set)\n except (StopSync, SCIMRequestException) as exc:\n LOGGER.warning(exc)\n", "path": "authentik/providers/scim/tasks.py"}, {"content": "\"\"\"SCIM Provider API Views\"\"\"\nfrom django.utils.text import slugify\nfrom drf_spectacular.utils import OpenApiResponse, extend_schema\nfrom rest_framework.decorators import action\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\nfrom rest_framework.viewsets import ModelViewSet\n\nfrom authentik.admin.api.tasks import TaskSerializer\nfrom authentik.core.api.providers import ProviderSerializer\nfrom authentik.core.api.used_by import UsedByMixin\nfrom authentik.events.monitored_tasks import TaskInfo\nfrom authentik.providers.scim.models import SCIMProvider\n\n\nclass SCIMProviderSerializer(ProviderSerializer):\n \"\"\"SCIMProvider Serializer\"\"\"\n\n class Meta:\n model = SCIMProvider\n fields = [\n \"pk\",\n \"name\",\n \"property_mappings\",\n \"property_mappings_group\",\n \"component\",\n \"assigned_application_slug\",\n \"assigned_application_name\",\n \"verbose_name\",\n \"verbose_name_plural\",\n \"meta_model_name\",\n \"url\",\n \"token\",\n \"exclude_users_service_account\",\n \"filter_group\",\n ]\n extra_kwargs = {}\n\n\nclass SCIMProviderViewSet(UsedByMixin, ModelViewSet):\n \"\"\"SCIMProvider Viewset\"\"\"\n\n queryset = SCIMProvider.objects.all()\n serializer_class = SCIMProviderSerializer\n filterset_fields = [\"name\", \"exclude_users_service_account\", \"url\", \"filter_group\"]\n search_fields = [\"name\", \"url\"]\n ordering = [\"name\", \"url\"]\n\n @extend_schema(\n responses={\n 200: TaskSerializer(),\n 404: OpenApiResponse(description=\"Task not found\"),\n }\n )\n @action(methods=[\"GET\"], detail=True, pagination_class=None, filter_backends=[])\n def sync_status(self, request: Request, pk: int) -> Response:\n \"\"\"Get provider's sync status\"\"\"\n provider = self.get_object()\n task = TaskInfo.by_name(f\"scim_sync:{slugify(provider.name)}\")\n if not task:\n return Response(status=404)\n return Response(TaskSerializer(task).data)\n", "path": "authentik/providers/scim/api/providers.py"}], "after_files": [{"content": "\"\"\"SCIM Provider tasks\"\"\"\nfrom typing import Any, Optional\n\nfrom celery.result import allow_join_result\nfrom django.core.paginator import Paginator\nfrom django.db.models import Model, QuerySet\nfrom django.utils.text import slugify\nfrom django.utils.translation import gettext_lazy as _\nfrom pydanticscim.responses import PatchOp\nfrom structlog.stdlib import get_logger\n\nfrom authentik.core.models import Group, User\nfrom authentik.events.monitored_tasks import MonitoredTask, TaskResult, TaskResultStatus\nfrom authentik.lib.utils.reflection import path_to_class\nfrom authentik.providers.scim.clients import PAGE_SIZE\nfrom 
authentik.providers.scim.clients.base import SCIMClient\nfrom authentik.providers.scim.clients.exceptions import SCIMRequestException, StopSync\nfrom authentik.providers.scim.clients.group import SCIMGroupClient\nfrom authentik.providers.scim.clients.user import SCIMUserClient\nfrom authentik.providers.scim.models import SCIMProvider\nfrom authentik.root.celery import CELERY_APP\n\nLOGGER = get_logger(__name__)\n\n\ndef client_for_model(provider: SCIMProvider, model: Model) -> SCIMClient:\n \"\"\"Get SCIM client for model\"\"\"\n if isinstance(model, User):\n return SCIMUserClient(provider)\n if isinstance(model, Group):\n return SCIMGroupClient(provider)\n raise ValueError(f\"Invalid model {model}\")\n\n\n@CELERY_APP.task()\ndef scim_sync_all():\n \"\"\"Run sync for all providers\"\"\"\n for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):\n scim_sync.delay(provider.pk)\n\n\n@CELERY_APP.task(bind=True, base=MonitoredTask)\ndef scim_sync(self: MonitoredTask, provider_pk: int) -> None:\n \"\"\"Run SCIM full sync for provider\"\"\"\n provider: SCIMProvider = SCIMProvider.objects.filter(\n pk=provider_pk, backchannel_application__isnull=False\n ).first()\n if not provider:\n return\n self.set_uid(slugify(provider.name))\n result = TaskResult(TaskResultStatus.SUCCESSFUL, [])\n result.messages.append(_(\"Starting full SCIM sync\"))\n LOGGER.debug(\"Starting SCIM sync\")\n users_paginator = Paginator(provider.get_user_qs(), PAGE_SIZE)\n groups_paginator = Paginator(provider.get_group_qs(), PAGE_SIZE)\n with allow_join_result():\n try:\n for page in users_paginator.page_range:\n result.messages.append(_(\"Syncing page %(page)d of users\" % {\"page\": page}))\n for msg in scim_sync_users.delay(page, provider_pk).get():\n result.messages.append(msg)\n for page in groups_paginator.page_range:\n result.messages.append(_(\"Syncing page %(page)d of groups\" % {\"page\": page}))\n for msg in scim_sync_group.delay(page, provider_pk).get():\n result.messages.append(msg)\n except StopSync as exc:\n self.set_status(TaskResult(TaskResultStatus.ERROR).with_error(exc))\n return\n self.set_status(result)\n\n\n@CELERY_APP.task()\ndef scim_sync_users(page: int, provider_pk: int):\n \"\"\"Sync single or multiple users to SCIM\"\"\"\n messages = []\n provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()\n if not provider:\n return messages\n try:\n client = SCIMUserClient(provider)\n except SCIMRequestException:\n return messages\n paginator = Paginator(provider.get_user_qs(), PAGE_SIZE)\n LOGGER.debug(\"starting user sync for page\", page=page)\n for user in paginator.page(page).object_list:\n try:\n client.write(user)\n except SCIMRequestException as exc:\n LOGGER.warning(\"failed to sync user\", exc=exc, user=user)\n messages.append(\n _(\n \"Failed to sync user %(user_name)s due to remote error: %(error)s\"\n % {\n \"user_name\": user.username,\n \"error\": exc.detail(),\n }\n )\n )\n except StopSync as exc:\n LOGGER.warning(\"Stopping sync\", exc=exc)\n messages.append(\n _(\n \"Stopping sync due to error: %(error)s\"\n % {\n \"error\": exc.detail(),\n }\n )\n )\n break\n return messages\n\n\n@CELERY_APP.task()\ndef scim_sync_group(page: int, provider_pk: int):\n \"\"\"Sync single or multiple groups to SCIM\"\"\"\n messages = []\n provider: SCIMProvider = SCIMProvider.objects.filter(pk=provider_pk).first()\n if not provider:\n return messages\n try:\n client = SCIMGroupClient(provider)\n except SCIMRequestException:\n return messages\n paginator = 
Paginator(provider.get_group_qs(), PAGE_SIZE)\n LOGGER.debug(\"starting group sync for page\", page=page)\n for group in paginator.page(page).object_list:\n try:\n client.write(group)\n except SCIMRequestException as exc:\n LOGGER.warning(\"failed to sync group\", exc=exc, group=group)\n messages.append(\n _(\n \"Failed to sync group %(group_name)s due to remote error: %(error)s\"\n % {\n \"group_name\": group.name,\n \"error\": exc.detail(),\n }\n )\n )\n except StopSync as exc:\n LOGGER.warning(\"Stopping sync\", exc=exc)\n messages.append(\n _(\n \"Stopping sync due to error: %(error)s\"\n % {\n \"error\": exc.detail(),\n }\n )\n )\n break\n return messages\n\n\n@CELERY_APP.task()\ndef scim_signal_direct(model: str, pk: Any, raw_op: str):\n \"\"\"Handler for post_save and pre_delete signal\"\"\"\n model_class: type[Model] = path_to_class(model)\n instance = model_class.objects.filter(pk=pk).first()\n if not instance:\n return\n operation = PatchOp(raw_op)\n for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):\n client = client_for_model(provider, instance)\n # Check if the object is allowed within the provider's restrictions\n queryset: Optional[QuerySet] = None\n if isinstance(instance, User):\n queryset = provider.get_user_qs()\n if isinstance(instance, Group):\n queryset = provider.get_group_qs()\n if not queryset:\n continue\n\n # The queryset we get from the provider must include the instance we've got given\n # otherwise ignore this provider\n if not queryset.filter(pk=instance.pk).exists():\n continue\n\n try:\n if operation == PatchOp.add:\n client.write(instance)\n if operation == PatchOp.remove:\n client.delete(instance)\n except (StopSync, SCIMRequestException) as exc:\n LOGGER.warning(exc)\n\n\n@CELERY_APP.task()\ndef scim_signal_m2m(group_pk: str, action: str, pk_set: list[int]):\n \"\"\"Update m2m (group membership)\"\"\"\n group = Group.objects.filter(pk=group_pk).first()\n if not group:\n return\n for provider in SCIMProvider.objects.filter(backchannel_application__isnull=False):\n # Check if the object is allowed within the provider's restrictions\n queryset: QuerySet = provider.get_group_qs()\n # The queryset we get from the provider must include the instance we've got given\n # otherwise ignore this provider\n if not queryset.filter(pk=group_pk).exists():\n continue\n\n client = SCIMGroupClient(provider)\n try:\n operation = None\n if action == \"post_add\":\n operation = PatchOp.add\n if action == \"post_remove\":\n operation = PatchOp.remove\n client.update_group(group, operation, pk_set)\n except (StopSync, SCIMRequestException) as exc:\n LOGGER.warning(exc)\n", "path": "authentik/providers/scim/tasks.py"}, {"content": "\"\"\"SCIM Provider API Views\"\"\"\nfrom django.utils.text import slugify\nfrom drf_spectacular.utils import OpenApiResponse, extend_schema\nfrom rest_framework.decorators import action\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\nfrom rest_framework.viewsets import ModelViewSet\n\nfrom authentik.admin.api.tasks import TaskSerializer\nfrom authentik.core.api.providers import ProviderSerializer\nfrom authentik.core.api.used_by import UsedByMixin\nfrom authentik.events.monitored_tasks import TaskInfo\nfrom authentik.providers.scim.models import SCIMProvider\n\n\nclass SCIMProviderSerializer(ProviderSerializer):\n \"\"\"SCIMProvider Serializer\"\"\"\n\n class Meta:\n model = SCIMProvider\n fields = [\n \"pk\",\n \"name\",\n \"property_mappings\",\n 
\"property_mappings_group\",\n \"component\",\n \"assigned_backchannel_application_slug\",\n \"assigned_backchannel_application_name\",\n \"verbose_name\",\n \"verbose_name_plural\",\n \"meta_model_name\",\n \"url\",\n \"token\",\n \"exclude_users_service_account\",\n \"filter_group\",\n ]\n extra_kwargs = {}\n\n\nclass SCIMProviderViewSet(UsedByMixin, ModelViewSet):\n \"\"\"SCIMProvider Viewset\"\"\"\n\n queryset = SCIMProvider.objects.all()\n serializer_class = SCIMProviderSerializer\n filterset_fields = [\"name\", \"exclude_users_service_account\", \"url\", \"filter_group\"]\n search_fields = [\"name\", \"url\"]\n ordering = [\"name\", \"url\"]\n\n @extend_schema(\n responses={\n 200: TaskSerializer(),\n 404: OpenApiResponse(description=\"Task not found\"),\n }\n )\n @action(methods=[\"GET\"], detail=True, pagination_class=None, filter_backends=[])\n def sync_status(self, request: Request, pk: int) -> Response:\n \"\"\"Get provider's sync status\"\"\"\n provider = self.get_object()\n task = TaskInfo.by_name(f\"scim_sync:{slugify(provider.name)}\")\n if not task:\n return Response(status=404)\n return Response(TaskSerializer(task).data)\n", "path": "authentik/providers/scim/api/providers.py"}]} | 3,650 | 293 |
gh_patches_debug_19203 | rasdani/github-patches | git_diff | e-valuation__EvaP-1367 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inline datatables localization files to speed up first paint
Right now, datatables gets the localization file in the form of a URL (see [datatables.html](https://github.com/fsr-itse/EvaP/blob/master/evap/evaluation/templates/datatables.html)), that is, it starts an ajax request when it starts processing the tables and waits with further processing until the result has been received.
Both locales should be included in the compressed JavaScript or inlined into the HTML template so they are loaded earlier.
We do something similar for the [bootstrap datetimepicker](https://github.com/fsr-itse/EvaP/blob/028b6301e3eed446d93ae8675030d82c68d46886/evap/evaluation/templates/bootstrap_datetimepicker.html). Unfortunately, it's not that easy in this case, since the localization files are JSON files, not JavaScript files.
One approach would be to turn the JSON files into JS files and simply put the contained data structure into a variable with the name of the corresponding locale.
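For illustration, a small conversion script along those lines; the directory, file names, and variable naming scheme below are assumptions, not the project's actual paths:

```python
import json
from pathlib import Path

# Assumed location of the downloaded datatables locale JSON files (adjust to the real path).
LOCALE_DIR = Path("evap/static/js/datatables-locales")

for json_path in LOCALE_DIR.glob("*.json"):
    data = json.loads(json_path.read_text(encoding="utf-8"))
    # e.g. "german.json" -> "german.js" defining `var datatables_locale_german = {...};`
    js_source = "var datatables_locale_{} = {};\n".format(
        json_path.stem, json.dumps(data, ensure_ascii=False, indent=4)
    )
    json_path.with_suffix(".js").write_text(js_source, encoding="utf-8")
```

The generated JS files could then be added to the compressed bundle, and the template could hand the variable matching the active locale to datatables instead of a URL.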
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/evaluation/templatetags/evaluation_filters.py`
Content:
```
1 from collections import namedtuple
2
3 from django.forms import TypedChoiceField
4 from django.template import Library
5 from django.utils.translation import ugettext_lazy as _
6
7 from evap.evaluation.models import BASE_UNIPOLAR_CHOICES
8 from evap.rewards.tools import can_reward_points_be_used_by
9 from evap.student.forms import HeadingField
10
11
12 # the names displayed for contributors
13 STATE_NAMES = {
14 'new': _('new'),
15 'prepared': _('prepared'),
16 'editor_approved': _('editor approved'),
17 'approved': _('approved'),
18 'in_evaluation': _('in evaluation'),
19 'evaluated': _('evaluated'),
20 'reviewed': _('reviewed'),
21 'published': _('published'),
22 }
23
24
25 # the descriptions used in tooltips for contributors
26 STATE_DESCRIPTIONS = {
27 'new': _('The evaluation was newly created and will be prepared by the evaluation team.'),
28 'prepared': _('The evaluation was prepared by the evaluation team and is now available for editors.'),
29 'editor_approved': _('The evaluation was approved by an editor and will now be checked by the evaluation team.'),
30 'approved': _('All preparations are finished. The evaluation will begin once the defined start date is reached.'),
31 'in_evaluation': _('The evaluation is currently running until the defined end date is reached.'),
32 'evaluated': _('The evaluation has finished and will now be reviewed by the evaluation team.'),
33 'reviewed': _('The evaluation has finished and was reviewed by the evaluation team. You will receive an email when its results are published.'),
34 'published': _('The results for this evaluation have been published.'),
35 }
36
37
38 # values for approval states shown to staff
39 StateValues = namedtuple('StateValues', ('order', 'icon', 'filter', 'description'))
40 APPROVAL_STATES = {
41 'new': StateValues(0, 'fas fa-circle icon-yellow', 'fa-circle icon-yellow', _('In preparation')),
42 'prepared': StateValues(2, 'far fa-square icon-gray', 'fa-square icon-gray', _('Awaiting editor review')),
43 'editor_approved': StateValues(1, 'far fa-check-square icon-yellow', 'fa-check-square icon-yellow', _('Approved by editor, awaiting manager review')),
44 'approved': StateValues(3, 'far fa-check-square icon-green', 'fa-check-square icon-green', _('Approved by manager')),
45 }
46
47
48 register = Library()
49
50
51 @register.filter(name='zip')
52 def _zip(a, b):
53 return zip(a, b)
54
55
56 @register.filter()
57 def zip_choices(counts, choices):
58 return zip(counts, choices.names, choices.colors, choices.values)
59
60
61 @register.filter
62 def ordering_index(evaluation):
63 if evaluation.state in ['new', 'prepared', 'editor_approved', 'approved']:
64 return evaluation.days_until_evaluation
65 elif evaluation.state == "in_evaluation":
66 return 100000 + evaluation.days_left_for_evaluation
67 return 200000 + evaluation.days_left_for_evaluation
68
69
70 # from http://www.jongales.com/blog/2009/10/19/percentage-django-template-tag/
71 @register.filter
72 def percentage(fraction, population):
73 try:
74 return "{0:.0f}%".format(int(float(fraction) / float(population) * 100))
75 except ValueError:
76 return None
77 except ZeroDivisionError:
78 return None
79
80
81 @register.filter
82 def percentage_one_decimal(fraction, population):
83 try:
84 return "{0:.1f}%".format((float(fraction) / float(population)) * 100)
85 except ValueError:
86 return None
87 except ZeroDivisionError:
88 return None
89
90
91 @register.filter
92 def to_colors(choices):
93 if not choices:
94 # When displaying the course distribution, there are no associated voting choices.
95 # In that case, we just use the colors of a unipolar scale.
96 return BASE_UNIPOLAR_CHOICES['colors']
97 return choices.colors
98
99
100 @register.filter
101 def statename(state):
102 return STATE_NAMES.get(state)
103
104
105 @register.filter
106 def statedescription(state):
107 return STATE_DESCRIPTIONS.get(state)
108
109
110 @register.filter
111 def approval_state_values(state):
112 if state in APPROVAL_STATES:
113 return APPROVAL_STATES[state]
114 elif state in ['in_evaluation', 'evaluated', 'reviewed', 'published']:
115 return APPROVAL_STATES['approved']
116 return None
117
118
119 @register.filter
120 def approval_state_icon(state):
121 if state in APPROVAL_STATES:
122 return APPROVAL_STATES[state].icon
123 elif state in ['in_evaluation', 'evaluated', 'reviewed', 'published']:
124 return APPROVAL_STATES['approved'].icon
125 return None
126
127
128 @register.filter
129 def can_results_page_be_seen_by(evaluation, user):
130 return evaluation.can_results_page_be_seen_by(user)
131
132
133 @register.filter(name='can_reward_points_be_used_by')
134 def _can_reward_points_be_used_by(user):
135 return can_reward_points_be_used_by(user)
136
137
138 @register.filter
139 def is_choice_field(field):
140 return isinstance(field.field, TypedChoiceField)
141
142
143 @register.filter
144 def is_heading_field(field):
145 return isinstance(field.field, HeadingField)
146
147
148 @register.filter
149 def is_user_editor_or_delegate(evaluation, user):
150 return evaluation.is_user_editor_or_delegate(user)
151
152
153 @register.filter
154 def is_user_responsible_or_contributor_or_delegate(evaluation, user):
155 return evaluation.is_user_responsible_or_contributor_or_delegate(user)
156
157 @register.filter
158 def message_class(level):
159 return {
160 'debug': 'info',
161 'info': 'info',
162 'success': 'success',
163 'warning': 'warning',
164 'error': 'danger',
165 }.get(level, 'info')
166
167
168 @register.filter
169 def hours_and_minutes(time_left_for_evaluation):
170 hours = time_left_for_evaluation.seconds // 3600
171 minutes = (time_left_for_evaluation.seconds // 60) % 60
172 return "{:02}:{:02}".format(hours, minutes)
173
174
175 @register.filter
176 def has_nonresponsible_editor(evaluation):
177 return evaluation.contributions.filter(can_edit=True).exclude(contributor__in=evaluation.course.responsibles.all()).exists()
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/evap/evaluation/templatetags/evaluation_filters.py b/evap/evaluation/templatetags/evaluation_filters.py
--- a/evap/evaluation/templatetags/evaluation_filters.py
+++ b/evap/evaluation/templatetags/evaluation_filters.py
@@ -38,10 +38,10 @@
# values for approval states shown to staff
StateValues = namedtuple('StateValues', ('order', 'icon', 'filter', 'description'))
APPROVAL_STATES = {
- 'new': StateValues(0, 'fas fa-circle icon-yellow', 'fa-circle icon-yellow', _('In preparation')),
- 'prepared': StateValues(2, 'far fa-square icon-gray', 'fa-square icon-gray', _('Awaiting editor review')),
- 'editor_approved': StateValues(1, 'far fa-check-square icon-yellow', 'fa-check-square icon-yellow', _('Approved by editor, awaiting manager review')),
- 'approved': StateValues(3, 'far fa-check-square icon-green', 'fa-check-square icon-green', _('Approved by manager')),
+ 'new': StateValues(0, 'fas fa-circle icon-yellow', 'new', _('In preparation')),
+ 'prepared': StateValues(2, 'far fa-square icon-gray', 'prepared', _('Awaiting editor review')),
+ 'editor_approved': StateValues(1, 'far fa-check-square icon-yellow', 'editor_approved', _('Approved by editor, awaiting manager review')),
+ 'approved': StateValues(3, 'far fa-check-square icon-green', 'approved', _('Approved by manager')),
}
| {"golden_diff": "diff --git a/evap/evaluation/templatetags/evaluation_filters.py b/evap/evaluation/templatetags/evaluation_filters.py\n--- a/evap/evaluation/templatetags/evaluation_filters.py\n+++ b/evap/evaluation/templatetags/evaluation_filters.py\n@@ -38,10 +38,10 @@\n # values for approval states shown to staff\n StateValues = namedtuple('StateValues', ('order', 'icon', 'filter', 'description'))\n APPROVAL_STATES = {\n- 'new': StateValues(0, 'fas fa-circle icon-yellow', 'fa-circle icon-yellow', _('In preparation')),\n- 'prepared': StateValues(2, 'far fa-square icon-gray', 'fa-square icon-gray', _('Awaiting editor review')),\n- 'editor_approved': StateValues(1, 'far fa-check-square icon-yellow', 'fa-check-square icon-yellow', _('Approved by editor, awaiting manager review')),\n- 'approved': StateValues(3, 'far fa-check-square icon-green', 'fa-check-square icon-green', _('Approved by manager')),\n+ 'new': StateValues(0, 'fas fa-circle icon-yellow', 'new', _('In preparation')),\n+ 'prepared': StateValues(2, 'far fa-square icon-gray', 'prepared', _('Awaiting editor review')),\n+ 'editor_approved': StateValues(1, 'far fa-check-square icon-yellow', 'editor_approved', _('Approved by editor, awaiting manager review')),\n+ 'approved': StateValues(3, 'far fa-check-square icon-green', 'approved', _('Approved by manager')),\n }\n", "issue": "Inline datatables localization files to speed up first paint\nRight now, datatables gets the localization file in form of a URL (see [datatables.html](https://github.com/fsr-itse/EvaP/blob/master/evap/evaluation/templates/datatables.html)), that is, it starts an ajax request when it starts processing the tables, and waits with processing until the result has been received.\r\n\r\nboth locales should be included into the compressed javascript or inlined into the html template so they are loaded earlier.\r\n\r\nwe do something similar for the [bootstrap datetimepicker](https://github.com/fsr-itse/EvaP/blob/028b6301e3eed446d93ae8675030d82c68d46886/evap/evaluation/templates/bootstrap_datetimepicker.html). unfortunately, it's not that easy in this case, since the localization files are json files, not javascript files.\r\n\r\none approach would be to turn the json files to js files, and simply putting the datastructure inside into a variable with the name of the corresponding locale.\n", "before_files": [{"content": "from collections import namedtuple\n\nfrom django.forms import TypedChoiceField\nfrom django.template import Library\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom evap.evaluation.models import BASE_UNIPOLAR_CHOICES\nfrom evap.rewards.tools import can_reward_points_be_used_by\nfrom evap.student.forms import HeadingField\n\n\n# the names displayed for contributors\nSTATE_NAMES = {\n 'new': _('new'),\n 'prepared': _('prepared'),\n 'editor_approved': _('editor approved'),\n 'approved': _('approved'),\n 'in_evaluation': _('in evaluation'),\n 'evaluated': _('evaluated'),\n 'reviewed': _('reviewed'),\n 'published': _('published'),\n}\n\n\n# the descriptions used in tooltips for contributors\nSTATE_DESCRIPTIONS = {\n 'new': _('The evaluation was newly created and will be prepared by the evaluation team.'),\n 'prepared': _('The evaluation was prepared by the evaluation team and is now available for editors.'),\n 'editor_approved': _('The evaluation was approved by an editor and will now be checked by the evaluation team.'),\n 'approved': _('All preparations are finished. 
The evaluation will begin once the defined start date is reached.'),\n 'in_evaluation': _('The evaluation is currently running until the defined end date is reached.'),\n 'evaluated': _('The evaluation has finished and will now be reviewed by the evaluation team.'),\n 'reviewed': _('The evaluation has finished and was reviewed by the evaluation team. You will receive an email when its results are published.'),\n 'published': _('The results for this evaluation have been published.'),\n}\n\n\n# values for approval states shown to staff\nStateValues = namedtuple('StateValues', ('order', 'icon', 'filter', 'description'))\nAPPROVAL_STATES = {\n 'new': StateValues(0, 'fas fa-circle icon-yellow', 'fa-circle icon-yellow', _('In preparation')),\n 'prepared': StateValues(2, 'far fa-square icon-gray', 'fa-square icon-gray', _('Awaiting editor review')),\n 'editor_approved': StateValues(1, 'far fa-check-square icon-yellow', 'fa-check-square icon-yellow', _('Approved by editor, awaiting manager review')),\n 'approved': StateValues(3, 'far fa-check-square icon-green', 'fa-check-square icon-green', _('Approved by manager')),\n}\n\n\nregister = Library()\n\n\[email protected](name='zip')\ndef _zip(a, b):\n return zip(a, b)\n\n\[email protected]()\ndef zip_choices(counts, choices):\n return zip(counts, choices.names, choices.colors, choices.values)\n\n\[email protected]\ndef ordering_index(evaluation):\n if evaluation.state in ['new', 'prepared', 'editor_approved', 'approved']:\n return evaluation.days_until_evaluation\n elif evaluation.state == \"in_evaluation\":\n return 100000 + evaluation.days_left_for_evaluation\n return 200000 + evaluation.days_left_for_evaluation\n\n\n# from http://www.jongales.com/blog/2009/10/19/percentage-django-template-tag/\[email protected]\ndef percentage(fraction, population):\n try:\n return \"{0:.0f}%\".format(int(float(fraction) / float(population) * 100))\n except ValueError:\n return None\n except ZeroDivisionError:\n return None\n\n\[email protected]\ndef percentage_one_decimal(fraction, population):\n try:\n return \"{0:.1f}%\".format((float(fraction) / float(population)) * 100)\n except ValueError:\n return None\n except ZeroDivisionError:\n return None\n\n\[email protected]\ndef to_colors(choices):\n if not choices:\n # When displaying the course distribution, there are no associated voting choices.\n # In that case, we just use the colors of a unipolar scale.\n return BASE_UNIPOLAR_CHOICES['colors']\n return choices.colors\n\n\[email protected]\ndef statename(state):\n return STATE_NAMES.get(state)\n\n\[email protected]\ndef statedescription(state):\n return STATE_DESCRIPTIONS.get(state)\n\n\[email protected]\ndef approval_state_values(state):\n if state in APPROVAL_STATES:\n return APPROVAL_STATES[state]\n elif state in ['in_evaluation', 'evaluated', 'reviewed', 'published']:\n return APPROVAL_STATES['approved']\n return None\n\n\[email protected]\ndef approval_state_icon(state):\n if state in APPROVAL_STATES:\n return APPROVAL_STATES[state].icon\n elif state in ['in_evaluation', 'evaluated', 'reviewed', 'published']:\n return APPROVAL_STATES['approved'].icon\n return None\n\n\[email protected]\ndef can_results_page_be_seen_by(evaluation, user):\n return evaluation.can_results_page_be_seen_by(user)\n\n\[email protected](name='can_reward_points_be_used_by')\ndef _can_reward_points_be_used_by(user):\n return can_reward_points_be_used_by(user)\n\n\[email protected]\ndef is_choice_field(field):\n return isinstance(field.field, TypedChoiceField)\n\n\[email 
protected]\ndef is_heading_field(field):\n return isinstance(field.field, HeadingField)\n\n\[email protected]\ndef is_user_editor_or_delegate(evaluation, user):\n return evaluation.is_user_editor_or_delegate(user)\n\n\[email protected]\ndef is_user_responsible_or_contributor_or_delegate(evaluation, user):\n return evaluation.is_user_responsible_or_contributor_or_delegate(user)\n\[email protected]\ndef message_class(level):\n return {\n 'debug': 'info',\n 'info': 'info',\n 'success': 'success',\n 'warning': 'warning',\n 'error': 'danger',\n }.get(level, 'info')\n\n\[email protected]\ndef hours_and_minutes(time_left_for_evaluation):\n hours = time_left_for_evaluation.seconds // 3600\n minutes = (time_left_for_evaluation.seconds // 60) % 60\n return \"{:02}:{:02}\".format(hours, minutes)\n\n\[email protected]\ndef has_nonresponsible_editor(evaluation):\n return evaluation.contributions.filter(can_edit=True).exclude(contributor__in=evaluation.course.responsibles.all()).exists()\n", "path": "evap/evaluation/templatetags/evaluation_filters.py"}], "after_files": [{"content": "from collections import namedtuple\n\nfrom django.forms import TypedChoiceField\nfrom django.template import Library\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom evap.evaluation.models import BASE_UNIPOLAR_CHOICES\nfrom evap.rewards.tools import can_reward_points_be_used_by\nfrom evap.student.forms import HeadingField\n\n\n# the names displayed for contributors\nSTATE_NAMES = {\n 'new': _('new'),\n 'prepared': _('prepared'),\n 'editor_approved': _('editor approved'),\n 'approved': _('approved'),\n 'in_evaluation': _('in evaluation'),\n 'evaluated': _('evaluated'),\n 'reviewed': _('reviewed'),\n 'published': _('published'),\n}\n\n\n# the descriptions used in tooltips for contributors\nSTATE_DESCRIPTIONS = {\n 'new': _('The evaluation was newly created and will be prepared by the evaluation team.'),\n 'prepared': _('The evaluation was prepared by the evaluation team and is now available for editors.'),\n 'editor_approved': _('The evaluation was approved by an editor and will now be checked by the evaluation team.'),\n 'approved': _('All preparations are finished. The evaluation will begin once the defined start date is reached.'),\n 'in_evaluation': _('The evaluation is currently running until the defined end date is reached.'),\n 'evaluated': _('The evaluation has finished and will now be reviewed by the evaluation team.'),\n 'reviewed': _('The evaluation has finished and was reviewed by the evaluation team. 
You will receive an email when its results are published.'),\n 'published': _('The results for this evaluation have been published.'),\n}\n\n\n# values for approval states shown to staff\nStateValues = namedtuple('StateValues', ('order', 'icon', 'filter', 'description'))\nAPPROVAL_STATES = {\n 'new': StateValues(0, 'fas fa-circle icon-yellow', 'new', _('In preparation')),\n 'prepared': StateValues(2, 'far fa-square icon-gray', 'prepared', _('Awaiting editor review')),\n 'editor_approved': StateValues(1, 'far fa-check-square icon-yellow', 'editor_approved', _('Approved by editor, awaiting manager review')),\n 'approved': StateValues(3, 'far fa-check-square icon-green', 'approved', _('Approved by manager')),\n}\n\n\nregister = Library()\n\n\[email protected](name='zip')\ndef _zip(a, b):\n return zip(a, b)\n\n\[email protected]()\ndef zip_choices(counts, choices):\n return zip(counts, choices.names, choices.colors, choices.values)\n\n\[email protected]\ndef ordering_index(evaluation):\n if evaluation.state in ['new', 'prepared', 'editor_approved', 'approved']:\n return evaluation.days_until_evaluation\n elif evaluation.state == \"in_evaluation\":\n return 100000 + evaluation.days_left_for_evaluation\n return 200000 + evaluation.days_left_for_evaluation\n\n\n# from http://www.jongales.com/blog/2009/10/19/percentage-django-template-tag/\[email protected]\ndef percentage(fraction, population):\n try:\n return \"{0:.0f}%\".format(int(float(fraction) / float(population) * 100))\n except ValueError:\n return None\n except ZeroDivisionError:\n return None\n\n\[email protected]\ndef percentage_one_decimal(fraction, population):\n try:\n return \"{0:.1f}%\".format((float(fraction) / float(population)) * 100)\n except ValueError:\n return None\n except ZeroDivisionError:\n return None\n\n\[email protected]\ndef to_colors(choices):\n if not choices:\n # When displaying the course distribution, there are no associated voting choices.\n # In that case, we just use the colors of a unipolar scale.\n return BASE_UNIPOLAR_CHOICES['colors']\n return choices.colors\n\n\[email protected]\ndef statename(state):\n return STATE_NAMES.get(state)\n\n\[email protected]\ndef statedescription(state):\n return STATE_DESCRIPTIONS.get(state)\n\n\[email protected]\ndef approval_state_values(state):\n if state in APPROVAL_STATES:\n return APPROVAL_STATES[state]\n elif state in ['in_evaluation', 'evaluated', 'reviewed', 'published']:\n return APPROVAL_STATES['approved']\n return None\n\n\[email protected]\ndef approval_state_icon(state):\n if state in APPROVAL_STATES:\n return APPROVAL_STATES[state].icon\n elif state in ['in_evaluation', 'evaluated', 'reviewed', 'published']:\n return APPROVAL_STATES['approved'].icon\n return None\n\n\[email protected]\ndef can_results_page_be_seen_by(evaluation, user):\n return evaluation.can_results_page_be_seen_by(user)\n\n\[email protected](name='can_reward_points_be_used_by')\ndef _can_reward_points_be_used_by(user):\n return can_reward_points_be_used_by(user)\n\n\[email protected]\ndef is_choice_field(field):\n return isinstance(field.field, TypedChoiceField)\n\n\[email protected]\ndef is_heading_field(field):\n return isinstance(field.field, HeadingField)\n\n\[email protected]\ndef is_user_editor_or_delegate(evaluation, user):\n return evaluation.is_user_editor_or_delegate(user)\n\n\[email protected]\ndef is_user_responsible_or_contributor_or_delegate(evaluation, user):\n return evaluation.is_user_responsible_or_contributor_or_delegate(user)\n\[email protected]\ndef 
message_class(level):\n return {\n 'debug': 'info',\n 'info': 'info',\n 'success': 'success',\n 'warning': 'warning',\n 'error': 'danger',\n }.get(level, 'info')\n\n\[email protected]\ndef hours_and_minutes(time_left_for_evaluation):\n hours = time_left_for_evaluation.seconds // 3600\n minutes = (time_left_for_evaluation.seconds // 60) % 60\n return \"{:02}:{:02}\".format(hours, minutes)\n\n\[email protected]\ndef has_nonresponsible_editor(evaluation):\n return evaluation.contributions.filter(can_edit=True).exclude(contributor__in=evaluation.course.responsibles.all()).exists()\n", "path": "evap/evaluation/templatetags/evaluation_filters.py"}]} | 2,242 | 346 |
gh_patches_debug_1918 | rasdani/github-patches | git_diff | projectmesa__mesa-1844 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
jupyterviz checkbox input change is not propagated
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mesa/experimental/jupyter_viz.py`
Content:
```
1 import threading
2
3 import matplotlib.pyplot as plt
4 import networkx as nx
5 import reacton.ipywidgets as widgets
6 import solara
7 from matplotlib.figure import Figure
8 from matplotlib.ticker import MaxNLocator
9
10 import mesa
11
12 # Avoid interactive backend
13 plt.switch_backend("agg")
14
15
16 @solara.component
17 def JupyterViz(
18 model_class,
19 model_params,
20 measures=None,
21 name="Mesa Model",
22 agent_portrayal=None,
23 space_drawer="default",
24 play_interval=150,
25 ):
26 """Initialize a component to visualize a model.
27 Args:
28 model_class: class of the model to instantiate
29 model_params: parameters for initializing the model
30 measures: list of callables or data attributes to plot
31 name: name for display
32 agent_portrayal: options for rendering agents (dictionary)
33 space_drawer: method to render the agent space for
34 the model; default implementation is :meth:`make_space`;
35 simulations with no space to visualize should
36 specify `space_drawer=False`
37 play_interval: play interval (default: 150)
38 """
39
40 current_step, set_current_step = solara.use_state(0)
41
42 # 1. Set up model parameters
43 user_params, fixed_params = split_model_params(model_params)
44 model_parameters, set_model_parameters = solara.use_state(
45 {**fixed_params, **{k: v["value"] for k, v in user_params.items()}}
46 )
47
48 # 2. Set up Model
49 def make_model():
50 model = model_class(**model_parameters)
51 set_current_step(0)
52 return model
53
54 reset_counter = solara.use_reactive(0)
55 model = solara.use_memo(
56 make_model, dependencies=[*list(model_parameters.values()), reset_counter.value]
57 )
58
59 def handle_change_model_params(name: str, value: any):
60 set_model_parameters({**model_parameters, name: value})
61
62 # 3. Set up UI
63 solara.Markdown(name)
64 UserInputs(user_params, on_change=handle_change_model_params)
65 ModelController(model, play_interval, current_step, set_current_step, reset_counter)
66
67 with solara.GridFixed(columns=2):
68 # 4. Space
69 if space_drawer == "default":
70 # draw with the default implementation
71 make_space(model, agent_portrayal)
72 elif space_drawer:
73 # if specified, draw agent space with an alternate renderer
74 space_drawer(model, agent_portrayal)
75 # otherwise, do nothing (do not draw space)
76
77 # 5. Plots
78 for measure in measures:
79 if callable(measure):
80 # Is a custom object
81 measure(model)
82 else:
83 make_plot(model, measure)
84
85
86 @solara.component
87 def ModelController(
88 model, play_interval, current_step, set_current_step, reset_counter
89 ):
90 playing = solara.use_reactive(False)
91 thread = solara.use_reactive(None)
92 # We track the previous step to detect if user resets the model via
93 # clicking the reset button or changing the parameters. If previous_step >
94 # current_step, it means a model reset happens while the simulation is
95 # still playing.
96 previous_step = solara.use_reactive(0)
97
98 def on_value_play(change):
99 if previous_step.value > current_step and current_step == 0:
100 # We add extra checks for current_step == 0, just to be sure.
101 # We automatically stop the playing if a model is reset.
102 playing.value = False
103 elif model.running:
104 do_step()
105 else:
106 playing.value = False
107
108 def do_step():
109 model.step()
110 previous_step.value = current_step
111 set_current_step(model.schedule.steps)
112
113 def do_play():
114 model.running = True
115 while model.running:
116 do_step()
117
118 def threaded_do_play():
119 if thread is not None and thread.is_alive():
120 return
121 thread.value = threading.Thread(target=do_play)
122 thread.start()
123
124 def do_pause():
125 if (thread is None) or (not thread.is_alive()):
126 return
127 model.running = False
128 thread.join()
129
130 def do_reset():
131 reset_counter.value += 1
132
133 with solara.Row():
134 solara.Button(label="Step", color="primary", on_click=do_step)
135 # This style is necessary so that the play widget has almost the same
136 # height as typical Solara buttons.
137 solara.Style(
138 """
139 .widget-play {
140 height: 30px;
141 }
142 """
143 )
144 widgets.Play(
145 value=0,
146 interval=play_interval,
147 repeat=True,
148 show_repeat=False,
149 on_value=on_value_play,
150 playing=playing.value,
151 on_playing=playing.set,
152 )
153 solara.Button(label="Reset", color="primary", on_click=do_reset)
154 solara.Markdown(md_text=f"**Step:** {current_step}")
155 # threaded_do_play is not used for now because it
156 # doesn't work in Google colab. We use
157 # ipywidgets.Play until it is fixed. The threading
158 # version is definite a much better implementation,
159 # if it works.
160 # solara.Button(label="▶", color="primary", on_click=viz.threaded_do_play)
161 # solara.Button(label="⏸︎", color="primary", on_click=viz.do_pause)
162 # solara.Button(label="Reset", color="primary", on_click=do_reset)
163
164
165 def split_model_params(model_params):
166 model_params_input = {}
167 model_params_fixed = {}
168 for k, v in model_params.items():
169 if check_param_is_fixed(v):
170 model_params_fixed[k] = v
171 else:
172 model_params_input[k] = v
173 return model_params_input, model_params_fixed
174
175
176 def check_param_is_fixed(param):
177 if not isinstance(param, dict):
178 return True
179 if "type" not in param:
180 return True
181
182
183 @solara.component
184 def UserInputs(user_params, on_change=None):
185 """Initialize user inputs for configurable model parameters.
186 Currently supports :class:`solara.SliderInt`, :class:`solara.SliderFloat`,
187 :class:`solara.Select`, and :class:`solara.Checkbox`.
188
189 Props:
190 user_params: dictionary with options for the input, including label,
191 min and max values, and other fields specific to the input type.
192 on_change: function to be called with (name, value) when the value of an input changes.
193 """
194
195 for name, options in user_params.items():
196 # label for the input is "label" from options or name
197 label = options.get("label", name)
198 input_type = options.get("type")
199
200 def change_handler(value, name=name):
201 on_change(name, value)
202
203 if input_type == "SliderInt":
204 solara.SliderInt(
205 label,
206 value=options.get("value"),
207 on_value=change_handler,
208 min=options.get("min"),
209 max=options.get("max"),
210 step=options.get("step"),
211 )
212 elif input_type == "SliderFloat":
213 solara.SliderFloat(
214 label,
215 value=options.get("value"),
216 on_value=change_handler,
217 min=options.get("min"),
218 max=options.get("max"),
219 step=options.get("step"),
220 )
221 elif input_type == "Select":
222 solara.Select(
223 label,
224 value=options.get("value"),
225 on_value=change_handler,
226 values=options.get("values"),
227 )
228 elif input_type == "Checkbox":
229 solara.Checkbox(
230 label=label,
231 value=options.get("value"),
232 )
233 else:
234 raise ValueError(f"{input_type} is not a supported input type")
235
236
237 def make_space(model, agent_portrayal):
238 space_fig = Figure()
239 space_ax = space_fig.subplots()
240 space = getattr(model, "grid", None)
241 if space is None:
242 # Sometimes the space is defined as model.space instead of model.grid
243 space = model.space
244 if isinstance(space, mesa.space.NetworkGrid):
245 _draw_network_grid(space, space_ax, agent_portrayal)
246 elif isinstance(space, mesa.space.ContinuousSpace):
247 _draw_continuous_space(space, space_ax, agent_portrayal)
248 else:
249 _draw_grid(space, space_ax, agent_portrayal)
250 space_ax.set_axis_off()
251 solara.FigureMatplotlib(space_fig, format="png")
252
253
254 def _draw_grid(space, space_ax, agent_portrayal):
255 def portray(g):
256 x = []
257 y = []
258 s = [] # size
259 c = [] # color
260 for i in range(g.width):
261 for j in range(g.height):
262 content = g._grid[i][j]
263 if not content:
264 continue
265 if not hasattr(content, "__iter__"):
266 # Is a single grid
267 content = [content]
268 for agent in content:
269 data = agent_portrayal(agent)
270 x.append(i)
271 y.append(j)
272 if "size" in data:
273 s.append(data["size"])
274 if "color" in data:
275 c.append(data["color"])
276 out = {"x": x, "y": y}
277 if len(s) > 0:
278 out["s"] = s
279 if len(c) > 0:
280 out["c"] = c
281 return out
282
283 space_ax.scatter(**portray(space))
284
285
286 def _draw_network_grid(space, space_ax, agent_portrayal):
287 graph = space.G
288 pos = nx.spring_layout(graph, seed=0)
289 nx.draw(
290 graph,
291 ax=space_ax,
292 pos=pos,
293 **agent_portrayal(graph),
294 )
295
296
297 def _draw_continuous_space(space, space_ax, agent_portrayal):
298 def portray(space):
299 x = []
300 y = []
301 s = [] # size
302 c = [] # color
303 for agent in space._agent_to_index:
304 data = agent_portrayal(agent)
305 _x, _y = agent.pos
306 x.append(_x)
307 y.append(_y)
308 if "size" in data:
309 s.append(data["size"])
310 if "color" in data:
311 c.append(data["color"])
312 out = {"x": x, "y": y}
313 if len(s) > 0:
314 out["s"] = s
315 if len(c) > 0:
316 out["c"] = c
317 return out
318
319 space_ax.scatter(**portray(space))
320
321
322 def make_plot(model, measure):
323 fig = Figure()
324 ax = fig.subplots()
325 df = model.datacollector.get_model_vars_dataframe()
326 ax.plot(df.loc[:, measure])
327 ax.set_ylabel(measure)
328 # Set integer x axis
329 ax.xaxis.set_major_locator(MaxNLocator(integer=True))
330 solara.FigureMatplotlib(fig)
331
332
333 def make_text(renderer):
334 def function(model):
335 solara.Markdown(renderer(model))
336
337 return function
338
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mesa/experimental/jupyter_viz.py b/mesa/experimental/jupyter_viz.py
--- a/mesa/experimental/jupyter_viz.py
+++ b/mesa/experimental/jupyter_viz.py
@@ -228,6 +228,7 @@
elif input_type == "Checkbox":
solara.Checkbox(
label=label,
+ on_value=change_handler,
value=options.get("value"),
)
else:
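For context on the fix above, a hypothetical `model_params` entry that exercises this branch is shown below; the parameter names are invented for the example. With `on_value` wired up, toggling the checkbox now calls `change_handler`, which feeds the new value into `model_parameters` and re-instantiates the model:

```python
# Hypothetical parameter spec for JupyterViz; only dict-valued entries with a "type"
# become interactive inputs, plain values stay fixed (see split_model_params above).
model_params = {
    "enable_trade": {
        "type": "Checkbox",
        "label": "Enable trading",
        "value": True,
    },
    "num_agents": 50,  # fixed parameter, not user-adjustable
}
```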
| {"golden_diff": "diff --git a/mesa/experimental/jupyter_viz.py b/mesa/experimental/jupyter_viz.py\n--- a/mesa/experimental/jupyter_viz.py\n+++ b/mesa/experimental/jupyter_viz.py\n@@ -228,6 +228,7 @@\n elif input_type == \"Checkbox\":\n solara.Checkbox(\n label=label,\n+ on_value=change_handler,\n value=options.get(\"value\"),\n )\n else:\n", "issue": "jupyterviz checkbox input change is not propagated\n\n", "before_files": [{"content": "import threading\n\nimport matplotlib.pyplot as plt\nimport networkx as nx\nimport reacton.ipywidgets as widgets\nimport solara\nfrom matplotlib.figure import Figure\nfrom matplotlib.ticker import MaxNLocator\n\nimport mesa\n\n# Avoid interactive backend\nplt.switch_backend(\"agg\")\n\n\[email protected]\ndef JupyterViz(\n model_class,\n model_params,\n measures=None,\n name=\"Mesa Model\",\n agent_portrayal=None,\n space_drawer=\"default\",\n play_interval=150,\n):\n \"\"\"Initialize a component to visualize a model.\n Args:\n model_class: class of the model to instantiate\n model_params: parameters for initializing the model\n measures: list of callables or data attributes to plot\n name: name for display\n agent_portrayal: options for rendering agents (dictionary)\n space_drawer: method to render the agent space for\n the model; default implementation is :meth:`make_space`;\n simulations with no space to visualize should\n specify `space_drawer=False`\n play_interval: play interval (default: 150)\n \"\"\"\n\n current_step, set_current_step = solara.use_state(0)\n\n # 1. Set up model parameters\n user_params, fixed_params = split_model_params(model_params)\n model_parameters, set_model_parameters = solara.use_state(\n {**fixed_params, **{k: v[\"value\"] for k, v in user_params.items()}}\n )\n\n # 2. Set up Model\n def make_model():\n model = model_class(**model_parameters)\n set_current_step(0)\n return model\n\n reset_counter = solara.use_reactive(0)\n model = solara.use_memo(\n make_model, dependencies=[*list(model_parameters.values()), reset_counter.value]\n )\n\n def handle_change_model_params(name: str, value: any):\n set_model_parameters({**model_parameters, name: value})\n\n # 3. Set up UI\n solara.Markdown(name)\n UserInputs(user_params, on_change=handle_change_model_params)\n ModelController(model, play_interval, current_step, set_current_step, reset_counter)\n\n with solara.GridFixed(columns=2):\n # 4. Space\n if space_drawer == \"default\":\n # draw with the default implementation\n make_space(model, agent_portrayal)\n elif space_drawer:\n # if specified, draw agent space with an alternate renderer\n space_drawer(model, agent_portrayal)\n # otherwise, do nothing (do not draw space)\n\n # 5. Plots\n for measure in measures:\n if callable(measure):\n # Is a custom object\n measure(model)\n else:\n make_plot(model, measure)\n\n\[email protected]\ndef ModelController(\n model, play_interval, current_step, set_current_step, reset_counter\n):\n playing = solara.use_reactive(False)\n thread = solara.use_reactive(None)\n # We track the previous step to detect if user resets the model via\n # clicking the reset button or changing the parameters. 
If previous_step >\n # current_step, it means a model reset happens while the simulation is\n # still playing.\n previous_step = solara.use_reactive(0)\n\n def on_value_play(change):\n if previous_step.value > current_step and current_step == 0:\n # We add extra checks for current_step == 0, just to be sure.\n # We automatically stop the playing if a model is reset.\n playing.value = False\n elif model.running:\n do_step()\n else:\n playing.value = False\n\n def do_step():\n model.step()\n previous_step.value = current_step\n set_current_step(model.schedule.steps)\n\n def do_play():\n model.running = True\n while model.running:\n do_step()\n\n def threaded_do_play():\n if thread is not None and thread.is_alive():\n return\n thread.value = threading.Thread(target=do_play)\n thread.start()\n\n def do_pause():\n if (thread is None) or (not thread.is_alive()):\n return\n model.running = False\n thread.join()\n\n def do_reset():\n reset_counter.value += 1\n\n with solara.Row():\n solara.Button(label=\"Step\", color=\"primary\", on_click=do_step)\n # This style is necessary so that the play widget has almost the same\n # height as typical Solara buttons.\n solara.Style(\n \"\"\"\n .widget-play {\n height: 30px;\n }\n \"\"\"\n )\n widgets.Play(\n value=0,\n interval=play_interval,\n repeat=True,\n show_repeat=False,\n on_value=on_value_play,\n playing=playing.value,\n on_playing=playing.set,\n )\n solara.Button(label=\"Reset\", color=\"primary\", on_click=do_reset)\n solara.Markdown(md_text=f\"**Step:** {current_step}\")\n # threaded_do_play is not used for now because it\n # doesn't work in Google colab. We use\n # ipywidgets.Play until it is fixed. The threading\n # version is definite a much better implementation,\n # if it works.\n # solara.Button(label=\"\u25b6\", color=\"primary\", on_click=viz.threaded_do_play)\n # solara.Button(label=\"\u23f8\ufe0e\", color=\"primary\", on_click=viz.do_pause)\n # solara.Button(label=\"Reset\", color=\"primary\", on_click=do_reset)\n\n\ndef split_model_params(model_params):\n model_params_input = {}\n model_params_fixed = {}\n for k, v in model_params.items():\n if check_param_is_fixed(v):\n model_params_fixed[k] = v\n else:\n model_params_input[k] = v\n return model_params_input, model_params_fixed\n\n\ndef check_param_is_fixed(param):\n if not isinstance(param, dict):\n return True\n if \"type\" not in param:\n return True\n\n\[email protected]\ndef UserInputs(user_params, on_change=None):\n \"\"\"Initialize user inputs for configurable model parameters.\n Currently supports :class:`solara.SliderInt`, :class:`solara.SliderFloat`,\n :class:`solara.Select`, and :class:`solara.Checkbox`.\n\n Props:\n user_params: dictionary with options for the input, including label,\n min and max values, and other fields specific to the input type.\n on_change: function to be called with (name, value) when the value of an input changes.\n \"\"\"\n\n for name, options in user_params.items():\n # label for the input is \"label\" from options or name\n label = options.get(\"label\", name)\n input_type = options.get(\"type\")\n\n def change_handler(value, name=name):\n on_change(name, value)\n\n if input_type == \"SliderInt\":\n solara.SliderInt(\n label,\n value=options.get(\"value\"),\n on_value=change_handler,\n min=options.get(\"min\"),\n max=options.get(\"max\"),\n step=options.get(\"step\"),\n )\n elif input_type == \"SliderFloat\":\n solara.SliderFloat(\n label,\n value=options.get(\"value\"),\n on_value=change_handler,\n min=options.get(\"min\"),\n 
max=options.get(\"max\"),\n step=options.get(\"step\"),\n )\n elif input_type == \"Select\":\n solara.Select(\n label,\n value=options.get(\"value\"),\n on_value=change_handler,\n values=options.get(\"values\"),\n )\n elif input_type == \"Checkbox\":\n solara.Checkbox(\n label=label,\n value=options.get(\"value\"),\n )\n else:\n raise ValueError(f\"{input_type} is not a supported input type\")\n\n\ndef make_space(model, agent_portrayal):\n space_fig = Figure()\n space_ax = space_fig.subplots()\n space = getattr(model, \"grid\", None)\n if space is None:\n # Sometimes the space is defined as model.space instead of model.grid\n space = model.space\n if isinstance(space, mesa.space.NetworkGrid):\n _draw_network_grid(space, space_ax, agent_portrayal)\n elif isinstance(space, mesa.space.ContinuousSpace):\n _draw_continuous_space(space, space_ax, agent_portrayal)\n else:\n _draw_grid(space, space_ax, agent_portrayal)\n space_ax.set_axis_off()\n solara.FigureMatplotlib(space_fig, format=\"png\")\n\n\ndef _draw_grid(space, space_ax, agent_portrayal):\n def portray(g):\n x = []\n y = []\n s = [] # size\n c = [] # color\n for i in range(g.width):\n for j in range(g.height):\n content = g._grid[i][j]\n if not content:\n continue\n if not hasattr(content, \"__iter__\"):\n # Is a single grid\n content = [content]\n for agent in content:\n data = agent_portrayal(agent)\n x.append(i)\n y.append(j)\n if \"size\" in data:\n s.append(data[\"size\"])\n if \"color\" in data:\n c.append(data[\"color\"])\n out = {\"x\": x, \"y\": y}\n if len(s) > 0:\n out[\"s\"] = s\n if len(c) > 0:\n out[\"c\"] = c\n return out\n\n space_ax.scatter(**portray(space))\n\n\ndef _draw_network_grid(space, space_ax, agent_portrayal):\n graph = space.G\n pos = nx.spring_layout(graph, seed=0)\n nx.draw(\n graph,\n ax=space_ax,\n pos=pos,\n **agent_portrayal(graph),\n )\n\n\ndef _draw_continuous_space(space, space_ax, agent_portrayal):\n def portray(space):\n x = []\n y = []\n s = [] # size\n c = [] # color\n for agent in space._agent_to_index:\n data = agent_portrayal(agent)\n _x, _y = agent.pos\n x.append(_x)\n y.append(_y)\n if \"size\" in data:\n s.append(data[\"size\"])\n if \"color\" in data:\n c.append(data[\"color\"])\n out = {\"x\": x, \"y\": y}\n if len(s) > 0:\n out[\"s\"] = s\n if len(c) > 0:\n out[\"c\"] = c\n return out\n\n space_ax.scatter(**portray(space))\n\n\ndef make_plot(model, measure):\n fig = Figure()\n ax = fig.subplots()\n df = model.datacollector.get_model_vars_dataframe()\n ax.plot(df.loc[:, measure])\n ax.set_ylabel(measure)\n # Set integer x axis\n ax.xaxis.set_major_locator(MaxNLocator(integer=True))\n solara.FigureMatplotlib(fig)\n\n\ndef make_text(renderer):\n def function(model):\n solara.Markdown(renderer(model))\n\n return function\n", "path": "mesa/experimental/jupyter_viz.py"}], "after_files": [{"content": "import threading\n\nimport matplotlib.pyplot as plt\nimport networkx as nx\nimport reacton.ipywidgets as widgets\nimport solara\nfrom matplotlib.figure import Figure\nfrom matplotlib.ticker import MaxNLocator\n\nimport mesa\n\n# Avoid interactive backend\nplt.switch_backend(\"agg\")\n\n\[email protected]\ndef JupyterViz(\n model_class,\n model_params,\n measures=None,\n name=\"Mesa Model\",\n agent_portrayal=None,\n space_drawer=\"default\",\n play_interval=150,\n):\n \"\"\"Initialize a component to visualize a model.\n Args:\n model_class: class of the model to instantiate\n model_params: parameters for initializing the model\n measures: list of callables or data attributes to plot\n name: name 
for display\n agent_portrayal: options for rendering agents (dictionary)\n space_drawer: method to render the agent space for\n the model; default implementation is :meth:`make_space`;\n simulations with no space to visualize should\n specify `space_drawer=False`\n play_interval: play interval (default: 150)\n \"\"\"\n\n current_step, set_current_step = solara.use_state(0)\n\n # 1. Set up model parameters\n user_params, fixed_params = split_model_params(model_params)\n model_parameters, set_model_parameters = solara.use_state(\n {**fixed_params, **{k: v[\"value\"] for k, v in user_params.items()}}\n )\n\n # 2. Set up Model\n def make_model():\n model = model_class(**model_parameters)\n set_current_step(0)\n return model\n\n reset_counter = solara.use_reactive(0)\n model = solara.use_memo(\n make_model, dependencies=[*list(model_parameters.values()), reset_counter.value]\n )\n\n def handle_change_model_params(name: str, value: any):\n set_model_parameters({**model_parameters, name: value})\n\n # 3. Set up UI\n solara.Markdown(name)\n UserInputs(user_params, on_change=handle_change_model_params)\n ModelController(model, play_interval, current_step, set_current_step, reset_counter)\n\n with solara.GridFixed(columns=2):\n # 4. Space\n if space_drawer == \"default\":\n # draw with the default implementation\n make_space(model, agent_portrayal)\n elif space_drawer:\n # if specified, draw agent space with an alternate renderer\n space_drawer(model, agent_portrayal)\n # otherwise, do nothing (do not draw space)\n\n # 5. Plots\n for measure in measures:\n if callable(measure):\n # Is a custom object\n measure(model)\n else:\n make_plot(model, measure)\n\n\[email protected]\ndef ModelController(\n model, play_interval, current_step, set_current_step, reset_counter\n):\n playing = solara.use_reactive(False)\n thread = solara.use_reactive(None)\n # We track the previous step to detect if user resets the model via\n # clicking the reset button or changing the parameters. 
If previous_step >\n # current_step, it means a model reset happens while the simulation is\n # still playing.\n previous_step = solara.use_reactive(0)\n\n def on_value_play(change):\n if previous_step.value > current_step and current_step == 0:\n # We add extra checks for current_step == 0, just to be sure.\n # We automatically stop the playing if a model is reset.\n playing.value = False\n elif model.running:\n do_step()\n else:\n playing.value = False\n\n def do_step():\n model.step()\n previous_step.value = current_step\n set_current_step(model.schedule.steps)\n\n def do_play():\n model.running = True\n while model.running:\n do_step()\n\n def threaded_do_play():\n if thread is not None and thread.is_alive():\n return\n thread.value = threading.Thread(target=do_play)\n thread.start()\n\n def do_pause():\n if (thread is None) or (not thread.is_alive()):\n return\n model.running = False\n thread.join()\n\n def do_reset():\n reset_counter.value += 1\n\n with solara.Row():\n solara.Button(label=\"Step\", color=\"primary\", on_click=do_step)\n # This style is necessary so that the play widget has almost the same\n # height as typical Solara buttons.\n solara.Style(\n \"\"\"\n .widget-play {\n height: 30px;\n }\n \"\"\"\n )\n widgets.Play(\n value=0,\n interval=play_interval,\n repeat=True,\n show_repeat=False,\n on_value=on_value_play,\n playing=playing.value,\n on_playing=playing.set,\n )\n solara.Button(label=\"Reset\", color=\"primary\", on_click=do_reset)\n solara.Markdown(md_text=f\"**Step:** {current_step}\")\n # threaded_do_play is not used for now because it\n # doesn't work in Google colab. We use\n # ipywidgets.Play until it is fixed. The threading\n # version is definite a much better implementation,\n # if it works.\n # solara.Button(label=\"\u25b6\", color=\"primary\", on_click=viz.threaded_do_play)\n # solara.Button(label=\"\u23f8\ufe0e\", color=\"primary\", on_click=viz.do_pause)\n # solara.Button(label=\"Reset\", color=\"primary\", on_click=do_reset)\n\n\ndef split_model_params(model_params):\n model_params_input = {}\n model_params_fixed = {}\n for k, v in model_params.items():\n if check_param_is_fixed(v):\n model_params_fixed[k] = v\n else:\n model_params_input[k] = v\n return model_params_input, model_params_fixed\n\n\ndef check_param_is_fixed(param):\n if not isinstance(param, dict):\n return True\n if \"type\" not in param:\n return True\n\n\[email protected]\ndef UserInputs(user_params, on_change=None):\n \"\"\"Initialize user inputs for configurable model parameters.\n Currently supports :class:`solara.SliderInt`, :class:`solara.SliderFloat`,\n :class:`solara.Select`, and :class:`solara.Checkbox`.\n\n Props:\n user_params: dictionary with options for the input, including label,\n min and max values, and other fields specific to the input type.\n on_change: function to be called with (name, value) when the value of an input changes.\n \"\"\"\n\n for name, options in user_params.items():\n # label for the input is \"label\" from options or name\n label = options.get(\"label\", name)\n input_type = options.get(\"type\")\n\n def change_handler(value, name=name):\n on_change(name, value)\n\n if input_type == \"SliderInt\":\n solara.SliderInt(\n label,\n value=options.get(\"value\"),\n on_value=change_handler,\n min=options.get(\"min\"),\n max=options.get(\"max\"),\n step=options.get(\"step\"),\n )\n elif input_type == \"SliderFloat\":\n solara.SliderFloat(\n label,\n value=options.get(\"value\"),\n on_value=change_handler,\n min=options.get(\"min\"),\n 
max=options.get(\"max\"),\n step=options.get(\"step\"),\n )\n elif input_type == \"Select\":\n solara.Select(\n label,\n value=options.get(\"value\"),\n on_value=change_handler,\n values=options.get(\"values\"),\n )\n elif input_type == \"Checkbox\":\n solara.Checkbox(\n label=label,\n on_value=change_handler,\n value=options.get(\"value\"),\n )\n else:\n raise ValueError(f\"{input_type} is not a supported input type\")\n\n\ndef make_space(model, agent_portrayal):\n space_fig = Figure()\n space_ax = space_fig.subplots()\n space = getattr(model, \"grid\", None)\n if space is None:\n # Sometimes the space is defined as model.space instead of model.grid\n space = model.space\n if isinstance(space, mesa.space.NetworkGrid):\n _draw_network_grid(space, space_ax, agent_portrayal)\n elif isinstance(space, mesa.space.ContinuousSpace):\n _draw_continuous_space(space, space_ax, agent_portrayal)\n else:\n _draw_grid(space, space_ax, agent_portrayal)\n space_ax.set_axis_off()\n solara.FigureMatplotlib(space_fig, format=\"png\")\n\n\ndef _draw_grid(space, space_ax, agent_portrayal):\n def portray(g):\n x = []\n y = []\n s = [] # size\n c = [] # color\n for i in range(g.width):\n for j in range(g.height):\n content = g._grid[i][j]\n if not content:\n continue\n if not hasattr(content, \"__iter__\"):\n # Is a single grid\n content = [content]\n for agent in content:\n data = agent_portrayal(agent)\n x.append(i)\n y.append(j)\n if \"size\" in data:\n s.append(data[\"size\"])\n if \"color\" in data:\n c.append(data[\"color\"])\n out = {\"x\": x, \"y\": y}\n if len(s) > 0:\n out[\"s\"] = s\n if len(c) > 0:\n out[\"c\"] = c\n return out\n\n space_ax.scatter(**portray(space))\n\n\ndef _draw_network_grid(space, space_ax, agent_portrayal):\n graph = space.G\n pos = nx.spring_layout(graph, seed=0)\n nx.draw(\n graph,\n ax=space_ax,\n pos=pos,\n **agent_portrayal(graph),\n )\n\n\ndef _draw_continuous_space(space, space_ax, agent_portrayal):\n def portray(space):\n x = []\n y = []\n s = [] # size\n c = [] # color\n for agent in space._agent_to_index:\n data = agent_portrayal(agent)\n _x, _y = agent.pos\n x.append(_x)\n y.append(_y)\n if \"size\" in data:\n s.append(data[\"size\"])\n if \"color\" in data:\n c.append(data[\"color\"])\n out = {\"x\": x, \"y\": y}\n if len(s) > 0:\n out[\"s\"] = s\n if len(c) > 0:\n out[\"c\"] = c\n return out\n\n space_ax.scatter(**portray(space))\n\n\ndef make_plot(model, measure):\n fig = Figure()\n ax = fig.subplots()\n df = model.datacollector.get_model_vars_dataframe()\n ax.plot(df.loc[:, measure])\n ax.set_ylabel(measure)\n # Set integer x axis\n ax.xaxis.set_major_locator(MaxNLocator(integer=True))\n solara.FigureMatplotlib(fig)\n\n\ndef make_text(renderer):\n def function(model):\n solara.Markdown(renderer(model))\n\n return function\n", "path": "mesa/experimental/jupyter_viz.py"}]} | 3,565 | 100 |
gh_patches_debug_12756 | rasdani/github-patches | git_diff | xorbitsai__inference-351 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FEAT: support WizardMath v1.0
### Is your feature request related to a problem? Please describe
https://huggingface.co/WizardLM/WizardMath-13B-V1.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xinference/model/llm/utils.py`
Content:
```
1 # Copyright 2022-2023 XProbe Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Iterator, List
16
17 from xinference.model.llm.llm_family import PromptStyleV1
18
19 from ...types import (
20 ChatCompletion,
21 ChatCompletionChunk,
22 ChatCompletionMessage,
23 Completion,
24 CompletionChunk,
25 )
26
27
28 class ChatModelMixin:
29 @staticmethod
30 def get_prompt(
31 prompt: str,
32 chat_history: List[ChatCompletionMessage],
33 prompt_style: PromptStyleV1,
34 ) -> str:
35 """
36 Inspired by FastChat. Format chat history into a prompt according to the prompty style of
37 different models.
38 """
39 assert prompt_style.roles is not None
40 chat_history.append(
41 ChatCompletionMessage(role=prompt_style.roles[0], content=prompt)
42 )
43 chat_history.append(
44 ChatCompletionMessage(role=prompt_style.roles[1], content="")
45 )
46
47 if prompt_style.style_name == "ADD_COLON_SINGLE":
48 ret = prompt_style.system_prompt + prompt_style.intra_message_sep
49 for message in chat_history:
50 role = message["role"]
51 content = message["content"]
52 if content:
53 ret += role + ": " + content + prompt_style.intra_message_sep
54 else:
55 ret += role + ":"
56 return ret
57 elif prompt_style.style_name == "ADD_COLON_TWO":
58 seps = [prompt_style.intra_message_sep, prompt_style.inter_message_sep]
59 ret = prompt_style.system_prompt + seps[0]
60 for i, message in enumerate(chat_history):
61 role = message["role"]
62 content = message["content"]
63 if content:
64 ret += role + ": " + content + seps[i % 2]
65 else:
66 ret += role + ":"
67 return ret
68 elif prompt_style.style_name == "NO_COLON_TWO":
69 seps = [prompt_style.intra_message_sep, prompt_style.inter_message_sep]
70 ret = prompt_style.system_prompt
71 for i, message in enumerate(chat_history):
72 role = message["role"]
73 content = message["content"]
74 if content:
75 ret += role + content + seps[i % 2]
76 else:
77 ret += role
78 return ret
79 elif prompt_style.style_name == "LLAMA2":
80 seps = [prompt_style.intra_message_sep, prompt_style.inter_message_sep]
81 ret = ""
82 for i, message in enumerate(chat_history):
83 role = message["role"]
84 content = message["content"]
85 if content:
86 if i == 0:
87 ret += prompt_style.system_prompt + content
88 else:
89 ret += role + " " + content + seps[i % 2]
90 else:
91 ret += role
92 return ret
93 elif prompt_style.style_name == "FALCON":
94 ret = prompt_style.system_prompt
95 for message in chat_history:
96 role = message["role"]
97 content = message["content"]
98 if content:
99 ret += (
100 role
101 + ": "
102 + content.replace("\r\n", "\n").replace("\n\n", "\n")
103 )
104 ret += "\n\n"
105 else:
106 ret += role + ":"
107 return ret
108 elif prompt_style.style_name == "CHATGLM":
109 round_add_n = 1 if prompt_style.intra_message_sep == "\n\n" else 0
110 if prompt_style.system_prompt:
111 ret = prompt_style.system_prompt + prompt_style.intra_message_sep
112 else:
113 ret = ""
114 for i, message in enumerate(chat_history):
115 role = message["role"]
116 content = message["content"]
117 if i % 2 == 0:
118 ret += f"[Round {i // 2 + round_add_n}]{prompt_style.intra_message_sep}"
119 if content:
120 ret += role + ":" + content + prompt_style.intra_message_sep
121 else:
122 ret += role + ":"
123 return ret
124 elif prompt_style.style_name == "QWEN":
125 ret = f"<|im_start|>system\n{prompt_style.system_prompt}<|im_end|>"
126 for message in chat_history:
127 role = message["role"]
128 content = message["content"]
129
130 ret += prompt_style.intra_message_sep
131 if content:
132 ret += f"<|im_start|>{role}\n{content}<|im_end|>"
133 else:
134 ret += f"<|im_start|>{role}\n"
135 return ret
136 elif prompt_style.style_name == "CHATML":
137 ret = (
138 ""
139 if prompt_style.system_prompt == ""
140 else prompt_style.system_prompt + prompt_style.intra_message_sep + "\n"
141 )
142 for message in chat_history:
143 role = message["role"]
144 content = message["content"]
145
146 if content:
147 ret += role + "\n" + content + prompt_style.intra_message_sep + "\n"
148 else:
149 ret += role + "\n"
150 return ret
151 elif prompt_style.style_name == "INTERNLM":
152 seps = [prompt_style.intra_message_sep, prompt_style.inter_message_sep]
153 ret = ""
154 for i, message in enumerate(chat_history[:-2]):
155 if i % 2 == 0:
156 ret += "<s>"
157 role = message["role"]
158 content = message["content"]
159 ret += role + ":" + content + seps[i % 2]
160 if len(ret) == 0:
161 ret += "<s>"
162 ret += (
163 chat_history[-2]["role"] + ":" + chat_history[-2]["content"] + seps[0]
164 )
165 ret += chat_history[-1]["role"] + ":"
166 return ret
167 else:
168 raise ValueError(f"Invalid prompt style: {prompt_style.style_name}")
169
170 @staticmethod
171 def _convert_chat_completion_chunks_to_chat(
172 chunks: Iterator[CompletionChunk],
173 ) -> Iterator[ChatCompletionChunk]:
174 for i, chunk in enumerate(chunks):
175 if i == 0:
176 yield {
177 "id": "chat" + chunk["id"],
178 "model": chunk["model"],
179 "created": chunk["created"],
180 "object": "chat.completion.chunk",
181 "choices": [
182 {
183 "index": 0,
184 "delta": {
185 "role": "assistant",
186 },
187 "finish_reason": None,
188 }
189 ],
190 }
191 yield {
192 "id": "chat" + chunk["id"],
193 "model": chunk["model"],
194 "created": chunk["created"],
195 "object": "chat.completion.chunk",
196 "choices": [
197 {
198 "index": 0,
199 "delta": {
200 "content": chunk["choices"][0]["text"],
201 },
202 "finish_reason": chunk["choices"][0]["finish_reason"],
203 }
204 ],
205 }
206
207 @staticmethod
208 def _convert_text_completion_to_chat(completion: Completion) -> ChatCompletion:
209 return {
210 "id": "chat" + completion["id"],
211 "object": "chat.completion",
212 "created": completion["created"],
213 "model": completion["model"],
214 "choices": [
215 {
216 "index": 0,
217 "message": {
218 "role": "assistant",
219 "content": completion["choices"][0]["text"],
220 },
221 "finish_reason": completion["choices"][0]["finish_reason"],
222 }
223 ],
224 "usage": completion["usage"],
225 }
226
227
228 def is_valid_model_name(model_name: str) -> bool:
229 import re
230
231 return re.match(r"^[A-Za-z0-9][A-Za-z0-9_\-]*$", model_name) is not None
232
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/xinference/model/llm/utils.py b/xinference/model/llm/utils.py
--- a/xinference/model/llm/utils.py
+++ b/xinference/model/llm/utils.py
@@ -164,6 +164,16 @@
)
ret += chat_history[-1]["role"] + ":"
return ret
+ elif prompt_style.style_name == "ADD_COLON_SINGLE_COT":
+ ret = prompt_style.system_prompt + prompt_style.intra_message_sep
+ for message in chat_history:
+ role = message["role"]
+ content = message["content"]
+ if content:
+ ret += role + ": " + content + prompt_style.intra_message_sep
+ else:
+ ret += role + ": Let's think step by step."
+ return ret
else:
raise ValueError(f"Invalid prompt style: {prompt_style.style_name}")
| {"golden_diff": "diff --git a/xinference/model/llm/utils.py b/xinference/model/llm/utils.py\n--- a/xinference/model/llm/utils.py\n+++ b/xinference/model/llm/utils.py\n@@ -164,6 +164,16 @@\n )\n ret += chat_history[-1][\"role\"] + \":\"\n return ret\n+ elif prompt_style.style_name == \"ADD_COLON_SINGLE_COT\":\n+ ret = prompt_style.system_prompt + prompt_style.intra_message_sep\n+ for message in chat_history:\n+ role = message[\"role\"]\n+ content = message[\"content\"]\n+ if content:\n+ ret += role + \": \" + content + prompt_style.intra_message_sep\n+ else:\n+ ret += role + \": Let's think step by step.\"\n+ return ret\n else:\n raise ValueError(f\"Invalid prompt style: {prompt_style.style_name}\")\n", "issue": "FEAT: support WizardMath v1.0\n### Is your feature request related to a problem? Please describe\r\nhttps://huggingface.co/WizardLM/WizardMath-13B-V1.0\r\n\r\n\n", "before_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Iterator, List\n\nfrom xinference.model.llm.llm_family import PromptStyleV1\n\nfrom ...types import (\n ChatCompletion,\n ChatCompletionChunk,\n ChatCompletionMessage,\n Completion,\n CompletionChunk,\n)\n\n\nclass ChatModelMixin:\n @staticmethod\n def get_prompt(\n prompt: str,\n chat_history: List[ChatCompletionMessage],\n prompt_style: PromptStyleV1,\n ) -> str:\n \"\"\"\n Inspired by FastChat. 
Format chat history into a prompt according to the prompty style of\n different models.\n \"\"\"\n assert prompt_style.roles is not None\n chat_history.append(\n ChatCompletionMessage(role=prompt_style.roles[0], content=prompt)\n )\n chat_history.append(\n ChatCompletionMessage(role=prompt_style.roles[1], content=\"\")\n )\n\n if prompt_style.style_name == \"ADD_COLON_SINGLE\":\n ret = prompt_style.system_prompt + prompt_style.intra_message_sep\n for message in chat_history:\n role = message[\"role\"]\n content = message[\"content\"]\n if content:\n ret += role + \": \" + content + prompt_style.intra_message_sep\n else:\n ret += role + \":\"\n return ret\n elif prompt_style.style_name == \"ADD_COLON_TWO\":\n seps = [prompt_style.intra_message_sep, prompt_style.inter_message_sep]\n ret = prompt_style.system_prompt + seps[0]\n for i, message in enumerate(chat_history):\n role = message[\"role\"]\n content = message[\"content\"]\n if content:\n ret += role + \": \" + content + seps[i % 2]\n else:\n ret += role + \":\"\n return ret\n elif prompt_style.style_name == \"NO_COLON_TWO\":\n seps = [prompt_style.intra_message_sep, prompt_style.inter_message_sep]\n ret = prompt_style.system_prompt\n for i, message in enumerate(chat_history):\n role = message[\"role\"]\n content = message[\"content\"]\n if content:\n ret += role + content + seps[i % 2]\n else:\n ret += role\n return ret\n elif prompt_style.style_name == \"LLAMA2\":\n seps = [prompt_style.intra_message_sep, prompt_style.inter_message_sep]\n ret = \"\"\n for i, message in enumerate(chat_history):\n role = message[\"role\"]\n content = message[\"content\"]\n if content:\n if i == 0:\n ret += prompt_style.system_prompt + content\n else:\n ret += role + \" \" + content + seps[i % 2]\n else:\n ret += role\n return ret\n elif prompt_style.style_name == \"FALCON\":\n ret = prompt_style.system_prompt\n for message in chat_history:\n role = message[\"role\"]\n content = message[\"content\"]\n if content:\n ret += (\n role\n + \": \"\n + content.replace(\"\\r\\n\", \"\\n\").replace(\"\\n\\n\", \"\\n\")\n )\n ret += \"\\n\\n\"\n else:\n ret += role + \":\"\n return ret\n elif prompt_style.style_name == \"CHATGLM\":\n round_add_n = 1 if prompt_style.intra_message_sep == \"\\n\\n\" else 0\n if prompt_style.system_prompt:\n ret = prompt_style.system_prompt + prompt_style.intra_message_sep\n else:\n ret = \"\"\n for i, message in enumerate(chat_history):\n role = message[\"role\"]\n content = message[\"content\"]\n if i % 2 == 0:\n ret += f\"[Round {i // 2 + round_add_n}]{prompt_style.intra_message_sep}\"\n if content:\n ret += role + \"\uff1a\" + content + prompt_style.intra_message_sep\n else:\n ret += role + \"\uff1a\"\n return ret\n elif prompt_style.style_name == \"QWEN\":\n ret = f\"<|im_start|>system\\n{prompt_style.system_prompt}<|im_end|>\"\n for message in chat_history:\n role = message[\"role\"]\n content = message[\"content\"]\n\n ret += prompt_style.intra_message_sep\n if content:\n ret += f\"<|im_start|>{role}\\n{content}<|im_end|>\"\n else:\n ret += f\"<|im_start|>{role}\\n\"\n return ret\n elif prompt_style.style_name == \"CHATML\":\n ret = (\n \"\"\n if prompt_style.system_prompt == \"\"\n else prompt_style.system_prompt + prompt_style.intra_message_sep + \"\\n\"\n )\n for message in chat_history:\n role = message[\"role\"]\n content = message[\"content\"]\n\n if content:\n ret += role + \"\\n\" + content + prompt_style.intra_message_sep + \"\\n\"\n else:\n ret += role + \"\\n\"\n return ret\n elif prompt_style.style_name == 
\"INTERNLM\":\n seps = [prompt_style.intra_message_sep, prompt_style.inter_message_sep]\n ret = \"\"\n for i, message in enumerate(chat_history[:-2]):\n if i % 2 == 0:\n ret += \"<s>\"\n role = message[\"role\"]\n content = message[\"content\"]\n ret += role + \":\" + content + seps[i % 2]\n if len(ret) == 0:\n ret += \"<s>\"\n ret += (\n chat_history[-2][\"role\"] + \":\" + chat_history[-2][\"content\"] + seps[0]\n )\n ret += chat_history[-1][\"role\"] + \":\"\n return ret\n else:\n raise ValueError(f\"Invalid prompt style: {prompt_style.style_name}\")\n\n @staticmethod\n def _convert_chat_completion_chunks_to_chat(\n chunks: Iterator[CompletionChunk],\n ) -> Iterator[ChatCompletionChunk]:\n for i, chunk in enumerate(chunks):\n if i == 0:\n yield {\n \"id\": \"chat\" + chunk[\"id\"],\n \"model\": chunk[\"model\"],\n \"created\": chunk[\"created\"],\n \"object\": \"chat.completion.chunk\",\n \"choices\": [\n {\n \"index\": 0,\n \"delta\": {\n \"role\": \"assistant\",\n },\n \"finish_reason\": None,\n }\n ],\n }\n yield {\n \"id\": \"chat\" + chunk[\"id\"],\n \"model\": chunk[\"model\"],\n \"created\": chunk[\"created\"],\n \"object\": \"chat.completion.chunk\",\n \"choices\": [\n {\n \"index\": 0,\n \"delta\": {\n \"content\": chunk[\"choices\"][0][\"text\"],\n },\n \"finish_reason\": chunk[\"choices\"][0][\"finish_reason\"],\n }\n ],\n }\n\n @staticmethod\n def _convert_text_completion_to_chat(completion: Completion) -> ChatCompletion:\n return {\n \"id\": \"chat\" + completion[\"id\"],\n \"object\": \"chat.completion\",\n \"created\": completion[\"created\"],\n \"model\": completion[\"model\"],\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": completion[\"choices\"][0][\"text\"],\n },\n \"finish_reason\": completion[\"choices\"][0][\"finish_reason\"],\n }\n ],\n \"usage\": completion[\"usage\"],\n }\n\n\ndef is_valid_model_name(model_name: str) -> bool:\n import re\n\n return re.match(r\"^[A-Za-z0-9][A-Za-z0-9_\\-]*$\", model_name) is not None\n", "path": "xinference/model/llm/utils.py"}], "after_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Iterator, List\n\nfrom xinference.model.llm.llm_family import PromptStyleV1\n\nfrom ...types import (\n ChatCompletion,\n ChatCompletionChunk,\n ChatCompletionMessage,\n Completion,\n CompletionChunk,\n)\n\n\nclass ChatModelMixin:\n @staticmethod\n def get_prompt(\n prompt: str,\n chat_history: List[ChatCompletionMessage],\n prompt_style: PromptStyleV1,\n ) -> str:\n \"\"\"\n Inspired by FastChat. 
Format chat history into a prompt according to the prompty style of\n different models.\n \"\"\"\n assert prompt_style.roles is not None\n chat_history.append(\n ChatCompletionMessage(role=prompt_style.roles[0], content=prompt)\n )\n chat_history.append(\n ChatCompletionMessage(role=prompt_style.roles[1], content=\"\")\n )\n\n if prompt_style.style_name == \"ADD_COLON_SINGLE\":\n ret = prompt_style.system_prompt + prompt_style.intra_message_sep\n for message in chat_history:\n role = message[\"role\"]\n content = message[\"content\"]\n if content:\n ret += role + \": \" + content + prompt_style.intra_message_sep\n else:\n ret += role + \":\"\n return ret\n elif prompt_style.style_name == \"ADD_COLON_TWO\":\n seps = [prompt_style.intra_message_sep, prompt_style.inter_message_sep]\n ret = prompt_style.system_prompt + seps[0]\n for i, message in enumerate(chat_history):\n role = message[\"role\"]\n content = message[\"content\"]\n if content:\n ret += role + \": \" + content + seps[i % 2]\n else:\n ret += role + \":\"\n return ret\n elif prompt_style.style_name == \"NO_COLON_TWO\":\n seps = [prompt_style.intra_message_sep, prompt_style.inter_message_sep]\n ret = prompt_style.system_prompt\n for i, message in enumerate(chat_history):\n role = message[\"role\"]\n content = message[\"content\"]\n if content:\n ret += role + content + seps[i % 2]\n else:\n ret += role\n return ret\n elif prompt_style.style_name == \"LLAMA2\":\n seps = [prompt_style.intra_message_sep, prompt_style.inter_message_sep]\n ret = \"\"\n for i, message in enumerate(chat_history):\n role = message[\"role\"]\n content = message[\"content\"]\n if content:\n if i == 0:\n ret += prompt_style.system_prompt + content\n else:\n ret += role + \" \" + content + seps[i % 2]\n else:\n ret += role\n return ret\n elif prompt_style.style_name == \"FALCON\":\n ret = prompt_style.system_prompt\n for message in chat_history:\n role = message[\"role\"]\n content = message[\"content\"]\n if content:\n ret += (\n role\n + \": \"\n + content.replace(\"\\r\\n\", \"\\n\").replace(\"\\n\\n\", \"\\n\")\n )\n ret += \"\\n\\n\"\n else:\n ret += role + \":\"\n return ret\n elif prompt_style.style_name == \"CHATGLM\":\n round_add_n = 1 if prompt_style.intra_message_sep == \"\\n\\n\" else 0\n if prompt_style.system_prompt:\n ret = prompt_style.system_prompt + prompt_style.intra_message_sep\n else:\n ret = \"\"\n for i, message in enumerate(chat_history):\n role = message[\"role\"]\n content = message[\"content\"]\n if i % 2 == 0:\n ret += f\"[Round {i // 2 + round_add_n}]{prompt_style.intra_message_sep}\"\n if content:\n ret += role + \"\uff1a\" + content + prompt_style.intra_message_sep\n else:\n ret += role + \"\uff1a\"\n return ret\n elif prompt_style.style_name == \"QWEN\":\n ret = f\"<|im_start|>system\\n{prompt_style.system_prompt}<|im_end|>\"\n for message in chat_history:\n role = message[\"role\"]\n content = message[\"content\"]\n\n ret += prompt_style.intra_message_sep\n if content:\n ret += f\"<|im_start|>{role}\\n{content}<|im_end|>\"\n else:\n ret += f\"<|im_start|>{role}\\n\"\n return ret\n elif prompt_style.style_name == \"CHATML\":\n ret = (\n \"\"\n if prompt_style.system_prompt == \"\"\n else prompt_style.system_prompt + prompt_style.intra_message_sep + \"\\n\"\n )\n for message in chat_history:\n role = message[\"role\"]\n content = message[\"content\"]\n\n if content:\n ret += role + \"\\n\" + content + prompt_style.intra_message_sep + \"\\n\"\n else:\n ret += role + \"\\n\"\n return ret\n elif prompt_style.style_name == 
\"INTERNLM\":\n seps = [prompt_style.intra_message_sep, prompt_style.inter_message_sep]\n ret = \"\"\n for i, message in enumerate(chat_history[:-2]):\n if i % 2 == 0:\n ret += \"<s>\"\n role = message[\"role\"]\n content = message[\"content\"]\n ret += role + \":\" + content + seps[i % 2]\n if len(ret) == 0:\n ret += \"<s>\"\n ret += (\n chat_history[-2][\"role\"] + \":\" + chat_history[-2][\"content\"] + seps[0]\n )\n ret += chat_history[-1][\"role\"] + \":\"\n return ret\n elif prompt_style.style_name == \"ADD_COLON_SINGLE_COT\":\n ret = prompt_style.system_prompt + prompt_style.intra_message_sep\n for message in chat_history:\n role = message[\"role\"]\n content = message[\"content\"]\n if content:\n ret += role + \": \" + content + prompt_style.intra_message_sep\n else:\n ret += role + \": Let's think step by step.\"\n return ret\n else:\n raise ValueError(f\"Invalid prompt style: {prompt_style.style_name}\")\n\n @staticmethod\n def _convert_chat_completion_chunks_to_chat(\n chunks: Iterator[CompletionChunk],\n ) -> Iterator[ChatCompletionChunk]:\n for i, chunk in enumerate(chunks):\n if i == 0:\n yield {\n \"id\": \"chat\" + chunk[\"id\"],\n \"model\": chunk[\"model\"],\n \"created\": chunk[\"created\"],\n \"object\": \"chat.completion.chunk\",\n \"choices\": [\n {\n \"index\": 0,\n \"delta\": {\n \"role\": \"assistant\",\n },\n \"finish_reason\": None,\n }\n ],\n }\n yield {\n \"id\": \"chat\" + chunk[\"id\"],\n \"model\": chunk[\"model\"],\n \"created\": chunk[\"created\"],\n \"object\": \"chat.completion.chunk\",\n \"choices\": [\n {\n \"index\": 0,\n \"delta\": {\n \"content\": chunk[\"choices\"][0][\"text\"],\n },\n \"finish_reason\": chunk[\"choices\"][0][\"finish_reason\"],\n }\n ],\n }\n\n @staticmethod\n def _convert_text_completion_to_chat(completion: Completion) -> ChatCompletion:\n return {\n \"id\": \"chat\" + completion[\"id\"],\n \"object\": \"chat.completion\",\n \"created\": completion[\"created\"],\n \"model\": completion[\"model\"],\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": completion[\"choices\"][0][\"text\"],\n },\n \"finish_reason\": completion[\"choices\"][0][\"finish_reason\"],\n }\n ],\n \"usage\": completion[\"usage\"],\n }\n\n\ndef is_valid_model_name(model_name: str) -> bool:\n import re\n\n return re.match(r\"^[A-Za-z0-9][A-Za-z0-9_\\-]*$\", model_name) is not None\n", "path": "xinference/model/llm/utils.py"}]} | 2,640 | 202 |
gh_patches_debug_20648 | rasdani/github-patches | git_diff | microsoft__ptvsd-1253 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PTVSD_LOG_DIR doesn't work with VS
No logs are generated even with the environment variable set. It looks like logging initialization is missing on the VS entry point (`debugger.py`).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/ptvsd/debugger.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 import sys
6
7 from ptvsd._local import run_module, run_file, run_main
8
9
10 # TODO: not needed?
11 DONT_DEBUG = []
12
13 LOCALHOST = 'localhost'
14
15 RUNNERS = {
16 'module': run_module, # python -m spam
17 'script': run_file, # python spam.py
18 'code': run_file, # python -c 'print("spam")'
19 None: run_file, # catchall
20 }
21
22
23 def debug(filename, port_num, debug_id, debug_options, run_as,
24 _runners=RUNNERS, _extra=None, *args, **kwargs):
25 # TODO: docstring
26 if _extra is None:
27 _extra = sys.argv[1:]
28 address = (LOCALHOST, port_num)
29 try:
30 run = _runners[run_as]
31 except KeyError:
32 # TODO: fail?
33 run = _runners[None]
34 if _extra:
35 args = _extra + list(args)
36 kwargs.setdefault('singlesession', True)
37 run(address, filename, *args, **kwargs)
38
39
40 def run(filename, port_num, run_as,
41 *args, **kwargs):
42 address = (LOCALHOST, port_num)
43 run_main(address, filename, run_as, *args, **kwargs)
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/ptvsd/debugger.py b/src/ptvsd/debugger.py
--- a/src/ptvsd/debugger.py
+++ b/src/ptvsd/debugger.py
@@ -4,6 +4,7 @@
import sys
+import ptvsd.log
from ptvsd._local import run_module, run_file, run_main
@@ -22,7 +23,10 @@
def debug(filename, port_num, debug_id, debug_options, run_as,
_runners=RUNNERS, _extra=None, *args, **kwargs):
- # TODO: docstring
+
+ ptvsd.log.to_file()
+ ptvsd.log.info('debug{0!r}', (filename, port_num, debug_id, debug_options, run_as))
+
if _extra is None:
_extra = sys.argv[1:]
address = (LOCALHOST, port_num)
@@ -39,5 +43,9 @@
def run(filename, port_num, run_as,
*args, **kwargs):
+
+ ptvsd.log.to_file()
+ ptvsd.log.info('run{0!r}', (filename, port_num, run_as))
+
address = (LOCALHOST, port_num)
run_main(address, filename, run_as, *args, **kwargs)
| {"golden_diff": "diff --git a/src/ptvsd/debugger.py b/src/ptvsd/debugger.py\n--- a/src/ptvsd/debugger.py\n+++ b/src/ptvsd/debugger.py\n@@ -4,6 +4,7 @@\n \n import sys\n \n+import ptvsd.log\n from ptvsd._local import run_module, run_file, run_main\n \n \n@@ -22,7 +23,10 @@\n \n def debug(filename, port_num, debug_id, debug_options, run_as,\n _runners=RUNNERS, _extra=None, *args, **kwargs):\n- # TODO: docstring\n+\n+ ptvsd.log.to_file()\n+ ptvsd.log.info('debug{0!r}', (filename, port_num, debug_id, debug_options, run_as))\n+\n if _extra is None:\n _extra = sys.argv[1:]\n address = (LOCALHOST, port_num)\n@@ -39,5 +43,9 @@\n \n def run(filename, port_num, run_as,\n *args, **kwargs):\n+\n+ ptvsd.log.to_file()\n+ ptvsd.log.info('run{0!r}', (filename, port_num, run_as))\n+\n address = (LOCALHOST, port_num)\n run_main(address, filename, run_as, *args, **kwargs)\n", "issue": "PTVSD_LOG_DIR doesn't work with VS\nNo logs are generated even with the environment variable set. It looks like logging initialization is missing on the VS entry point (`debugger.py`).\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\nfrom ptvsd._local import run_module, run_file, run_main\n\n\n# TODO: not needed?\nDONT_DEBUG = []\n\nLOCALHOST = 'localhost'\n\nRUNNERS = {\n 'module': run_module, # python -m spam\n 'script': run_file, # python spam.py\n 'code': run_file, # python -c 'print(\"spam\")'\n None: run_file, # catchall\n}\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as,\n _runners=RUNNERS, _extra=None, *args, **kwargs):\n # TODO: docstring\n if _extra is None:\n _extra = sys.argv[1:]\n address = (LOCALHOST, port_num)\n try:\n run = _runners[run_as]\n except KeyError:\n # TODO: fail?\n run = _runners[None]\n if _extra:\n args = _extra + list(args)\n kwargs.setdefault('singlesession', True)\n run(address, filename, *args, **kwargs)\n\n\ndef run(filename, port_num, run_as,\n *args, **kwargs):\n address = (LOCALHOST, port_num)\n run_main(address, filename, run_as, *args, **kwargs)\n", "path": "src/ptvsd/debugger.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\nimport ptvsd.log\nfrom ptvsd._local import run_module, run_file, run_main\n\n\n# TODO: not needed?\nDONT_DEBUG = []\n\nLOCALHOST = 'localhost'\n\nRUNNERS = {\n 'module': run_module, # python -m spam\n 'script': run_file, # python spam.py\n 'code': run_file, # python -c 'print(\"spam\")'\n None: run_file, # catchall\n}\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as,\n _runners=RUNNERS, _extra=None, *args, **kwargs):\n\n ptvsd.log.to_file()\n ptvsd.log.info('debug{0!r}', (filename, port_num, debug_id, debug_options, run_as))\n\n if _extra is None:\n _extra = sys.argv[1:]\n address = (LOCALHOST, port_num)\n try:\n run = _runners[run_as]\n except KeyError:\n # TODO: fail?\n run = _runners[None]\n if _extra:\n args = _extra + list(args)\n kwargs.setdefault('singlesession', True)\n run(address, filename, *args, **kwargs)\n\n\ndef run(filename, port_num, run_as,\n *args, **kwargs):\n\n ptvsd.log.to_file()\n ptvsd.log.info('run{0!r}', (filename, port_num, run_as))\n\n address = (LOCALHOST, port_num)\n run_main(address, filename, run_as, *args, **kwargs)\n", "path": "src/ptvsd/debugger.py"}]} | 699 | 295 |
gh_patches_debug_36977 | rasdani/github-patches | git_diff | bridgecrewio__checkov-4942 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Failed to run check CKV_AWS_224: TemplateAttributeError: get is invalid
**Describe the issue**
Error occurs when checked ECS Cluster using terraform_plan framework.
**Examples**
```
module "cluster" {
source = "terraform-aws-modules/ecs/aws"
version = "4.1.3"
cluster_name = "foo"
fargate_capacity_providers = {
FARGATE = {}
}
}
```
**Version (please complete the following information):**
- checkov 2.3.165
- terraform 1.4.5
- aws provider 4.63.0
**Additional context**
traceback:
```
2023-04-18 09:53:09,676 [MainThread ] [ERROR] Failed to run check CKV_AWS_224 on /tfplan.json:aws_ecs_cluster.this
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/checkov/common/checks/base_check.py", line 73, in run
check_result["result"] = self.scan_entity_conf(entity_configuration, entity_type)
File "/usr/local/lib/python3.9/site-packages/checkov/terraform/checks/resource/base_resource_check.py", line 43, in scan_entity_conf
return self.scan_resource_conf(conf)
File "/usr/local/lib/python3.9/site-packages/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py", line 21, in scan_resource_conf
if log_conf.get('cloud_watch_encryption_enabled') == [True] or \
File "/usr/local/lib/python3.9/site-packages/checkov/common/parsers/node.py", line 189, in __getattr__
raise TemplateAttributeError(f'{name} is invalid')
checkov.common.parsers.node.TemplateAttributeError: get is invalid
```
This only occurs when using terraform_plan framework. It works without issue when using vanilla terraform framework.
The plan generation is just `terraform plan -out tfplan.bin && terraform show -json tfplan.bin > tfplan.json` then running `checkof -f tfplan.json`.
Here is my checkov config file in repo:
```
➜ cat .checkov.yaml
block-list-secret-scan: []
compact: true
download-external-modules: true
evaluate-variables: true
external-modules-download-path: .external_modules
file:
- tfplan.json
framework:
- terraform_plan
mask: []
quiet: true
repo-root-for-plan-enrichment:
- .
secrets-history-timeout: 12h
secrets-scan-file-type: []
skip-check:
- CKV2_AWS_34
summary-position: top
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3
4
5 class ECSClusterLoggingEncryptedWithCMK(BaseResourceCheck):
6 def __init__(self):
7 name = "Ensure Cluster logging with CMK"
8 id = "CKV_AWS_224"
9 supported_resources = ['aws_ecs_cluster']
10 categories = [CheckCategories.ENCRYPTION]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def scan_resource_conf(self, conf):
14 configuration = conf.get("configuration")
15 if configuration and isinstance(configuration[0], dict) and configuration[0].get('execute_command_configuration'):
16 command_conf = configuration[0].get('execute_command_configuration')[0]
17 if not command_conf.get('logging') == ['NONE']:
18 if command_conf.get('kms_key_id'):
19 if command_conf.get('log_configuration'):
20 log_conf = command_conf.get('log_configuration')[0]
21 if log_conf.get('cloud_watch_encryption_enabled') == [True] or \
22 log_conf.get('s3_bucket_encryption_enabled') == [True]:
23 return CheckResult.PASSED
24 return CheckResult.FAILED
25 else:
26 return CheckResult.FAILED
27
28 return CheckResult.UNKNOWN
29
30
31 check = ECSClusterLoggingEncryptedWithCMK()
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py b/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py
--- a/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py
+++ b/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py
@@ -1,28 +1,36 @@
+from __future__ import annotations
+
+from typing import Any
+
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
class ECSClusterLoggingEncryptedWithCMK(BaseResourceCheck):
- def __init__(self):
- name = "Ensure Cluster logging with CMK"
+ def __init__(self) -> None:
+ name = "Ensure ECS Cluster logging uses CMK"
id = "CKV_AWS_224"
- supported_resources = ['aws_ecs_cluster']
- categories = [CheckCategories.ENCRYPTION]
+ supported_resources = ("aws_ecs_cluster",)
+ categories = (CheckCategories.ENCRYPTION,)
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
- def scan_resource_conf(self, conf):
+ def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:
configuration = conf.get("configuration")
- if configuration and isinstance(configuration[0], dict) and configuration[0].get('execute_command_configuration'):
- command_conf = configuration[0].get('execute_command_configuration')[0]
- if not command_conf.get('logging') == ['NONE']:
- if command_conf.get('kms_key_id'):
- if command_conf.get('log_configuration'):
- log_conf = command_conf.get('log_configuration')[0]
- if log_conf.get('cloud_watch_encryption_enabled') == [True] or \
- log_conf.get('s3_bucket_encryption_enabled') == [True]:
- return CheckResult.PASSED
- return CheckResult.FAILED
- else:
+ if configuration and isinstance(configuration, list) and isinstance(configuration[0], dict):
+ execute_command = configuration[0].get("execute_command_configuration")
+ if execute_command and isinstance(execute_command, list):
+ execute_command = execute_command[0]
+ if isinstance(execute_command, dict) and not execute_command.get("logging") == ["NONE"]:
+ if execute_command.get("kms_key_id"):
+ log_conf = execute_command.get("log_configuration")
+ if log_conf and isinstance(log_conf, list):
+ log_conf = log_conf[0]
+ if isinstance(log_conf, dict) and (
+ log_conf.get("cloud_watch_encryption_enabled") == [True]
+ or log_conf.get("s3_bucket_encryption_enabled") == [True]
+ ):
+ return CheckResult.PASSED
+
return CheckResult.FAILED
return CheckResult.UNKNOWN
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py b/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py\n--- a/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py\n+++ b/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py\n@@ -1,28 +1,36 @@\n+from __future__ import annotations\n+\n+from typing import Any\n+\n from checkov.common.models.enums import CheckResult, CheckCategories\n from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n \n \n class ECSClusterLoggingEncryptedWithCMK(BaseResourceCheck):\n- def __init__(self):\n- name = \"Ensure Cluster logging with CMK\"\n+ def __init__(self) -> None:\n+ name = \"Ensure ECS Cluster logging uses CMK\"\n id = \"CKV_AWS_224\"\n- supported_resources = ['aws_ecs_cluster']\n- categories = [CheckCategories.ENCRYPTION]\n+ supported_resources = (\"aws_ecs_cluster\",)\n+ categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n- def scan_resource_conf(self, conf):\n+ def scan_resource_conf(self, conf: dict[str, list[Any]]) -> CheckResult:\n configuration = conf.get(\"configuration\")\n- if configuration and isinstance(configuration[0], dict) and configuration[0].get('execute_command_configuration'):\n- command_conf = configuration[0].get('execute_command_configuration')[0]\n- if not command_conf.get('logging') == ['NONE']:\n- if command_conf.get('kms_key_id'):\n- if command_conf.get('log_configuration'):\n- log_conf = command_conf.get('log_configuration')[0]\n- if log_conf.get('cloud_watch_encryption_enabled') == [True] or \\\n- log_conf.get('s3_bucket_encryption_enabled') == [True]:\n- return CheckResult.PASSED\n- return CheckResult.FAILED\n- else:\n+ if configuration and isinstance(configuration, list) and isinstance(configuration[0], dict):\n+ execute_command = configuration[0].get(\"execute_command_configuration\")\n+ if execute_command and isinstance(execute_command, list):\n+ execute_command = execute_command[0]\n+ if isinstance(execute_command, dict) and not execute_command.get(\"logging\") == [\"NONE\"]:\n+ if execute_command.get(\"kms_key_id\"):\n+ log_conf = execute_command.get(\"log_configuration\")\n+ if log_conf and isinstance(log_conf, list):\n+ log_conf = log_conf[0]\n+ if isinstance(log_conf, dict) and (\n+ log_conf.get(\"cloud_watch_encryption_enabled\") == [True]\n+ or log_conf.get(\"s3_bucket_encryption_enabled\") == [True]\n+ ):\n+ return CheckResult.PASSED\n+\n return CheckResult.FAILED\n \n return CheckResult.UNKNOWN\n", "issue": "Failed to run check CKV_AWS_224: TemplateAttributeError: get is invalid\n**Describe the issue**\r\nError occurs when checked ECS Cluster using terraform_plan framework.\r\n\r\n**Examples**\r\n```\r\nmodule \"cluster\" {\r\n source = \"terraform-aws-modules/ecs/aws\"\r\n version = \"4.1.3\"\r\n\r\n cluster_name = \"foo\"\r\n fargate_capacity_providers = {\r\n FARGATE = {}\r\n }\r\n}\r\n```\r\n\r\n**Version (please complete the following information):**\r\n- checkov 2.3.165\r\n- terraform 1.4.5\r\n- aws provider 4.63.0\r\n\r\n**Additional context**\r\ntraceback:\r\n```\r\n2023-04-18 09:53:09,676 [MainThread ] [ERROR] Failed to run check CKV_AWS_224 on /tfplan.json:aws_ecs_cluster.this\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/common/checks/base_check.py\", line 73, in run\r\n check_result[\"result\"] = 
self.scan_entity_conf(entity_configuration, entity_type)\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/terraform/checks/resource/base_resource_check.py\", line 43, in scan_entity_conf\r\n return self.scan_resource_conf(conf)\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py\", line 21, in scan_resource_conf\r\n if log_conf.get('cloud_watch_encryption_enabled') == [True] or \\\r\n File \"/usr/local/lib/python3.9/site-packages/checkov/common/parsers/node.py\", line 189, in __getattr__\r\n raise TemplateAttributeError(f'{name} is invalid')\r\ncheckov.common.parsers.node.TemplateAttributeError: get is invalid\r\n```\r\n\r\nThis only occurs when using terraform_plan framework. It works without issue when using vanilla terraform framework.\r\n\r\nThe plan generation is just `terraform plan -out tfplan.bin && terraform show -json tfplan.bin > tfplan.json` then running `checkof -f tfplan.json`.\r\n\r\nHere is my checkov config file in repo:\r\n```\r\n\u279c cat .checkov.yaml \r\nblock-list-secret-scan: []\r\ncompact: true\r\ndownload-external-modules: true\r\nevaluate-variables: true\r\nexternal-modules-download-path: .external_modules\r\nfile:\r\n- tfplan.json\r\nframework:\r\n- terraform_plan\r\nmask: []\r\nquiet: true\r\nrepo-root-for-plan-enrichment:\r\n- .\r\nsecrets-history-timeout: 12h\r\nsecrets-scan-file-type: []\r\nskip-check:\r\n- CKV2_AWS_34\r\nsummary-position: top\r\n```\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass ECSClusterLoggingEncryptedWithCMK(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure Cluster logging with CMK\"\n id = \"CKV_AWS_224\"\n supported_resources = ['aws_ecs_cluster']\n categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n configuration = conf.get(\"configuration\")\n if configuration and isinstance(configuration[0], dict) and configuration[0].get('execute_command_configuration'):\n command_conf = configuration[0].get('execute_command_configuration')[0]\n if not command_conf.get('logging') == ['NONE']:\n if command_conf.get('kms_key_id'):\n if command_conf.get('log_configuration'):\n log_conf = command_conf.get('log_configuration')[0]\n if log_conf.get('cloud_watch_encryption_enabled') == [True] or \\\n log_conf.get('s3_bucket_encryption_enabled') == [True]:\n return CheckResult.PASSED\n return CheckResult.FAILED\n else:\n return CheckResult.FAILED\n\n return CheckResult.UNKNOWN\n\n\ncheck = ECSClusterLoggingEncryptedWithCMK()\n", "path": "checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any\n\nfrom checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass ECSClusterLoggingEncryptedWithCMK(BaseResourceCheck):\n def __init__(self) -> None:\n name = \"Ensure ECS Cluster logging uses CMK\"\n id = \"CKV_AWS_224\"\n supported_resources = (\"aws_ecs_cluster\",)\n categories = (CheckCategories.ENCRYPTION,)\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf: dict[str, list[Any]]) -> 
CheckResult:\n configuration = conf.get(\"configuration\")\n if configuration and isinstance(configuration, list) and isinstance(configuration[0], dict):\n execute_command = configuration[0].get(\"execute_command_configuration\")\n if execute_command and isinstance(execute_command, list):\n execute_command = execute_command[0]\n if isinstance(execute_command, dict) and not execute_command.get(\"logging\") == [\"NONE\"]:\n if execute_command.get(\"kms_key_id\"):\n log_conf = execute_command.get(\"log_configuration\")\n if log_conf and isinstance(log_conf, list):\n log_conf = log_conf[0]\n if isinstance(log_conf, dict) and (\n log_conf.get(\"cloud_watch_encryption_enabled\") == [True]\n or log_conf.get(\"s3_bucket_encryption_enabled\") == [True]\n ):\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n return CheckResult.UNKNOWN\n\n\ncheck = ECSClusterLoggingEncryptedWithCMK()\n", "path": "checkov/terraform/checks/resource/aws/ECSClusterLoggingEncryptedWithCMK.py"}]} | 1,234 | 671 |
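The root cause in the record above is that terraform-plan parsing can hand the check parser `Node` values (or plain strings) where a dict was assumed, so chained `.get` calls raise `TemplateAttributeError`. The patched check wraps every nested lookup in `isinstance` guards; below is a condensed sketch of that defensive pattern, with return values standing in for the PASSED/FAILED/UNKNOWN results:

```python
# Condensed form of the guarded lookups used in the patched CKV_AWS_224 check.
def cluster_logging_uses_cmk(conf):
    configuration = conf.get("configuration")
    if not (configuration and isinstance(configuration, list) and isinstance(configuration[0], dict)):
        return None  # UNKNOWN: nothing that can be evaluated safely
    execute_command = configuration[0].get("execute_command_configuration")
    if execute_command and isinstance(execute_command, list):
        execute_command = execute_command[0]
        if isinstance(execute_command, dict) and execute_command.get("logging") != ["NONE"]:
            if execute_command.get("kms_key_id"):
                log_conf = execute_command.get("log_configuration")
                if log_conf and isinstance(log_conf, list):
                    log_conf = log_conf[0]
                    if isinstance(log_conf, dict) and (
                        log_conf.get("cloud_watch_encryption_enabled") == [True]
                        or log_conf.get("s3_bucket_encryption_enabled") == [True]
                    ):
                        return True  # PASSED
    return False  # FAILED
```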
gh_patches_debug_16904 | rasdani/github-patches | git_diff | saleor__saleor-5443 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Creating a new sale raises error in Celery task
### Steps to reproduce the problem
1. Run the following mutation as an admin user (with `MANAGE_DISCOUNTS` permission):
```
mutation {
saleCreate(input: {name: "Test"}) {
errors {
field
message
}
sale {
id
name
}
}
}
```
The response from API is successful, but in the Django server console I'm getting the following error:
```
ERROR celery.app.trace Task saleor.product.tasks.update_products_minimal_variant_prices_of_discount_task[4ec46245-d1f1-47ae-ab23-0c0ab73a9981] raised unexpected: ValueError('Provide at least one of the ID lists:\n\tproduct_ids,\n\tcategory_ids,\n\tcollection_ids.') [PID:31316:Thread-175]
Traceback (most recent call last):
File "/Users/marcin/.pyenv/versions/saleor3.8.1/lib/python3.8/site-packages/celery/app/trace.py", line 385, in trace_task
R = retval = fun(*args, **kwargs)
File "/Users/marcin/mirumee/saleor-platform/saleor/saleor/product/tasks.py", line 64, in update_products_minimal_variant_prices_of_discount_task
update_products_minimal_variant_prices_of_discount(discount)
File "/Users/marcin/mirumee/saleor-platform/saleor/saleor/product/utils/variant_prices.py", line 76, in update_products_minimal_variant_prices_of_discount
update_products_minimal_variant_prices_of_catalogues(
File "/Users/marcin/mirumee/saleor-platform/saleor/saleor/product/utils/variant_prices.py", line 62, in update_products_minimal_variant_prices_of_catalogues
raise ValueError(
ValueError: Provide at least one of the ID lists:
product_ids,
category_ids,
collection_ids.
```
I suppose that the Celery task that recalculates minimal variant prices is run even there are no products to update. Probably an additional check needs to be added to not run the task in this case.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/product/utils/variant_prices.py`
Content:
```
1 import operator
2 from functools import reduce
3
4 from django.db.models.query_utils import Q
5 from prices import Money
6
7 from ...discount.utils import fetch_active_discounts
8 from ..models import Product
9
10
11 def _get_product_minimal_variant_price(product, discounts) -> Money:
12 # Start with the product's price as the minimal one
13 minimal_variant_price = product.price
14 for variant in product.variants.all():
15 variant_price = variant.get_price(discounts=discounts)
16 minimal_variant_price = min(minimal_variant_price, variant_price)
17 return minimal_variant_price
18
19
20 def update_product_minimal_variant_price(product, discounts=None, save=True):
21 if discounts is None:
22 discounts = fetch_active_discounts()
23 minimal_variant_price = _get_product_minimal_variant_price(product, discounts)
24 if product.minimal_variant_price != minimal_variant_price:
25 product.minimal_variant_price_amount = minimal_variant_price.amount
26 if save:
27 product.save(update_fields=["minimal_variant_price_amount", "updated_at"])
28 return product
29
30
31 def update_products_minimal_variant_prices(products, discounts=None):
32 if discounts is None:
33 discounts = fetch_active_discounts()
34 changed_products_to_update = []
35 for product in products:
36 old_minimal_variant_price = product.minimal_variant_price
37 updated_product = update_product_minimal_variant_price(
38 product, discounts, save=False
39 )
40 # Check if the "minimal_variant_price" has changed
41 if updated_product.minimal_variant_price != old_minimal_variant_price:
42 changed_products_to_update.append(updated_product)
43 # Bulk update the changed products
44 Product.objects.bulk_update(
45 changed_products_to_update, ["minimal_variant_price_amount"]
46 )
47
48
49 def update_products_minimal_variant_prices_of_catalogues(
50 product_ids=None, category_ids=None, collection_ids=None
51 ):
52 # Building the matching products query
53 q_list = []
54 if product_ids:
55 q_list.append(Q(pk__in=product_ids))
56 if category_ids:
57 q_list.append(Q(category_id__in=category_ids))
58 if collection_ids:
59 q_list.append(Q(collectionproduct__collection_id__in=collection_ids))
60 # Asserting that the function was called with some ids
61 if not q_list:
62 raise ValueError(
63 "Provide at least one of the ID lists:\n"
64 "\tproduct_ids,\n"
65 "\tcategory_ids,\n"
66 "\tcollection_ids."
67 )
68 # Querying the products
69 q_or = reduce(operator.or_, q_list)
70 products = Product.objects.filter(q_or).distinct()
71
72 update_products_minimal_variant_prices(products)
73
74
75 def update_products_minimal_variant_prices_of_discount(discount):
76 update_products_minimal_variant_prices_of_catalogues(
77 product_ids=discount.products.all().values_list("id", flat=True),
78 category_ids=discount.categories.all().values_list("id", flat=True),
79 collection_ids=discount.collections.all().values_list("id", flat=True),
80 )
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/product/utils/variant_prices.py b/saleor/product/utils/variant_prices.py
--- a/saleor/product/utils/variant_prices.py
+++ b/saleor/product/utils/variant_prices.py
@@ -58,18 +58,12 @@
if collection_ids:
q_list.append(Q(collectionproduct__collection_id__in=collection_ids))
# Asserting that the function was called with some ids
- if not q_list:
- raise ValueError(
- "Provide at least one of the ID lists:\n"
- "\tproduct_ids,\n"
- "\tcategory_ids,\n"
- "\tcollection_ids."
- )
- # Querying the products
- q_or = reduce(operator.or_, q_list)
- products = Product.objects.filter(q_or).distinct()
+ if q_list:
+ # Querying the products
+ q_or = reduce(operator.or_, q_list)
+ products = Product.objects.filter(q_or).distinct()
- update_products_minimal_variant_prices(products)
+ update_products_minimal_variant_prices(products)
def update_products_minimal_variant_prices_of_discount(discount):
| {"golden_diff": "diff --git a/saleor/product/utils/variant_prices.py b/saleor/product/utils/variant_prices.py\n--- a/saleor/product/utils/variant_prices.py\n+++ b/saleor/product/utils/variant_prices.py\n@@ -58,18 +58,12 @@\n if collection_ids:\n q_list.append(Q(collectionproduct__collection_id__in=collection_ids))\n # Asserting that the function was called with some ids\n- if not q_list:\n- raise ValueError(\n- \"Provide at least one of the ID lists:\\n\"\n- \"\\tproduct_ids,\\n\"\n- \"\\tcategory_ids,\\n\"\n- \"\\tcollection_ids.\"\n- )\n- # Querying the products\n- q_or = reduce(operator.or_, q_list)\n- products = Product.objects.filter(q_or).distinct()\n+ if q_list:\n+ # Querying the products\n+ q_or = reduce(operator.or_, q_list)\n+ products = Product.objects.filter(q_or).distinct()\n \n- update_products_minimal_variant_prices(products)\n+ update_products_minimal_variant_prices(products)\n \n \n def update_products_minimal_variant_prices_of_discount(discount):\n", "issue": "Creating a new sale raises error in Celery task\n### Steps to reproduce the problem\r\n1. Run the following mutation as an admin user (with `MANAGE_DISCOUNTS` permission):\r\n```\r\nmutation {\r\n saleCreate(input: {name: \"Test\"}) {\r\n errors {\r\n field\r\n message\r\n }\r\n sale {\r\n id\r\n name\r\n }\r\n }\r\n}\r\n```\r\n\r\nThe response from API is successful, but in the Django server console I'm getting the following error:\r\n\r\n```\r\nERROR celery.app.trace Task saleor.product.tasks.update_products_minimal_variant_prices_of_discount_task[4ec46245-d1f1-47ae-ab23-0c0ab73a9981] raised unexpected: ValueError('Provide at least one of the ID lists:\\n\\tproduct_ids,\\n\\tcategory_ids,\\n\\tcollection_ids.') [PID:31316:Thread-175]\r\nTraceback (most recent call last):\r\n File \"/Users/marcin/.pyenv/versions/saleor3.8.1/lib/python3.8/site-packages/celery/app/trace.py\", line 385, in trace_task\r\n R = retval = fun(*args, **kwargs)\r\n File \"/Users/marcin/mirumee/saleor-platform/saleor/saleor/product/tasks.py\", line 64, in update_products_minimal_variant_prices_of_discount_task\r\n update_products_minimal_variant_prices_of_discount(discount)\r\n File \"/Users/marcin/mirumee/saleor-platform/saleor/saleor/product/utils/variant_prices.py\", line 76, in update_products_minimal_variant_prices_of_discount\r\n update_products_minimal_variant_prices_of_catalogues(\r\n File \"/Users/marcin/mirumee/saleor-platform/saleor/saleor/product/utils/variant_prices.py\", line 62, in update_products_minimal_variant_prices_of_catalogues\r\n raise ValueError(\r\nValueError: Provide at least one of the ID lists:\r\n\tproduct_ids,\r\n\tcategory_ids,\r\n\tcollection_ids.\r\n```\r\n\r\nI suppose that the Celery task that recalculates minimal variant prices is run even there are no products to update. 
Probably an additional check needs to be added to not run the task in this case.\n", "before_files": [{"content": "import operator\nfrom functools import reduce\n\nfrom django.db.models.query_utils import Q\nfrom prices import Money\n\nfrom ...discount.utils import fetch_active_discounts\nfrom ..models import Product\n\n\ndef _get_product_minimal_variant_price(product, discounts) -> Money:\n # Start with the product's price as the minimal one\n minimal_variant_price = product.price\n for variant in product.variants.all():\n variant_price = variant.get_price(discounts=discounts)\n minimal_variant_price = min(minimal_variant_price, variant_price)\n return minimal_variant_price\n\n\ndef update_product_minimal_variant_price(product, discounts=None, save=True):\n if discounts is None:\n discounts = fetch_active_discounts()\n minimal_variant_price = _get_product_minimal_variant_price(product, discounts)\n if product.minimal_variant_price != minimal_variant_price:\n product.minimal_variant_price_amount = minimal_variant_price.amount\n if save:\n product.save(update_fields=[\"minimal_variant_price_amount\", \"updated_at\"])\n return product\n\n\ndef update_products_minimal_variant_prices(products, discounts=None):\n if discounts is None:\n discounts = fetch_active_discounts()\n changed_products_to_update = []\n for product in products:\n old_minimal_variant_price = product.minimal_variant_price\n updated_product = update_product_minimal_variant_price(\n product, discounts, save=False\n )\n # Check if the \"minimal_variant_price\" has changed\n if updated_product.minimal_variant_price != old_minimal_variant_price:\n changed_products_to_update.append(updated_product)\n # Bulk update the changed products\n Product.objects.bulk_update(\n changed_products_to_update, [\"minimal_variant_price_amount\"]\n )\n\n\ndef update_products_minimal_variant_prices_of_catalogues(\n product_ids=None, category_ids=None, collection_ids=None\n):\n # Building the matching products query\n q_list = []\n if product_ids:\n q_list.append(Q(pk__in=product_ids))\n if category_ids:\n q_list.append(Q(category_id__in=category_ids))\n if collection_ids:\n q_list.append(Q(collectionproduct__collection_id__in=collection_ids))\n # Asserting that the function was called with some ids\n if not q_list:\n raise ValueError(\n \"Provide at least one of the ID lists:\\n\"\n \"\\tproduct_ids,\\n\"\n \"\\tcategory_ids,\\n\"\n \"\\tcollection_ids.\"\n )\n # Querying the products\n q_or = reduce(operator.or_, q_list)\n products = Product.objects.filter(q_or).distinct()\n\n update_products_minimal_variant_prices(products)\n\n\ndef update_products_minimal_variant_prices_of_discount(discount):\n update_products_minimal_variant_prices_of_catalogues(\n product_ids=discount.products.all().values_list(\"id\", flat=True),\n category_ids=discount.categories.all().values_list(\"id\", flat=True),\n collection_ids=discount.collections.all().values_list(\"id\", flat=True),\n )\n", "path": "saleor/product/utils/variant_prices.py"}], "after_files": [{"content": "import operator\nfrom functools import reduce\n\nfrom django.db.models.query_utils import Q\nfrom prices import Money\n\nfrom ...discount.utils import fetch_active_discounts\nfrom ..models import Product\n\n\ndef _get_product_minimal_variant_price(product, discounts) -> Money:\n # Start with the product's price as the minimal one\n minimal_variant_price = product.price\n for variant in product.variants.all():\n variant_price = variant.get_price(discounts=discounts)\n minimal_variant_price = 
min(minimal_variant_price, variant_price)\n return minimal_variant_price\n\n\ndef update_product_minimal_variant_price(product, discounts=None, save=True):\n if discounts is None:\n discounts = fetch_active_discounts()\n minimal_variant_price = _get_product_minimal_variant_price(product, discounts)\n if product.minimal_variant_price != minimal_variant_price:\n product.minimal_variant_price_amount = minimal_variant_price.amount\n if save:\n product.save(update_fields=[\"minimal_variant_price_amount\", \"updated_at\"])\n return product\n\n\ndef update_products_minimal_variant_prices(products, discounts=None):\n if discounts is None:\n discounts = fetch_active_discounts()\n changed_products_to_update = []\n for product in products:\n old_minimal_variant_price = product.minimal_variant_price\n updated_product = update_product_minimal_variant_price(\n product, discounts, save=False\n )\n # Check if the \"minimal_variant_price\" has changed\n if updated_product.minimal_variant_price != old_minimal_variant_price:\n changed_products_to_update.append(updated_product)\n # Bulk update the changed products\n Product.objects.bulk_update(\n changed_products_to_update, [\"minimal_variant_price_amount\"]\n )\n\n\ndef update_products_minimal_variant_prices_of_catalogues(\n product_ids=None, category_ids=None, collection_ids=None\n):\n # Building the matching products query\n q_list = []\n if product_ids:\n q_list.append(Q(pk__in=product_ids))\n if category_ids:\n q_list.append(Q(category_id__in=category_ids))\n if collection_ids:\n q_list.append(Q(collectionproduct__collection_id__in=collection_ids))\n # Asserting that the function was called with some ids\n if q_list:\n # Querying the products\n q_or = reduce(operator.or_, q_list)\n products = Product.objects.filter(q_or).distinct()\n\n update_products_minimal_variant_prices(products)\n\n\ndef update_products_minimal_variant_prices_of_discount(discount):\n update_products_minimal_variant_prices_of_catalogues(\n product_ids=discount.products.all().values_list(\"id\", flat=True),\n category_ids=discount.categories.all().values_list(\"id\", flat=True),\n collection_ids=discount.collections.all().values_list(\"id\", flat=True),\n )\n", "path": "saleor/product/utils/variant_prices.py"}]} | 1,523 | 254 |
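The golden diff in the record above removes the hard `ValueError` and simply skips the recalculation when every ID list is empty, which is exactly the situation right after `saleCreate` with no products, categories, or collections attached. A sketch of the guarded version follows; it relies on `Product` and `update_products_minimal_variant_prices` from the module quoted in this record:

```python
# Guarded catalogue update: only build and run the query when at least one
# ID list is non-empty, so an empty sale becomes a no-op instead of an error.
import operator
from functools import reduce

from django.db.models.query_utils import Q


def update_products_minimal_variant_prices_of_catalogues(
    product_ids=None, category_ids=None, collection_ids=None
):
    q_list = []
    if product_ids:
        q_list.append(Q(pk__in=product_ids))
    if category_ids:
        q_list.append(Q(category_id__in=category_ids))
    if collection_ids:
        q_list.append(Q(collectionproduct__collection_id__in=collection_ids))
    if q_list:
        products = Product.objects.filter(reduce(operator.or_, q_list)).distinct()
        update_products_minimal_variant_prices(products)
```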
gh_patches_debug_3508 | rasdani/github-patches | git_diff | translate__pootle-6497 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Top scorers list includes zero score users
The top scorer list in e.g. `/af/?details` includes a number of users with zero score.
I'm doubtful that these contributed in last 30 days. So they shouldn't be on the list at all.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_score/utils.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from datetime import date, datetime, timedelta
10
11 import pytz
12
13 from django.contrib.auth import get_user_model
14 from django.db.models import Sum
15 from django.utils.functional import cached_property
16
17 from pootle.core.decorators import persistent_property
18 from pootle.core.delegate import display, revision, scores
19 from pootle.core.utils.timezone import localdate, make_aware
20 from pootle_app.models import Directory
21 from pootle_language.models import Language
22
23 from .apps import PootleScoreConfig
24 from .models import UserTPScore
25
26
27 User = get_user_model()
28
29
30 def to_datetime(possible_dt):
31 if possible_dt is None:
32 return
33 if isinstance(possible_dt, datetime):
34 return possible_dt
35 if isinstance(possible_dt, date):
36 return make_aware(
37 datetime.combine(
38 possible_dt,
39 datetime.min.time())).astimezone(
40 pytz.timezone("UTC"))
41
42
43 class Scores(object):
44 ns = "pootle.score"
45 sw_version = PootleScoreConfig.version
46
47 def __init__(self, context):
48 self.context = context
49
50 @property
51 def revision(self):
52 return revision.get(Directory)(
53 self.context.directory).get(key="stats")
54
55 @property
56 def score_model(self):
57 return UserTPScore.objects.exclude(
58 user__username__in=User.objects.META_USERS)
59
60 def get_daterange(self, days):
61 now = localdate()
62 return now - timedelta(days), now
63
64 def scores_within_days(self, days):
65 return self.score_model.filter(
66 date__range=self.get_daterange(days))
67
68 def get_scores(self, days):
69 return self.filter_scores(self.scores_within_days(days))
70
71 def get_top_scorers(self, days=30):
72 """Returns users with the top scores.
73
74 :param days: period of days to account for scores.
75 """
76 return self.get_scores(days).order_by("user__username").values(
77 "user__username", "user__email", "user__full_name").annotate(
78 Sum("score"),
79 Sum("suggested"),
80 Sum("reviewed"),
81 Sum("translated")).order_by("-score__sum")
82
83 def filter_scores(self, qs):
84 return qs
85
86 @persistent_property
87 def top_scorers(self):
88 return tuple(self.get_top_scorers())
89
90 def display(self, offset=0, limit=5, language=None, formatter=None):
91 scorers = self.top_scorers
92 if offset or limit:
93 scorers = list(scorers)
94 if offset:
95 scorers = scorers[offset:]
96 if limit:
97 scorers = scorers[:limit]
98 return display.get(Scores)(
99 top_scores=scorers,
100 formatter=formatter,
101 language=language)
102
103
104 class LanguageScores(Scores):
105 ns = "pootle.score.language"
106
107 @cached_property
108 def cache_key(self):
109 return (
110 "%s.%s.%s"
111 % (self.context.code,
112 localdate(),
113 self.revision))
114
115 def filter_scores(self, qs):
116 return qs.filter(tp__language_id=self.context.id)
117
118
119 class ProjectScores(Scores):
120 ns = "pootle.score.project"
121
122 @cached_property
123 def cache_key(self):
124 return (
125 "%s.%s.%s"
126 % (self.context.code,
127 localdate(),
128 self.revision))
129
130 def filter_scores(self, qs):
131 return qs.filter(tp__project_id=self.context.id)
132
133
134 class ProjectSetScores(Scores):
135 ns = "pootle.score.projects"
136
137 @cached_property
138 def cache_key(self):
139 return (
140 "%s.%s"
141 % (localdate(),
142 self.revision))
143
144
145 class TPScores(Scores):
146 ns = "pootle.score.tp"
147
148 @cached_property
149 def cache_key(self):
150 return (
151 "%s/%s.%s.%s"
152 % (self.context.language.code,
153 self.context.project.code,
154 localdate(),
155 self.revision))
156
157 def filter_scores(self, qs):
158 return qs.filter(tp_id=self.context.id)
159
160
161 class UserScores(Scores):
162 ns = "pootle.score.user"
163
164 @cached_property
165 def cache_key(self):
166 return (
167 "%s.%s.%s"
168 % (self.context.id,
169 localdate(),
170 self.revision))
171
172 @property
173 def revision(self):
174 return revision.get(Directory)(
175 Directory.objects.projects).get(key="stats")
176
177 @property
178 def score_model(self):
179 return self.context.scores
180
181 @property
182 def public_score(self):
183 return self.context.public_score
184
185 @persistent_property
186 def top_language(self):
187 return self.get_top_language()
188
189 def get_top_language_within(self, days):
190 top_lang = self.get_scores_by_language(
191 days).order_by("score__sum").first()
192 if top_lang:
193 return Language.objects.get(id=top_lang["tp__language"])
194
195 def get_scores_by_language(self, days):
196 """Languages that the user has contributed to in the last `days`,
197 and the summary score
198 """
199 return self.get_scores(days).order_by(
200 "tp__language").values("tp__language").annotate(Sum("score"))
201
202 def get_language_top_scores(self, language):
203 return scores.get(language.__class__)(language).top_scorers
204
205 def get_top_language(self, days=30):
206 """Returns the top language the user has contributed to and its
207 position.
208
209 "Top language" is defined as the language with the highest
210 aggregate score delta within the last `days` days.
211
212 :param days: period of days to account for scores.
213 :return: Tuple of `(position, Language)`. If there's no delta in
214 the score for the given period for any of the languages,
215 `(-1, None)` is returned.
216 """
217 language = self.get_top_language_within(days)
218 if language:
219 # this only gets scores for the last 30 days as that is cached
220 language_scores = self.get_language_top_scores(language)
221 for index, user_score in enumerate(language_scores):
222 if user_score['user__username'] == self.context.username:
223 return index + 1, language
224 return -1, language
225
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/pootle_score/utils.py b/pootle/apps/pootle_score/utils.py
--- a/pootle/apps/pootle_score/utils.py
+++ b/pootle/apps/pootle_score/utils.py
@@ -78,7 +78,8 @@
Sum("score"),
Sum("suggested"),
Sum("reviewed"),
- Sum("translated")).order_by("-score__sum")
+ Sum("translated")).filter(
+ score__sum__gt=0).order_by("-score__sum")
def filter_scores(self, qs):
return qs
| {"golden_diff": "diff --git a/pootle/apps/pootle_score/utils.py b/pootle/apps/pootle_score/utils.py\n--- a/pootle/apps/pootle_score/utils.py\n+++ b/pootle/apps/pootle_score/utils.py\n@@ -78,7 +78,8 @@\n Sum(\"score\"),\n Sum(\"suggested\"),\n Sum(\"reviewed\"),\n- Sum(\"translated\")).order_by(\"-score__sum\")\n+ Sum(\"translated\")).filter(\n+ score__sum__gt=0).order_by(\"-score__sum\")\n \n def filter_scores(self, qs):\n return qs\n", "issue": "Top scorers list includes zero score users\nThe top scorer list in e.g. `/af/?details` includes a number of users with zero score.\r\n\r\nI'm doubtful that these contributed in last 30 days. So they shouldn't be on the list at all.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom datetime import date, datetime, timedelta\n\nimport pytz\n\nfrom django.contrib.auth import get_user_model\nfrom django.db.models import Sum\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.decorators import persistent_property\nfrom pootle.core.delegate import display, revision, scores\nfrom pootle.core.utils.timezone import localdate, make_aware\nfrom pootle_app.models import Directory\nfrom pootle_language.models import Language\n\nfrom .apps import PootleScoreConfig\nfrom .models import UserTPScore\n\n\nUser = get_user_model()\n\n\ndef to_datetime(possible_dt):\n if possible_dt is None:\n return\n if isinstance(possible_dt, datetime):\n return possible_dt\n if isinstance(possible_dt, date):\n return make_aware(\n datetime.combine(\n possible_dt,\n datetime.min.time())).astimezone(\n pytz.timezone(\"UTC\"))\n\n\nclass Scores(object):\n ns = \"pootle.score\"\n sw_version = PootleScoreConfig.version\n\n def __init__(self, context):\n self.context = context\n\n @property\n def revision(self):\n return revision.get(Directory)(\n self.context.directory).get(key=\"stats\")\n\n @property\n def score_model(self):\n return UserTPScore.objects.exclude(\n user__username__in=User.objects.META_USERS)\n\n def get_daterange(self, days):\n now = localdate()\n return now - timedelta(days), now\n\n def scores_within_days(self, days):\n return self.score_model.filter(\n date__range=self.get_daterange(days))\n\n def get_scores(self, days):\n return self.filter_scores(self.scores_within_days(days))\n\n def get_top_scorers(self, days=30):\n \"\"\"Returns users with the top scores.\n\n :param days: period of days to account for scores.\n \"\"\"\n return self.get_scores(days).order_by(\"user__username\").values(\n \"user__username\", \"user__email\", \"user__full_name\").annotate(\n Sum(\"score\"),\n Sum(\"suggested\"),\n Sum(\"reviewed\"),\n Sum(\"translated\")).order_by(\"-score__sum\")\n\n def filter_scores(self, qs):\n return qs\n\n @persistent_property\n def top_scorers(self):\n return tuple(self.get_top_scorers())\n\n def display(self, offset=0, limit=5, language=None, formatter=None):\n scorers = self.top_scorers\n if offset or limit:\n scorers = list(scorers)\n if offset:\n scorers = scorers[offset:]\n if limit:\n scorers = scorers[:limit]\n return display.get(Scores)(\n top_scores=scorers,\n formatter=formatter,\n language=language)\n\n\nclass LanguageScores(Scores):\n ns = \"pootle.score.language\"\n\n @cached_property\n def cache_key(self):\n return (\n \"%s.%s.%s\"\n % 
(self.context.code,\n localdate(),\n self.revision))\n\n def filter_scores(self, qs):\n return qs.filter(tp__language_id=self.context.id)\n\n\nclass ProjectScores(Scores):\n ns = \"pootle.score.project\"\n\n @cached_property\n def cache_key(self):\n return (\n \"%s.%s.%s\"\n % (self.context.code,\n localdate(),\n self.revision))\n\n def filter_scores(self, qs):\n return qs.filter(tp__project_id=self.context.id)\n\n\nclass ProjectSetScores(Scores):\n ns = \"pootle.score.projects\"\n\n @cached_property\n def cache_key(self):\n return (\n \"%s.%s\"\n % (localdate(),\n self.revision))\n\n\nclass TPScores(Scores):\n ns = \"pootle.score.tp\"\n\n @cached_property\n def cache_key(self):\n return (\n \"%s/%s.%s.%s\"\n % (self.context.language.code,\n self.context.project.code,\n localdate(),\n self.revision))\n\n def filter_scores(self, qs):\n return qs.filter(tp_id=self.context.id)\n\n\nclass UserScores(Scores):\n ns = \"pootle.score.user\"\n\n @cached_property\n def cache_key(self):\n return (\n \"%s.%s.%s\"\n % (self.context.id,\n localdate(),\n self.revision))\n\n @property\n def revision(self):\n return revision.get(Directory)(\n Directory.objects.projects).get(key=\"stats\")\n\n @property\n def score_model(self):\n return self.context.scores\n\n @property\n def public_score(self):\n return self.context.public_score\n\n @persistent_property\n def top_language(self):\n return self.get_top_language()\n\n def get_top_language_within(self, days):\n top_lang = self.get_scores_by_language(\n days).order_by(\"score__sum\").first()\n if top_lang:\n return Language.objects.get(id=top_lang[\"tp__language\"])\n\n def get_scores_by_language(self, days):\n \"\"\"Languages that the user has contributed to in the last `days`,\n and the summary score\n \"\"\"\n return self.get_scores(days).order_by(\n \"tp__language\").values(\"tp__language\").annotate(Sum(\"score\"))\n\n def get_language_top_scores(self, language):\n return scores.get(language.__class__)(language).top_scorers\n\n def get_top_language(self, days=30):\n \"\"\"Returns the top language the user has contributed to and its\n position.\n\n \"Top language\" is defined as the language with the highest\n aggregate score delta within the last `days` days.\n\n :param days: period of days to account for scores.\n :return: Tuple of `(position, Language)`. If there's no delta in\n the score for the given period for any of the languages,\n `(-1, None)` is returned.\n \"\"\"\n language = self.get_top_language_within(days)\n if language:\n # this only gets scores for the last 30 days as that is cached\n language_scores = self.get_language_top_scores(language)\n for index, user_score in enumerate(language_scores):\n if user_score['user__username'] == self.context.username:\n return index + 1, language\n return -1, language\n", "path": "pootle/apps/pootle_score/utils.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom datetime import date, datetime, timedelta\n\nimport pytz\n\nfrom django.contrib.auth import get_user_model\nfrom django.db.models import Sum\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.decorators import persistent_property\nfrom pootle.core.delegate import display, revision, scores\nfrom pootle.core.utils.timezone import localdate, make_aware\nfrom pootle_app.models import Directory\nfrom pootle_language.models import Language\n\nfrom .apps import PootleScoreConfig\nfrom .models import UserTPScore\n\n\nUser = get_user_model()\n\n\ndef to_datetime(possible_dt):\n if possible_dt is None:\n return\n if isinstance(possible_dt, datetime):\n return possible_dt\n if isinstance(possible_dt, date):\n return make_aware(\n datetime.combine(\n possible_dt,\n datetime.min.time())).astimezone(\n pytz.timezone(\"UTC\"))\n\n\nclass Scores(object):\n ns = \"pootle.score\"\n sw_version = PootleScoreConfig.version\n\n def __init__(self, context):\n self.context = context\n\n @property\n def revision(self):\n return revision.get(Directory)(\n self.context.directory).get(key=\"stats\")\n\n @property\n def score_model(self):\n return UserTPScore.objects.exclude(\n user__username__in=User.objects.META_USERS)\n\n def get_daterange(self, days):\n now = localdate()\n return now - timedelta(days), now\n\n def scores_within_days(self, days):\n return self.score_model.filter(\n date__range=self.get_daterange(days))\n\n def get_scores(self, days):\n return self.filter_scores(self.scores_within_days(days))\n\n def get_top_scorers(self, days=30):\n \"\"\"Returns users with the top scores.\n\n :param days: period of days to account for scores.\n \"\"\"\n return self.get_scores(days).order_by(\"user__username\").values(\n \"user__username\", \"user__email\", \"user__full_name\").annotate(\n Sum(\"score\"),\n Sum(\"suggested\"),\n Sum(\"reviewed\"),\n Sum(\"translated\")).filter(\n score__sum__gt=0).order_by(\"-score__sum\")\n\n def filter_scores(self, qs):\n return qs\n\n @persistent_property\n def top_scorers(self):\n return tuple(self.get_top_scorers())\n\n def display(self, offset=0, limit=5, language=None, formatter=None):\n scorers = self.top_scorers\n if offset or limit:\n scorers = list(scorers)\n if offset:\n scorers = scorers[offset:]\n if limit:\n scorers = scorers[:limit]\n return display.get(Scores)(\n top_scores=scorers,\n formatter=formatter,\n language=language)\n\n\nclass LanguageScores(Scores):\n ns = \"pootle.score.language\"\n\n @cached_property\n def cache_key(self):\n return (\n \"%s.%s.%s\"\n % (self.context.code,\n localdate(),\n self.revision))\n\n def filter_scores(self, qs):\n return qs.filter(tp__language_id=self.context.id)\n\n\nclass ProjectScores(Scores):\n ns = \"pootle.score.project\"\n\n @cached_property\n def cache_key(self):\n return (\n \"%s.%s.%s\"\n % (self.context.code,\n localdate(),\n self.revision))\n\n def filter_scores(self, qs):\n return qs.filter(tp__project_id=self.context.id)\n\n\nclass ProjectSetScores(Scores):\n ns = \"pootle.score.projects\"\n\n @cached_property\n def cache_key(self):\n return (\n \"%s.%s\"\n % (localdate(),\n self.revision))\n\n\nclass TPScores(Scores):\n ns = \"pootle.score.tp\"\n\n @cached_property\n def cache_key(self):\n return (\n \"%s/%s.%s.%s\"\n % (self.context.language.code,\n self.context.project.code,\n localdate(),\n self.revision))\n\n def filter_scores(self, qs):\n return 
qs.filter(tp_id=self.context.id)\n\n\nclass UserScores(Scores):\n ns = \"pootle.score.user\"\n\n @cached_property\n def cache_key(self):\n return (\n \"%s.%s.%s\"\n % (self.context.id,\n localdate(),\n self.revision))\n\n @property\n def revision(self):\n return revision.get(Directory)(\n Directory.objects.projects).get(key=\"stats\")\n\n @property\n def score_model(self):\n return self.context.scores\n\n @property\n def public_score(self):\n return self.context.public_score\n\n @persistent_property\n def top_language(self):\n return self.get_top_language()\n\n def get_top_language_within(self, days):\n top_lang = self.get_scores_by_language(\n days).order_by(\"score__sum\").first()\n if top_lang:\n return Language.objects.get(id=top_lang[\"tp__language\"])\n\n def get_scores_by_language(self, days):\n \"\"\"Languages that the user has contributed to in the last `days`,\n and the summary score\n \"\"\"\n return self.get_scores(days).order_by(\n \"tp__language\").values(\"tp__language\").annotate(Sum(\"score\"))\n\n def get_language_top_scores(self, language):\n return scores.get(language.__class__)(language).top_scorers\n\n def get_top_language(self, days=30):\n \"\"\"Returns the top language the user has contributed to and its\n position.\n\n \"Top language\" is defined as the language with the highest\n aggregate score delta within the last `days` days.\n\n :param days: period of days to account for scores.\n :return: Tuple of `(position, Language)`. If there's no delta in\n the score for the given period for any of the languages,\n `(-1, None)` is returned.\n \"\"\"\n language = self.get_top_language_within(days)\n if language:\n # this only gets scores for the last 30 days as that is cached\n language_scores = self.get_language_top_scores(language)\n for index, user_score in enumerate(language_scores):\n if user_score['user__username'] == self.context.username:\n return index + 1, language\n return -1, language\n", "path": "pootle/apps/pootle_score/utils.py"}]} | 2,414 | 132 |
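For reference, a minimal plain-Python sketch of what the patched `get_top_scorers` query is meant to achieve: rows whose aggregated score is zero are dropped before ordering. The dict keys mirror Django's default annotation alias `score__sum`, and the usernames are hypothetical; this illustrates only the filtering logic, not the actual ORM call used in Pootle.

```python
# Hypothetical rows shaped like the values() queryset in get_top_scorers().
rows = [
    {"user__username": "alice", "score__sum": 120},
    {"user__username": "bob", "score__sum": 0},
    {"user__username": "carol", "score__sum": 45},
]

# Mirrors .filter(score__sum__gt=0).order_by("-score__sum") from the patch.
top_scorers = sorted(
    (row for row in rows if row["score__sum"] > 0),
    key=lambda row: row["score__sum"],
    reverse=True,
)

assert [row["user__username"] for row in top_scorers] == ["alice", "carol"]
```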
gh_patches_debug_15631 | rasdani/github-patches | git_diff | chainer__chainer-7401 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve error message when ChainerX is unavailable
When a user attempts to use the `cuda:0` device, ChainerX says the device specifier is invalid if `chainerx.is_available()` is `False`. It seems a bit difficult to deduce the actual problem from the message.
```
$ python train_mnist.py -d cuda:0
Traceback (most recent call last):
File "train_mnist.py", line 134, in <module>
main()
File "train_mnist.py", line 56, in main
device = chainer.get_device(args.device)
File "/path/to/chainer/chainer/backend.py", line 157, in get_device
raise ValueError('Invalid device specifier: {}'.format(device_spec))
ValueError: Invalid device specifier: cuda:0
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/backend.py`
Content:
```
1 import numpy
2 import six
3
4 import chainer
5 from chainer.backends import _chainerx
6 from chainer.backends import _cpu
7 from chainer.backends import cuda
8 from chainer.backends import intel64
9 import chainerx
10
11 # Aliases
12 from chainer._backend import Device
13 from chainer.backends._chainerx import ChainerxDevice
14 from chainer.backends._chainerx import from_chx # NOQA
15 from chainer.backends._chainerx import to_chx # NOQA
16 from chainer.backends._cpu import CpuDevice
17 from chainer.backends.cuda import GpuDevice
18 from chainer.backends.intel64 import Intel64Device
19 from chainer import types # NOQA
20
21
22 def _contains_nan(x):
23 """Returns whether the input array has NaN values.
24
25 Args:
26 x (numpy.ndarray or cupy.ndarray): Array to be checked.
27
28 Returns:
29 bool: True if the input has NaN values.
30
31 """
32 if x.dtype.kind in ('f', 'c'):
33 device = get_device_from_array(x)
34 with chainer.using_device(device):
35 return device.xp.isnan(x).any()
36 else:
37 return False
38
39
40 def copyto(dst, src):
41 """Copies the elements of an ndarray to those of another one.
42
43 This function can copy the CPU/GPU arrays to the destination arrays on
44 another device.
45
46 Args:
47 dst (`numpy.ndarray`, `cupy.ndarray` or `ideep4py.mdarray`):
48 Destination array.
49 src (`numpy.ndarray`, `cupy.ndarray` or `ideep4py.mdarray`):
50 Source array.
51
52 """
53 if isinstance(dst, chainerx.ndarray):
54 dst[...] = _chainerx._array_to_chainerx(src, dst.device)
55 elif isinstance(dst, numpy.ndarray):
56 numpy.copyto(dst, _cpu._to_cpu(src))
57 elif isinstance(dst, intel64.mdarray):
58 intel64.ideep.basic_copyto(
59 dst, _cpu._to_cpu(src))
60 elif isinstance(dst, cuda.ndarray):
61 if isinstance(src, chainer.get_cpu_array_types()):
62 src = numpy.asarray(src)
63 if dst.flags.c_contiguous or dst.flags.f_contiguous:
64 dst.set(src)
65 else:
66 cuda.cupy.copyto(dst, cuda.to_gpu(src, device=dst.device))
67 elif isinstance(src, cuda.ndarray):
68 cuda.cupy.copyto(dst, src)
69 else:
70 raise TypeError('cannot copy from non-array object of type {}'
71 .format(type(src)))
72 else:
73 raise TypeError('cannot copy to non-array object of type {}'.format(
74 type(dst)))
75
76
77 def _guess_device_from_array_module(xp):
78 """Returns a plausible device from array module
79
80 .. warning::
81
82 There can be multiple devices for a module
83
84 """
85 if xp is cuda.cupy:
86 return cuda.GpuDevice(cuda.Device())
87 elif xp is chainerx:
88 return _chainerx.ChainerxDevice(chainerx.get_default_device())
89 else:
90 # Cannot detect intel64, because xp of intel64 is numpy.
91 return _cpu.CpuDevice()
92
93
94 def get_device(device_spec):
95 # type: (types.DeviceSpec) -> Device
96 """Returns a device object.
97
98 Args:
99 device_spec (object): Device specifier.
100 If a :class:`chainer.backend.Device` instance is given, it is
101 returned intact. Otherwise the following values are supported:
102
103 * ChainerX devices
104
105 * A string representing a device.
106 (ex. ``'native:0'``, ``'native'``)
107 * A :class:`chainerx.Device` object.
108
109 * CuPy
110
111 * A string starts with ``'@cupy:'``.
112 (ex. ``'@cupy:0'``)
113 * A :class:`cupy.cuda.Device` object.
114
115 * NumPy
116
117 * The string ``'@numpy'``.
118
119 * NumPy with Intel Architecture
120
121 * The string ``'@intel64'``.
122 """
123 if isinstance(device_spec, Device):
124 return device_spec
125
126 if isinstance(device_spec, cuda._integer_types):
127 return _get_device_cupy_or_numpy(device_spec)
128
129 if chainerx.is_available() and isinstance(device_spec, chainerx.Device):
130 return _chainerx.ChainerxDevice(device_spec)
131
132 if cuda.available and isinstance(device_spec, cuda.Device):
133 return cuda.GpuDevice(device_spec)
134
135 if isinstance(device_spec, six.string_types):
136 # '-1', '0', '1', ...
137 try:
138 int_device_spec = int(device_spec)
139 except ValueError:
140 pass
141 else:
142 return _get_device_cupy_or_numpy(int_device_spec)
143
144 if device_spec.startswith('@'):
145 # '@module:...'
146 mod_name, colon, precise_spec = device_spec[1:].partition(':')
147 if mod_name == 'numpy':
148 if not colon:
149 return _cpu.CpuDevice()
150 elif mod_name == 'cupy':
151 if colon:
152 return cuda.GpuDevice.from_device_id(int(precise_spec))
153 elif mod_name == 'intel64':
154 if not colon:
155 return intel64.Intel64Device()
156
157 elif chainerx.is_available():
158 return _chainerx.ChainerxDevice(chainerx.get_device(device_spec))
159
160 raise ValueError('Invalid device specifier: {}'.format(device_spec))
161
162
163 def _get_device_cupy_or_numpy(device_spec):
164 # legacy spec of (gpu) device
165 if device_spec >= 0:
166 return cuda.GpuDevice.from_device_id(device_spec)
167 else:
168 return _cpu.CpuDevice()
169
170
171 def using_device(device_spec):
172 """Context manager to apply the thread-local device state.
173
174 Args:
175 device_spec (object): Device specifier. See :func:`chainer.get_device`
176 for details.
177
178 .. admonition:: Example
179
180 .. testcode::
181 :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)
182
183 with chainer.using_device('@cupy:1'):
184 a = cupy.empty((3, 2))
185
186 assert a.device.id == 1
187
188 """
189
190 # TODO(niboshi): Set default device (once this concept is introduced in
191 # Chainer).
192 device = get_device(device_spec)
193 return device.create_context()
194
195
196 def get_array_module(*args):
197 """Gets an appropriate NumPy-compatible module to process arguments
198
199 This function will return their data arrays' array module for
200 :class:`~chainer.Variable` arguments.
201
202 Args:
203 args: Values to determine whether NumPy, CuPy, or ChainerX should be
204 used.
205
206 Returns:
207 module: :mod:`numpy`, :mod:`cupy`, or :mod:`chainerx` is returned based
208 on the types of the arguments.
209
210 """
211 is_chainerx_available = chainerx.is_available()
212 if is_chainerx_available or cuda.available:
213 arrays = []
214 for arg in args:
215 # Unwrap arrays
216 if isinstance(arg, chainer.variable.Variable):
217 array = arg.data
218 else:
219 array = arg
220 if is_chainerx_available and isinstance(array, chainerx.ndarray):
221 return chainerx
222 arrays.append(array)
223 if cuda.available:
224 return cuda.cupy.get_array_module(*arrays)
225 return numpy
226
227
228 def get_device_from_array(*arrays):
229 """Gets the device from arrays.
230
231 The device on which the given array reside is returned.
232
233 .. note::
234
235 Unlike :func:`get_array_module`, this method does not recognize
236 :class:`~chainer.Variable` objects.
237 If you need to get device from the :class:`~chainer.Variable` instance
238 ``v``, you need to use ``get_device_from_array(v.array)``.
239
240 Args:
241 arrays (array or list of arrays):
242 Arrays to determine the device. If multiple arrays are given, the
243 device correspoinding to the first array which is not NumPy array
244 is returned.
245
246 Returns:
247 chainer.backend.Device: Device instance.
248 """
249 for array in arrays:
250 device = GpuDevice.from_array(array)
251 if device is not None:
252 return device
253
254 if isinstance(array, chainerx.ndarray):
255 return ChainerxDevice(array.device)
256
257 device = Intel64Device.from_array(array)
258 if device is not None:
259 return device
260
261 return CpuDevice()
262
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/backend.py b/chainer/backend.py
--- a/chainer/backend.py
+++ b/chainer/backend.py
@@ -154,7 +154,16 @@
if not colon:
return intel64.Intel64Device()
- elif chainerx.is_available():
+ else:
+ # String device specifier without '@' prefix is assumed to be a
+ # ChainerX device.
+ if not chainerx.is_available():
+ raise RuntimeError(
+ 'Tried to parse ChainerX device specifier \'{}\', '
+ 'but ChainerX is not available. '
+ 'Note that device specifiers without \'@\' prefix are '
+ 'assumed to be ChainerX device '
+ 'specifiers.'.format(device_spec))
return _chainerx.ChainerxDevice(chainerx.get_device(device_spec))
raise ValueError('Invalid device specifier: {}'.format(device_spec))
| {"golden_diff": "diff --git a/chainer/backend.py b/chainer/backend.py\n--- a/chainer/backend.py\n+++ b/chainer/backend.py\n@@ -154,7 +154,16 @@\n if not colon:\n return intel64.Intel64Device()\n \n- elif chainerx.is_available():\n+ else:\n+ # String device specifier without '@' prefix is assumed to be a\n+ # ChainerX device.\n+ if not chainerx.is_available():\n+ raise RuntimeError(\n+ 'Tried to parse ChainerX device specifier \\'{}\\', '\n+ 'but ChainerX is not available. '\n+ 'Note that device specifiers without \\'@\\' prefix are '\n+ 'assumed to be ChainerX device '\n+ 'specifiers.'.format(device_spec))\n return _chainerx.ChainerxDevice(chainerx.get_device(device_spec))\n \n raise ValueError('Invalid device specifier: {}'.format(device_spec))\n", "issue": "Improve error message when ChainerX is unavailable\nWhen a user attempt to use `cuda:0` device, ChainerX says the device specifier is invalid if `chainerx.is_available()` is `False`. It seems a bit difficult to deduce the actual problem from the message.\r\n\r\n```\r\n$ python train_mnist.py -d cuda:0\r\nTraceback (most recent call last):\r\n File \"train_mnist.py\", line 134, in <module>\r\n main()\r\n File \"train_mnist.py\", line 56, in main\r\n device = chainer.get_device(args.device)\r\n File \"/path/to/chainer/chainer/backend.py\", line 157, in get_device\r\n raise ValueError('Invalid device specifier: {}'.format(device_spec))\r\nValueError: Invalid device specifier: cuda:0\r\n```\n", "before_files": [{"content": "import numpy\nimport six\n\nimport chainer\nfrom chainer.backends import _chainerx\nfrom chainer.backends import _cpu\nfrom chainer.backends import cuda\nfrom chainer.backends import intel64\nimport chainerx\n\n# Aliases\nfrom chainer._backend import Device\nfrom chainer.backends._chainerx import ChainerxDevice\nfrom chainer.backends._chainerx import from_chx # NOQA\nfrom chainer.backends._chainerx import to_chx # NOQA\nfrom chainer.backends._cpu import CpuDevice\nfrom chainer.backends.cuda import GpuDevice\nfrom chainer.backends.intel64 import Intel64Device\nfrom chainer import types # NOQA\n\n\ndef _contains_nan(x):\n \"\"\"Returns whether the input array has NaN values.\n\n Args:\n x (numpy.ndarray or cupy.ndarray): Array to be checked.\n\n Returns:\n bool: True if the input has NaN values.\n\n \"\"\"\n if x.dtype.kind in ('f', 'c'):\n device = get_device_from_array(x)\n with chainer.using_device(device):\n return device.xp.isnan(x).any()\n else:\n return False\n\n\ndef copyto(dst, src):\n \"\"\"Copies the elements of an ndarray to those of another one.\n\n This function can copy the CPU/GPU arrays to the destination arrays on\n another device.\n\n Args:\n dst (`numpy.ndarray`, `cupy.ndarray` or `ideep4py.mdarray`):\n Destination array.\n src (`numpy.ndarray`, `cupy.ndarray` or `ideep4py.mdarray`):\n Source array.\n\n \"\"\"\n if isinstance(dst, chainerx.ndarray):\n dst[...] 
= _chainerx._array_to_chainerx(src, dst.device)\n elif isinstance(dst, numpy.ndarray):\n numpy.copyto(dst, _cpu._to_cpu(src))\n elif isinstance(dst, intel64.mdarray):\n intel64.ideep.basic_copyto(\n dst, _cpu._to_cpu(src))\n elif isinstance(dst, cuda.ndarray):\n if isinstance(src, chainer.get_cpu_array_types()):\n src = numpy.asarray(src)\n if dst.flags.c_contiguous or dst.flags.f_contiguous:\n dst.set(src)\n else:\n cuda.cupy.copyto(dst, cuda.to_gpu(src, device=dst.device))\n elif isinstance(src, cuda.ndarray):\n cuda.cupy.copyto(dst, src)\n else:\n raise TypeError('cannot copy from non-array object of type {}'\n .format(type(src)))\n else:\n raise TypeError('cannot copy to non-array object of type {}'.format(\n type(dst)))\n\n\ndef _guess_device_from_array_module(xp):\n \"\"\"Returns a plausible device from array module\n\n .. warning::\n\n There can be multiple devices for a module\n\n \"\"\"\n if xp is cuda.cupy:\n return cuda.GpuDevice(cuda.Device())\n elif xp is chainerx:\n return _chainerx.ChainerxDevice(chainerx.get_default_device())\n else:\n # Cannot detect intel64, because xp of intel64 is numpy.\n return _cpu.CpuDevice()\n\n\ndef get_device(device_spec):\n # type: (types.DeviceSpec) -> Device\n \"\"\"Returns a device object.\n\n Args:\n device_spec (object): Device specifier.\n If a :class:`chainer.backend.Device` instance is given, it is\n returned intact. Otherwise the following values are supported:\n\n * ChainerX devices\n\n * A string representing a device.\n (ex. ``'native:0'``, ``'native'``)\n * A :class:`chainerx.Device` object.\n\n * CuPy\n\n * A string starts with ``'@cupy:'``.\n (ex. ``'@cupy:0'``)\n * A :class:`cupy.cuda.Device` object.\n\n * NumPy\n\n * The string ``'@numpy'``.\n\n * NumPy with Intel Architecture\n\n * The string ``'@intel64'``.\n \"\"\"\n if isinstance(device_spec, Device):\n return device_spec\n\n if isinstance(device_spec, cuda._integer_types):\n return _get_device_cupy_or_numpy(device_spec)\n\n if chainerx.is_available() and isinstance(device_spec, chainerx.Device):\n return _chainerx.ChainerxDevice(device_spec)\n\n if cuda.available and isinstance(device_spec, cuda.Device):\n return cuda.GpuDevice(device_spec)\n\n if isinstance(device_spec, six.string_types):\n # '-1', '0', '1', ...\n try:\n int_device_spec = int(device_spec)\n except ValueError:\n pass\n else:\n return _get_device_cupy_or_numpy(int_device_spec)\n\n if device_spec.startswith('@'):\n # '@module:...'\n mod_name, colon, precise_spec = device_spec[1:].partition(':')\n if mod_name == 'numpy':\n if not colon:\n return _cpu.CpuDevice()\n elif mod_name == 'cupy':\n if colon:\n return cuda.GpuDevice.from_device_id(int(precise_spec))\n elif mod_name == 'intel64':\n if not colon:\n return intel64.Intel64Device()\n\n elif chainerx.is_available():\n return _chainerx.ChainerxDevice(chainerx.get_device(device_spec))\n\n raise ValueError('Invalid device specifier: {}'.format(device_spec))\n\n\ndef _get_device_cupy_or_numpy(device_spec):\n # legacy spec of (gpu) device\n if device_spec >= 0:\n return cuda.GpuDevice.from_device_id(device_spec)\n else:\n return _cpu.CpuDevice()\n\n\ndef using_device(device_spec):\n \"\"\"Context manager to apply the thread-local device state.\n\n Args:\n device_spec (object): Device specifier. See :func:`chainer.get_device`\n for details.\n\n .. admonition:: Example\n\n .. 
testcode::\n :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)\n\n with chainer.using_device('@cupy:1'):\n a = cupy.empty((3, 2))\n\n assert a.device.id == 1\n\n \"\"\"\n\n # TODO(niboshi): Set default device (once this concept is introduced in\n # Chainer).\n device = get_device(device_spec)\n return device.create_context()\n\n\ndef get_array_module(*args):\n \"\"\"Gets an appropriate NumPy-compatible module to process arguments\n\n This function will return their data arrays' array module for\n :class:`~chainer.Variable` arguments.\n\n Args:\n args: Values to determine whether NumPy, CuPy, or ChainerX should be\n used.\n\n Returns:\n module: :mod:`numpy`, :mod:`cupy`, or :mod:`chainerx` is returned based\n on the types of the arguments.\n\n \"\"\"\n is_chainerx_available = chainerx.is_available()\n if is_chainerx_available or cuda.available:\n arrays = []\n for arg in args:\n # Unwrap arrays\n if isinstance(arg, chainer.variable.Variable):\n array = arg.data\n else:\n array = arg\n if is_chainerx_available and isinstance(array, chainerx.ndarray):\n return chainerx\n arrays.append(array)\n if cuda.available:\n return cuda.cupy.get_array_module(*arrays)\n return numpy\n\n\ndef get_device_from_array(*arrays):\n \"\"\"Gets the device from arrays.\n\n The device on which the given array reside is returned.\n\n .. note::\n\n Unlike :func:`get_array_module`, this method does not recognize\n :class:`~chainer.Variable` objects.\n If you need to get device from the :class:`~chainer.Variable` instance\n ``v``, you need to use ``get_device_from_array(v.array)``.\n\n Args:\n arrays (array or list of arrays):\n Arrays to determine the device. If multiple arrays are given, the\n device correspoinding to the first array which is not NumPy array\n is returned.\n\n Returns:\n chainer.backend.Device: Device instance.\n \"\"\"\n for array in arrays:\n device = GpuDevice.from_array(array)\n if device is not None:\n return device\n\n if isinstance(array, chainerx.ndarray):\n return ChainerxDevice(array.device)\n\n device = Intel64Device.from_array(array)\n if device is not None:\n return device\n\n return CpuDevice()\n", "path": "chainer/backend.py"}], "after_files": [{"content": "import numpy\nimport six\n\nimport chainer\nfrom chainer.backends import _chainerx\nfrom chainer.backends import _cpu\nfrom chainer.backends import cuda\nfrom chainer.backends import intel64\nimport chainerx\n\n# Aliases\nfrom chainer._backend import Device\nfrom chainer.backends._chainerx import ChainerxDevice\nfrom chainer.backends._chainerx import from_chx # NOQA\nfrom chainer.backends._chainerx import to_chx # NOQA\nfrom chainer.backends._cpu import CpuDevice\nfrom chainer.backends.cuda import GpuDevice\nfrom chainer.backends.intel64 import Intel64Device\nfrom chainer import types # NOQA\n\n\ndef _contains_nan(x):\n \"\"\"Returns whether the input array has NaN values.\n\n Args:\n x (numpy.ndarray or cupy.ndarray): Array to be checked.\n\n Returns:\n bool: True if the input has NaN values.\n\n \"\"\"\n if x.dtype.kind in ('f', 'c'):\n device = get_device_from_array(x)\n with chainer.using_device(device):\n return device.xp.isnan(x).any()\n else:\n return False\n\n\ndef copyto(dst, src):\n \"\"\"Copies the elements of an ndarray to those of another one.\n\n This function can copy the CPU/GPU arrays to the destination arrays on\n another device.\n\n Args:\n dst (`numpy.ndarray`, `cupy.ndarray` or `ideep4py.mdarray`):\n Destination array.\n src (`numpy.ndarray`, `cupy.ndarray` or `ideep4py.mdarray`):\n Source array.\n\n 
\"\"\"\n if isinstance(dst, chainerx.ndarray):\n dst[...] = _chainerx._array_to_chainerx(src, dst.device)\n elif isinstance(dst, numpy.ndarray):\n numpy.copyto(dst, _cpu._to_cpu(src))\n elif isinstance(dst, intel64.mdarray):\n intel64.ideep.basic_copyto(\n dst, _cpu._to_cpu(src))\n elif isinstance(dst, cuda.ndarray):\n if isinstance(src, chainer.get_cpu_array_types()):\n src = numpy.asarray(src)\n if dst.flags.c_contiguous or dst.flags.f_contiguous:\n dst.set(src)\n else:\n cuda.cupy.copyto(dst, cuda.to_gpu(src, device=dst.device))\n elif isinstance(src, cuda.ndarray):\n cuda.cupy.copyto(dst, src)\n else:\n raise TypeError('cannot copy from non-array object of type {}'\n .format(type(src)))\n else:\n raise TypeError('cannot copy to non-array object of type {}'.format(\n type(dst)))\n\n\ndef _guess_device_from_array_module(xp):\n \"\"\"Returns a plausible device from array module\n\n .. warning::\n\n There can be multiple devices for a module\n\n \"\"\"\n if xp is cuda.cupy:\n return cuda.GpuDevice(cuda.Device())\n elif xp is chainerx:\n return _chainerx.ChainerxDevice(chainerx.get_default_device())\n else:\n # Cannot detect intel64, because xp of intel64 is numpy.\n return _cpu.CpuDevice()\n\n\ndef get_device(device_spec):\n # type: (types.DeviceSpec) -> Device\n \"\"\"Returns a device object.\n\n Args:\n device_spec (object): Device specifier.\n If a :class:`chainer.backend.Device` instance is given, it is\n returned intact. Otherwise the following values are supported:\n\n * ChainerX devices\n\n * A string representing a device.\n (ex. ``'native:0'``, ``'native'``)\n * A :class:`chainerx.Device` object.\n\n * CuPy\n\n * A string starts with ``'@cupy:'``.\n (ex. ``'@cupy:0'``)\n * A :class:`cupy.cuda.Device` object.\n\n * NumPy\n\n * The string ``'@numpy'``.\n\n * NumPy with Intel Architecture\n\n * The string ``'@intel64'``.\n \"\"\"\n if isinstance(device_spec, Device):\n return device_spec\n\n if isinstance(device_spec, cuda._integer_types):\n return _get_device_cupy_or_numpy(device_spec)\n\n if chainerx.is_available() and isinstance(device_spec, chainerx.Device):\n return _chainerx.ChainerxDevice(device_spec)\n\n if cuda.available and isinstance(device_spec, cuda.Device):\n return cuda.GpuDevice(device_spec)\n\n if isinstance(device_spec, six.string_types):\n # '-1', '0', '1', ...\n try:\n int_device_spec = int(device_spec)\n except ValueError:\n pass\n else:\n return _get_device_cupy_or_numpy(int_device_spec)\n\n if device_spec.startswith('@'):\n # '@module:...'\n mod_name, colon, precise_spec = device_spec[1:].partition(':')\n if mod_name == 'numpy':\n if not colon:\n return _cpu.CpuDevice()\n elif mod_name == 'cupy':\n if colon:\n return cuda.GpuDevice.from_device_id(int(precise_spec))\n elif mod_name == 'intel64':\n if not colon:\n return intel64.Intel64Device()\n\n else:\n # String device specifier without '@' prefix is assumed to be a\n # ChainerX device.\n if not chainerx.is_available():\n raise RuntimeError(\n 'Tried to parse ChainerX device specifier \\'{}\\', '\n 'but ChainerX is not available. 
'\n 'Note that device specifiers without \\'@\\' prefix are '\n 'assumed to be ChainerX device '\n 'specifiers.'.format(device_spec))\n return _chainerx.ChainerxDevice(chainerx.get_device(device_spec))\n\n raise ValueError('Invalid device specifier: {}'.format(device_spec))\n\n\ndef _get_device_cupy_or_numpy(device_spec):\n # legacy spec of (gpu) device\n if device_spec >= 0:\n return cuda.GpuDevice.from_device_id(device_spec)\n else:\n return _cpu.CpuDevice()\n\n\ndef using_device(device_spec):\n \"\"\"Context manager to apply the thread-local device state.\n\n Args:\n device_spec (object): Device specifier. See :func:`chainer.get_device`\n for details.\n\n .. admonition:: Example\n\n .. testcode::\n :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)\n\n with chainer.using_device('@cupy:1'):\n a = cupy.empty((3, 2))\n\n assert a.device.id == 1\n\n \"\"\"\n\n # TODO(niboshi): Set default device (once this concept is introduced in\n # Chainer).\n device = get_device(device_spec)\n return device.create_context()\n\n\ndef get_array_module(*args):\n \"\"\"Gets an appropriate NumPy-compatible module to process arguments\n\n This function will return their data arrays' array module for\n :class:`~chainer.Variable` arguments.\n\n Args:\n args: Values to determine whether NumPy, CuPy, or ChainerX should be\n used.\n\n Returns:\n module: :mod:`numpy`, :mod:`cupy`, or :mod:`chainerx` is returned based\n on the types of the arguments.\n\n \"\"\"\n is_chainerx_available = chainerx.is_available()\n if is_chainerx_available or cuda.available:\n arrays = []\n for arg in args:\n # Unwrap arrays\n if isinstance(arg, chainer.variable.Variable):\n array = arg.data\n else:\n array = arg\n if is_chainerx_available and isinstance(array, chainerx.ndarray):\n return chainerx\n arrays.append(array)\n if cuda.available:\n return cuda.cupy.get_array_module(*arrays)\n return numpy\n\n\ndef get_device_from_array(*arrays):\n \"\"\"Gets the device from arrays.\n\n The device on which the given array reside is returned.\n\n .. note::\n\n Unlike :func:`get_array_module`, this method does not recognize\n :class:`~chainer.Variable` objects.\n If you need to get device from the :class:`~chainer.Variable` instance\n ``v``, you need to use ``get_device_from_array(v.array)``.\n\n Args:\n arrays (array or list of arrays):\n Arrays to determine the device. If multiple arrays are given, the\n device correspoinding to the first array which is not NumPy array\n is returned.\n\n Returns:\n chainer.backend.Device: Device instance.\n \"\"\"\n for array in arrays:\n device = GpuDevice.from_array(array)\n if device is not None:\n return device\n\n if isinstance(array, chainerx.ndarray):\n return ChainerxDevice(array.device)\n\n device = Intel64Device.from_array(array)\n if device is not None:\n return device\n\n return CpuDevice()\n", "path": "chainer/backend.py"}]} | 2,972 | 209 |
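A small self-contained sketch of the behaviour this patch introduces, kept separate from the real `chainer.get_device` implementation: string specifiers without an `'@'` prefix are assumed to be ChainerX device specifiers, and a descriptive `RuntimeError` replaces the generic `ValueError` when ChainerX is unavailable. The helper name and its simplified return values are assumptions made for illustration only.

```python
def parse_device_spec(device_spec: str, chainerx_available: bool) -> str:
    # Simplified stand-in for the string-handling branch of chainer.get_device().
    if device_spec.startswith('@'):
        mod_name, _, precise_spec = device_spec[1:].partition(':')
        return '{} device {}'.format(mod_name, precise_spec or 0)
    if not chainerx_available:
        # The patched code raises this instead of "Invalid device specifier".
        raise RuntimeError(
            "Tried to parse ChainerX device specifier '{}', but ChainerX is "
            "not available. Note that device specifiers without '@' prefix "
            "are assumed to be ChainerX device specifiers.".format(device_spec)
        )
    return 'chainerx device {}'.format(device_spec)


try:
    parse_device_spec('cuda:0', chainerx_available=False)
except RuntimeError as err:
    assert 'ChainerX is not available' in str(err)
```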
gh_patches_debug_34299 | rasdani/github-patches | git_diff | quantumlib__Cirq-4642 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
importlib.abc in Python 3.10
**Description of the issue**
In Python 3.10.0, the command `import cirq` fails with the error:
```
class InstrumentedFinder(importlib.abc.MetaPathFinder):
AttributeError: module 'importlib' has no attribute 'abc'. Did you mean: '_abc'?
```
**Workaround**
If one imports `importlib.abc` prior to importing cirq, no error occurs:
```python
from importlib import abc
import cirq
```
**Suggestion**
Probably you should add `from importlib import abc` somewhere in Cirq's code.
Searching on Google, I've found a similar issue in another project: [grpc/issues/26062](https://github.com/grpc/grpc/issues/26062)
**Cirq version**
0.13.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cirq-core/cirq/_import.py`
Content:
```
1 # Copyright 2019 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Any, Callable, List, Optional
16
17 from contextlib import contextmanager
18 import importlib
19 import sys
20
21 # Bug workaround: https://github.com/python/mypy/issues/1498
22 ModuleType = Any
23
24
25 class InstrumentedFinder(importlib.abc.MetaPathFinder):
26 """A module finder used to hook the python import statement."""
27
28 def __init__(
29 self,
30 finder: Any,
31 module_name: str,
32 wrap_module: Callable[[ModuleType], Optional[ModuleType]],
33 after_exec: Callable[[ModuleType], None],
34 ):
35 """A module finder that uses an existing module finder to find a python
36 module spec and intercept the execution of matching modules.
37
38 Replace finders in `sys.meta_path` with instances of this class to
39 instrument import statements.
40
41 Args:
42 finder: The original module finder to wrap.
43 module_name: The fully qualified module name to instrument e.g.
44 `'pkg.submodule'`. Submodules of this are also instrumented.
45 wrap_module: A callback function that takes a module object before
46 it is run and either modifies or replaces it before it is run.
47 The module returned by this function will be executed. If None
48 is returned the module is not executed and may be executed
49 later.
50 after_exec: A callback function that is called with the return value
51 of `wrap_module` after that module was executed if `wrap_module`
52 didn't return None.
53 """
54
55 self.finder = finder
56 self.module_name = module_name
57 self.match_components: List[str] = []
58 if self.module_name:
59 self.match_components = self.module_name.split('.')
60 self.wrap_module = wrap_module
61 self.after_exec = after_exec
62
63 def find_spec(self, fullname: str, path: Any = None, target: Any = None) -> Any:
64 components = fullname.split('.')
65 spec = self.finder.find_spec(fullname, path=path, target=target)
66 if spec is None:
67 return None
68 if components[: len(self.match_components)] == self.match_components:
69 spec = self.wrap_spec(spec)
70 return spec
71
72 def wrap_spec(self, spec: Any) -> Any:
73 spec.loader = InstrumentedLoader(spec.loader, self.wrap_module, self.after_exec)
74 return spec
75
76
77 class InstrumentedLoader(importlib.abc.Loader):
78 """A module loader used to hook the python import statement."""
79
80 def __init__(
81 self,
82 loader: Any,
83 wrap_module: Callable[[ModuleType], Optional[ModuleType]],
84 after_exec: Callable[[ModuleType], None],
85 ):
86 """A module loader that uses an existing module loader and intercepts
87 the execution of a module.
88
89 Use `InstrumentedFinder` to instrument modules with instances of this
90 class.
91
92 Args:
93 loader: The original module loader to wrap.
94 module_name: The fully qualified module name to instrument e.g.
95 `'pkg.submodule'`. Submodules of this are also instrumented.
96 wrap_module: A callback function that takes a module object before
97 it is run and either modifies or replaces it before it is run.
98 The module returned by this function will be executed. If None
99 is returned the module is not executed and may be executed
100 later.
101 after_exec: A callback function that is called with the return value
102 of `wrap_module` after that module was executed if `wrap_module`
103 didn't return None.
104 """
105 self.loader = loader
106 self.wrap_module = wrap_module
107 self.after_exec = after_exec
108
109 def create_module(self, spec: ModuleType) -> ModuleType:
110 return self.loader.create_module(spec)
111
112 def exec_module(self, module: ModuleType) -> None:
113 module = self.wrap_module(module)
114 if module is not None:
115 self.loader.exec_module(module)
116 self.after_exec(module)
117
118
119 @contextmanager
120 def wrap_module_executions(
121 module_name: str,
122 wrap_func: Callable[[ModuleType], Optional[ModuleType]],
123 after_exec: Callable[[ModuleType], None] = lambda m: None,
124 assert_meta_path_unchanged: bool = True,
125 ):
126 """A context manager that hooks python's import machinery within the
127 context.
128
129 `wrap_func` is called before executing the module called `module_name` and
130 any of its submodules. The module returned by `wrap_func` will be executed.
131 """
132
133 def wrap(finder: Any) -> Any:
134 if not hasattr(finder, 'find_spec'):
135 return finder
136 return InstrumentedFinder(finder, module_name, wrap_func, after_exec)
137
138 new_meta_path = [wrap(finder) for finder in sys.meta_path]
139
140 try:
141 orig_meta_path, sys.meta_path = sys.meta_path, new_meta_path
142 yield
143 finally:
144 if assert_meta_path_unchanged:
145 assert sys.meta_path == new_meta_path
146 sys.meta_path = orig_meta_path
147
148
149 @contextmanager
150 def delay_import(module_name: str):
151 """A context manager that allows the module or submodule named `module_name`
152 to be imported without the contents of the module executing until the
153 context manager exits.
154 """
155 delay = True
156 execute_list = []
157
158 def wrap_func(module: ModuleType) -> Optional[ModuleType]:
159 if delay:
160 execute_list.append(module)
161 return None # Don't allow the module to be executed yet
162 return module # Now allow the module to be executed
163
164 with wrap_module_executions(module_name, wrap_func):
165 importlib.import_module(module_name)
166
167 yield # Run the body of the context
168
169 delay = False
170 for module in execute_list:
171 module.__loader__.exec_module(module) # Calls back into wrap_func
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cirq-core/cirq/_import.py b/cirq-core/cirq/_import.py
--- a/cirq-core/cirq/_import.py
+++ b/cirq-core/cirq/_import.py
@@ -12,17 +12,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from typing import Any, Callable, List, Optional
+from typing import Any, Callable, cast, List, Optional
+from types import ModuleType
+from importlib.machinery import ModuleSpec
+from importlib.abc import Loader
from contextlib import contextmanager
import importlib
+from importlib import abc
import sys
-# Bug workaround: https://github.com/python/mypy/issues/1498
-ModuleType = Any
-
-class InstrumentedFinder(importlib.abc.MetaPathFinder):
+class InstrumentedFinder(abc.MetaPathFinder):
"""A module finder used to hook the python import statement."""
def __init__(
@@ -74,7 +75,7 @@
return spec
-class InstrumentedLoader(importlib.abc.Loader):
+class InstrumentedLoader(abc.Loader):
"""A module loader used to hook the python import statement."""
def __init__(
@@ -106,12 +107,12 @@
self.wrap_module = wrap_module
self.after_exec = after_exec
- def create_module(self, spec: ModuleType) -> ModuleType:
+ def create_module(self, spec: ModuleSpec) -> ModuleType:
return self.loader.create_module(spec)
def exec_module(self, module: ModuleType) -> None:
- module = self.wrap_module(module)
- if module is not None:
+ wrapped_module = self.wrap_module(module)
+ if wrapped_module is not None:
self.loader.exec_module(module)
self.after_exec(module)
@@ -168,4 +169,5 @@
delay = False
for module in execute_list:
- module.__loader__.exec_module(module) # Calls back into wrap_func
+ if module.__loader__ is not None and hasattr(module.__loader__, 'exec_module'):
+ cast(Loader, module.__loader__).exec_module(module) # Calls back into wrap_func
| {"golden_diff": "diff --git a/cirq-core/cirq/_import.py b/cirq-core/cirq/_import.py\n--- a/cirq-core/cirq/_import.py\n+++ b/cirq-core/cirq/_import.py\n@@ -12,17 +12,18 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-from typing import Any, Callable, List, Optional\n+from typing import Any, Callable, cast, List, Optional\n+from types import ModuleType\n+from importlib.machinery import ModuleSpec\n+from importlib.abc import Loader\n \n from contextlib import contextmanager\n import importlib\n+from importlib import abc\n import sys\n \n-# Bug workaround: https://github.com/python/mypy/issues/1498\n-ModuleType = Any\n \n-\n-class InstrumentedFinder(importlib.abc.MetaPathFinder):\n+class InstrumentedFinder(abc.MetaPathFinder):\n \"\"\"A module finder used to hook the python import statement.\"\"\"\n \n def __init__(\n@@ -74,7 +75,7 @@\n return spec\n \n \n-class InstrumentedLoader(importlib.abc.Loader):\n+class InstrumentedLoader(abc.Loader):\n \"\"\"A module loader used to hook the python import statement.\"\"\"\n \n def __init__(\n@@ -106,12 +107,12 @@\n self.wrap_module = wrap_module\n self.after_exec = after_exec\n \n- def create_module(self, spec: ModuleType) -> ModuleType:\n+ def create_module(self, spec: ModuleSpec) -> ModuleType:\n return self.loader.create_module(spec)\n \n def exec_module(self, module: ModuleType) -> None:\n- module = self.wrap_module(module)\n- if module is not None:\n+ wrapped_module = self.wrap_module(module)\n+ if wrapped_module is not None:\n self.loader.exec_module(module)\n self.after_exec(module)\n \n@@ -168,4 +169,5 @@\n \n delay = False\n for module in execute_list:\n- module.__loader__.exec_module(module) # Calls back into wrap_func\n+ if module.__loader__ is not None and hasattr(module.__loader__, 'exec_module'):\n+ cast(Loader, module.__loader__).exec_module(module) # Calls back into wrap_func\n", "issue": "importlib.abc in Python 3.10\n**Description of the issue**\r\n\r\nIn Python 3.10.0, the command `import cirq` fails with the error:\r\n\r\n```\r\nclass InstrumentedFinder(importlib.abc.MetaPathFinder):\r\nAttributeError: module 'importlib' has no attribute 'abc'. Did you mean: '_abc'? 
\r\n```\r\n\r\n**Workaround**\r\n\r\nIf one imports `importlib.abc` prior to importing cirq, no error occurs:\r\n\r\n```python\r\nfrom importlib import abc\r\nimport cirq\r\n```\r\n\r\n**Suggestion**\r\n\r\nProbably you should add `from importlib import abc` somewhere in the \u0421irq's code.\r\n\r\nSearching on Google, I've found a similar issue in another project: [grpc/issues/26062](https://github.com/grpc/grpc/issues/26062)\r\n\r\n**Cirq version**\r\n0.13.1\r\n\r\n\n", "before_files": [{"content": "# Copyright 2019 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Any, Callable, List, Optional\n\nfrom contextlib import contextmanager\nimport importlib\nimport sys\n\n# Bug workaround: https://github.com/python/mypy/issues/1498\nModuleType = Any\n\n\nclass InstrumentedFinder(importlib.abc.MetaPathFinder):\n \"\"\"A module finder used to hook the python import statement.\"\"\"\n\n def __init__(\n self,\n finder: Any,\n module_name: str,\n wrap_module: Callable[[ModuleType], Optional[ModuleType]],\n after_exec: Callable[[ModuleType], None],\n ):\n \"\"\"A module finder that uses an existing module finder to find a python\n module spec and intercept the execution of matching modules.\n\n Replace finders in `sys.meta_path` with instances of this class to\n instrument import statements.\n\n Args:\n finder: The original module finder to wrap.\n module_name: The fully qualified module name to instrument e.g.\n `'pkg.submodule'`. Submodules of this are also instrumented.\n wrap_module: A callback function that takes a module object before\n it is run and either modifies or replaces it before it is run.\n The module returned by this function will be executed. 
If None\n is returned the module is not executed and may be executed\n later.\n after_exec: A callback function that is called with the return value\n of `wrap_module` after that module was executed if `wrap_module`\n didn't return None.\n \"\"\"\n\n self.finder = finder\n self.module_name = module_name\n self.match_components: List[str] = []\n if self.module_name:\n self.match_components = self.module_name.split('.')\n self.wrap_module = wrap_module\n self.after_exec = after_exec\n\n def find_spec(self, fullname: str, path: Any = None, target: Any = None) -> Any:\n components = fullname.split('.')\n spec = self.finder.find_spec(fullname, path=path, target=target)\n if spec is None:\n return None\n if components[: len(self.match_components)] == self.match_components:\n spec = self.wrap_spec(spec)\n return spec\n\n def wrap_spec(self, spec: Any) -> Any:\n spec.loader = InstrumentedLoader(spec.loader, self.wrap_module, self.after_exec)\n return spec\n\n\nclass InstrumentedLoader(importlib.abc.Loader):\n \"\"\"A module loader used to hook the python import statement.\"\"\"\n\n def __init__(\n self,\n loader: Any,\n wrap_module: Callable[[ModuleType], Optional[ModuleType]],\n after_exec: Callable[[ModuleType], None],\n ):\n \"\"\"A module loader that uses an existing module loader and intercepts\n the execution of a module.\n\n Use `InstrumentedFinder` to instrument modules with instances of this\n class.\n\n Args:\n loader: The original module loader to wrap.\n module_name: The fully qualified module name to instrument e.g.\n `'pkg.submodule'`. Submodules of this are also instrumented.\n wrap_module: A callback function that takes a module object before\n it is run and either modifies or replaces it before it is run.\n The module returned by this function will be executed. If None\n is returned the module is not executed and may be executed\n later.\n after_exec: A callback function that is called with the return value\n of `wrap_module` after that module was executed if `wrap_module`\n didn't return None.\n \"\"\"\n self.loader = loader\n self.wrap_module = wrap_module\n self.after_exec = after_exec\n\n def create_module(self, spec: ModuleType) -> ModuleType:\n return self.loader.create_module(spec)\n\n def exec_module(self, module: ModuleType) -> None:\n module = self.wrap_module(module)\n if module is not None:\n self.loader.exec_module(module)\n self.after_exec(module)\n\n\n@contextmanager\ndef wrap_module_executions(\n module_name: str,\n wrap_func: Callable[[ModuleType], Optional[ModuleType]],\n after_exec: Callable[[ModuleType], None] = lambda m: None,\n assert_meta_path_unchanged: bool = True,\n):\n \"\"\"A context manager that hooks python's import machinery within the\n context.\n\n `wrap_func` is called before executing the module called `module_name` and\n any of its submodules. 
The module returned by `wrap_func` will be executed.\n \"\"\"\n\n def wrap(finder: Any) -> Any:\n if not hasattr(finder, 'find_spec'):\n return finder\n return InstrumentedFinder(finder, module_name, wrap_func, after_exec)\n\n new_meta_path = [wrap(finder) for finder in sys.meta_path]\n\n try:\n orig_meta_path, sys.meta_path = sys.meta_path, new_meta_path\n yield\n finally:\n if assert_meta_path_unchanged:\n assert sys.meta_path == new_meta_path\n sys.meta_path = orig_meta_path\n\n\n@contextmanager\ndef delay_import(module_name: str):\n \"\"\"A context manager that allows the module or submodule named `module_name`\n to be imported without the contents of the module executing until the\n context manager exits.\n \"\"\"\n delay = True\n execute_list = []\n\n def wrap_func(module: ModuleType) -> Optional[ModuleType]:\n if delay:\n execute_list.append(module)\n return None # Don't allow the module to be executed yet\n return module # Now allow the module to be executed\n\n with wrap_module_executions(module_name, wrap_func):\n importlib.import_module(module_name)\n\n yield # Run the body of the context\n\n delay = False\n for module in execute_list:\n module.__loader__.exec_module(module) # Calls back into wrap_func\n", "path": "cirq-core/cirq/_import.py"}], "after_files": [{"content": "# Copyright 2019 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Any, Callable, cast, List, Optional\nfrom types import ModuleType\nfrom importlib.machinery import ModuleSpec\nfrom importlib.abc import Loader\n\nfrom contextlib import contextmanager\nimport importlib\nfrom importlib import abc\nimport sys\n\n\nclass InstrumentedFinder(abc.MetaPathFinder):\n \"\"\"A module finder used to hook the python import statement.\"\"\"\n\n def __init__(\n self,\n finder: Any,\n module_name: str,\n wrap_module: Callable[[ModuleType], Optional[ModuleType]],\n after_exec: Callable[[ModuleType], None],\n ):\n \"\"\"A module finder that uses an existing module finder to find a python\n module spec and intercept the execution of matching modules.\n\n Replace finders in `sys.meta_path` with instances of this class to\n instrument import statements.\n\n Args:\n finder: The original module finder to wrap.\n module_name: The fully qualified module name to instrument e.g.\n `'pkg.submodule'`. Submodules of this are also instrumented.\n wrap_module: A callback function that takes a module object before\n it is run and either modifies or replaces it before it is run.\n The module returned by this function will be executed. 
If None\n is returned the module is not executed and may be executed\n later.\n after_exec: A callback function that is called with the return value\n of `wrap_module` after that module was executed if `wrap_module`\n didn't return None.\n \"\"\"\n\n self.finder = finder\n self.module_name = module_name\n self.match_components: List[str] = []\n if self.module_name:\n self.match_components = self.module_name.split('.')\n self.wrap_module = wrap_module\n self.after_exec = after_exec\n\n def find_spec(self, fullname: str, path: Any = None, target: Any = None) -> Any:\n components = fullname.split('.')\n spec = self.finder.find_spec(fullname, path=path, target=target)\n if spec is None:\n return None\n if components[: len(self.match_components)] == self.match_components:\n spec = self.wrap_spec(spec)\n return spec\n\n def wrap_spec(self, spec: Any) -> Any:\n spec.loader = InstrumentedLoader(spec.loader, self.wrap_module, self.after_exec)\n return spec\n\n\nclass InstrumentedLoader(abc.Loader):\n \"\"\"A module loader used to hook the python import statement.\"\"\"\n\n def __init__(\n self,\n loader: Any,\n wrap_module: Callable[[ModuleType], Optional[ModuleType]],\n after_exec: Callable[[ModuleType], None],\n ):\n \"\"\"A module loader that uses an existing module loader and intercepts\n the execution of a module.\n\n Use `InstrumentedFinder` to instrument modules with instances of this\n class.\n\n Args:\n loader: The original module loader to wrap.\n module_name: The fully qualified module name to instrument e.g.\n `'pkg.submodule'`. Submodules of this are also instrumented.\n wrap_module: A callback function that takes a module object before\n it is run and either modifies or replaces it before it is run.\n The module returned by this function will be executed. If None\n is returned the module is not executed and may be executed\n later.\n after_exec: A callback function that is called with the return value\n of `wrap_module` after that module was executed if `wrap_module`\n didn't return None.\n \"\"\"\n self.loader = loader\n self.wrap_module = wrap_module\n self.after_exec = after_exec\n\n def create_module(self, spec: ModuleSpec) -> ModuleType:\n return self.loader.create_module(spec)\n\n def exec_module(self, module: ModuleType) -> None:\n wrapped_module = self.wrap_module(module)\n if wrapped_module is not None:\n self.loader.exec_module(module)\n self.after_exec(module)\n\n\n@contextmanager\ndef wrap_module_executions(\n module_name: str,\n wrap_func: Callable[[ModuleType], Optional[ModuleType]],\n after_exec: Callable[[ModuleType], None] = lambda m: None,\n assert_meta_path_unchanged: bool = True,\n):\n \"\"\"A context manager that hooks python's import machinery within the\n context.\n\n `wrap_func` is called before executing the module called `module_name` and\n any of its submodules. 
The module returned by `wrap_func` will be executed.\n \"\"\"\n\n def wrap(finder: Any) -> Any:\n if not hasattr(finder, 'find_spec'):\n return finder\n return InstrumentedFinder(finder, module_name, wrap_func, after_exec)\n\n new_meta_path = [wrap(finder) for finder in sys.meta_path]\n\n try:\n orig_meta_path, sys.meta_path = sys.meta_path, new_meta_path\n yield\n finally:\n if assert_meta_path_unchanged:\n assert sys.meta_path == new_meta_path\n sys.meta_path = orig_meta_path\n\n\n@contextmanager\ndef delay_import(module_name: str):\n \"\"\"A context manager that allows the module or submodule named `module_name`\n to be imported without the contents of the module executing until the\n context manager exits.\n \"\"\"\n delay = True\n execute_list = []\n\n def wrap_func(module: ModuleType) -> Optional[ModuleType]:\n if delay:\n execute_list.append(module)\n return None # Don't allow the module to be executed yet\n return module # Now allow the module to be executed\n\n with wrap_module_executions(module_name, wrap_func):\n importlib.import_module(module_name)\n\n yield # Run the body of the context\n\n delay = False\n for module in execute_list:\n if module.__loader__ is not None and hasattr(module.__loader__, 'exec_module'):\n cast(Loader, module.__loader__).exec_module(module) # Calls back into wrap_func\n", "path": "cirq-core/cirq/_import.py"}]} | 2,249 | 496 |
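The fix in the record above comes down to the fact that `importlib.abc` is a submodule which `import importlib` alone does not reliably expose as an attribute (observed on Python 3.10), so the patch imports it explicitly via `from importlib import abc`. A minimal sketch of that import pattern, assuming only the standard library and not taken from the cirq codebase:

```python
# Minimal sketch of the explicit-submodule import used by the fix (assumption:
# standard library only; the finder below is illustrative, not cirq's).
from importlib import abc  # explicitly load the submodule and bind it locally
from importlib.machinery import ModuleSpec
from typing import Optional


class NoOpFinder(abc.MetaPathFinder):
    """Finder that never claims a module, so normal import resolution continues."""

    def find_spec(self, fullname: str, path=None, target=None) -> Optional[ModuleSpec]:
        return None  # defer to the remaining finders on sys.meta_path
```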
gh_patches_debug_24558 | rasdani/github-patches | git_diff | marshmallow-code__webargs-43 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pyramid parser use_kwargs throws exception when used
The following code using the pyramid parser throws an exception:
``` python
@parser.use_kwargs({'myvalue': Arg(int)})
def baz(request, myvalue):
return {'myvalue': myvalue}
```
The exception:
```
kwargs['as_kwargs'] = True
> return self.use_args(*args, **kwargs)
E TypeError: use_args() got an unexpected keyword argument 'as_kwargs'
```
Pyramid parser use_kwargs throws exception when used
The following code using the pyramid parser throws an exception:
``` python
@parser.use_kwargs({'myvalue': Arg(int)})
def baz(request, myvalue):
return {'myvalue': myvalue}
```
The exception:
```
kwargs['as_kwargs'] = True
> return self.use_args(*args, **kwargs)
E TypeError: use_args() got an unexpected keyword argument 'as_kwargs'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `webargs/pyramidparser.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Pyramid request argument parsing.
3
4 Example usage: ::
5
6 from wsgiref.simple_server import make_server
7 from pyramid.config import Configurator
8 from pyramid.response import Response
9 from webargs import Arg
10 from webargs.pyramidparser import use_args
11
12 hello_args = {
13 'name': Arg(str, default='World')
14 }
15
16 @use_args(hello_args)
17 def hello_world(request, args):
18 return Response('Hello ' + args['name'])
19
20 if __name__ == '__main__':
21 config = Configurator()
22 config.add_route('hello', '/')
23 config.add_view(hello_world, route_name='hello')
24 app = config.make_wsgi_app()
25 server = make_server('0.0.0.0', 6543, app)
26 server.serve_forever()
27 """
28 import functools
29 import logging
30
31 from webob.multidict import MultiDict
32 from pyramid.httpexceptions import exception_response
33
34 from webargs import core
35 from webargs.core import text_type
36
37 logger = logging.getLogger(__name__)
38
39 class PyramidParser(core.Parser):
40 """Pyramid request argument parser."""
41
42 def parse_querystring(self, req, name, arg):
43 """Pull a querystring value from the request."""
44 return core.get_value(req.GET, name, arg.multiple)
45
46 def parse_form(self, req, name, arg):
47 """Pull a form value from the request."""
48 return core.get_value(req.POST, name, arg.multiple)
49
50 def parse_json(self, req, name, arg):
51 """Pull a json value from the request."""
52 try:
53 json_data = req.json_body
54 except ValueError:
55 return core.Missing
56
57 return core.get_value(json_data, name, arg.multiple)
58
59 def parse_cookies(self, req, name, arg):
60 """Pull the value from the cookiejar."""
61 return core.get_value(req.cookies, name, arg.multiple)
62
63 def parse_headers(self, req, name, arg):
64 """Pull a value from the header data."""
65 return core.get_value(req.headers, name, arg.multiple)
66
67 def parse_files(self, req, name, arg):
68 """Pull a file from the request."""
69 files = ((k, v) for k, v in req.POST.items() if hasattr(v, 'file'))
70 return core.get_value(MultiDict(files), name, arg.multiple)
71
72 def handle_error(self, error):
73 """Handles errors during parsing. Aborts the current HTTP request and
74 responds with a 400 error.
75 """
76 logger.error(error)
77 status_code = getattr(error, 'status_code', 400)
78 data = getattr(error, 'data', {})
79 raise exception_response(status_code, detail=text_type(error), **data)
80
81 def use_args(self, argmap, req=None, locations=core.Parser.DEFAULT_LOCATIONS,
82 validate=None):
83 """Decorator that injects parsed arguments into a view callable.
84 Supports the *Class-based View* pattern where `request` is saved as an instance
85 attribute on a view class.
86
87 :param dict argmap: Dictionary of argument_name:Arg object pairs.
88 :param req: The request object to parse
89 :param tuple locations: Where on the request to search for values.
90 :param callable validate:
91 Validation function that receives the dictionary of parsed arguments.
92 If the function returns ``False``, the parser will raise a
93 :exc:`ValidationError`.
94 """
95 def decorator(func):
96 @functools.wraps(func)
97 def wrapper(obj, *args, **kwargs):
98 # The first argument is either `self` or `request`
99 try: # get self.request
100 request = obj.request
101 except AttributeError: # first arg is request
102 request = obj
103 parsed_args = self.parse(argmap, req=request, locations=locations,
104 validate=None)
105 return func(obj, parsed_args, *args, **kwargs)
106 return wrapper
107 return decorator
108
109 parser = PyramidParser()
110 use_args = parser.use_args
111 use_kwargs = parser.use_kwargs
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/webargs/pyramidparser.py b/webargs/pyramidparser.py
--- a/webargs/pyramidparser.py
+++ b/webargs/pyramidparser.py
@@ -79,7 +79,7 @@
raise exception_response(status_code, detail=text_type(error), **data)
def use_args(self, argmap, req=None, locations=core.Parser.DEFAULT_LOCATIONS,
- validate=None):
+ as_kwargs=False, validate=None):
"""Decorator that injects parsed arguments into a view callable.
Supports the *Class-based View* pattern where `request` is saved as an instance
attribute on a view class.
@@ -102,7 +102,11 @@
request = obj
parsed_args = self.parse(argmap, req=request, locations=locations,
validate=None)
- return func(obj, parsed_args, *args, **kwargs)
+ if as_kwargs:
+ kwargs.update(parsed_args)
+ return func(obj, *args, **kwargs)
+ else:
+ return func(obj, parsed_args, *args, **kwargs)
return wrapper
return decorator
| {"golden_diff": "diff --git a/webargs/pyramidparser.py b/webargs/pyramidparser.py\n--- a/webargs/pyramidparser.py\n+++ b/webargs/pyramidparser.py\n@@ -79,7 +79,7 @@\n raise exception_response(status_code, detail=text_type(error), **data)\n \n def use_args(self, argmap, req=None, locations=core.Parser.DEFAULT_LOCATIONS,\n- validate=None):\n+ as_kwargs=False, validate=None):\n \"\"\"Decorator that injects parsed arguments into a view callable.\n Supports the *Class-based View* pattern where `request` is saved as an instance\n attribute on a view class.\n@@ -102,7 +102,11 @@\n request = obj\n parsed_args = self.parse(argmap, req=request, locations=locations,\n validate=None)\n- return func(obj, parsed_args, *args, **kwargs)\n+ if as_kwargs:\n+ kwargs.update(parsed_args)\n+ return func(obj, *args, **kwargs)\n+ else:\n+ return func(obj, parsed_args, *args, **kwargs)\n return wrapper\n return decorator\n", "issue": "Pyramid parser use_kwargs throws exception when used\nThe following code using the pyramid parser throws an exception:\n\n``` python\[email protected]_kwargs({'myvalue': Arg(int)})\ndef baz(request, myvalue):\n return {'myvalue': myvalue}\n```\n\nThe exception:\n\n```\n kwargs['as_kwargs'] = True\n> return self.use_args(*args, **kwargs)\nE TypeError: use_args() got an unexpected keyword argument 'as_kwargs'\n```\n\nPyramid parser use_kwargs throws exception when used\nThe following code using the pyramid parser throws an exception:\n\n``` python\[email protected]_kwargs({'myvalue': Arg(int)})\ndef baz(request, myvalue):\n return {'myvalue': myvalue}\n```\n\nThe exception:\n\n```\n kwargs['as_kwargs'] = True\n> return self.use_args(*args, **kwargs)\nE TypeError: use_args() got an unexpected keyword argument 'as_kwargs'\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Pyramid request argument parsing.\n\nExample usage: ::\n\n from wsgiref.simple_server import make_server\n from pyramid.config import Configurator\n from pyramid.response import Response\n from webargs import Arg\n from webargs.pyramidparser import use_args\n\n hello_args = {\n 'name': Arg(str, default='World')\n }\n\n @use_args(hello_args)\n def hello_world(request, args):\n return Response('Hello ' + args['name'])\n\n if __name__ == '__main__':\n config = Configurator()\n config.add_route('hello', '/')\n config.add_view(hello_world, route_name='hello')\n app = config.make_wsgi_app()\n server = make_server('0.0.0.0', 6543, app)\n server.serve_forever()\n\"\"\"\nimport functools\nimport logging\n\nfrom webob.multidict import MultiDict\nfrom pyramid.httpexceptions import exception_response\n\nfrom webargs import core\nfrom webargs.core import text_type\n\nlogger = logging.getLogger(__name__)\n\nclass PyramidParser(core.Parser):\n \"\"\"Pyramid request argument parser.\"\"\"\n\n def parse_querystring(self, req, name, arg):\n \"\"\"Pull a querystring value from the request.\"\"\"\n return core.get_value(req.GET, name, arg.multiple)\n\n def parse_form(self, req, name, arg):\n \"\"\"Pull a form value from the request.\"\"\"\n return core.get_value(req.POST, name, arg.multiple)\n\n def parse_json(self, req, name, arg):\n \"\"\"Pull a json value from the request.\"\"\"\n try:\n json_data = req.json_body\n except ValueError:\n return core.Missing\n\n return core.get_value(json_data, name, arg.multiple)\n\n def parse_cookies(self, req, name, arg):\n \"\"\"Pull the value from the cookiejar.\"\"\"\n return core.get_value(req.cookies, name, arg.multiple)\n\n def parse_headers(self, req, name, arg):\n 
\"\"\"Pull a value from the header data.\"\"\"\n return core.get_value(req.headers, name, arg.multiple)\n\n def parse_files(self, req, name, arg):\n \"\"\"Pull a file from the request.\"\"\"\n files = ((k, v) for k, v in req.POST.items() if hasattr(v, 'file'))\n return core.get_value(MultiDict(files), name, arg.multiple)\n\n def handle_error(self, error):\n \"\"\"Handles errors during parsing. Aborts the current HTTP request and\n responds with a 400 error.\n \"\"\"\n logger.error(error)\n status_code = getattr(error, 'status_code', 400)\n data = getattr(error, 'data', {})\n raise exception_response(status_code, detail=text_type(error), **data)\n\n def use_args(self, argmap, req=None, locations=core.Parser.DEFAULT_LOCATIONS,\n validate=None):\n \"\"\"Decorator that injects parsed arguments into a view callable.\n Supports the *Class-based View* pattern where `request` is saved as an instance\n attribute on a view class.\n\n :param dict argmap: Dictionary of argument_name:Arg object pairs.\n :param req: The request object to parse\n :param tuple locations: Where on the request to search for values.\n :param callable validate:\n Validation function that receives the dictionary of parsed arguments.\n If the function returns ``False``, the parser will raise a\n :exc:`ValidationError`.\n \"\"\"\n def decorator(func):\n @functools.wraps(func)\n def wrapper(obj, *args, **kwargs):\n # The first argument is either `self` or `request`\n try: # get self.request\n request = obj.request\n except AttributeError: # first arg is request\n request = obj\n parsed_args = self.parse(argmap, req=request, locations=locations,\n validate=None)\n return func(obj, parsed_args, *args, **kwargs)\n return wrapper\n return decorator\n\nparser = PyramidParser()\nuse_args = parser.use_args\nuse_kwargs = parser.use_kwargs\n", "path": "webargs/pyramidparser.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Pyramid request argument parsing.\n\nExample usage: ::\n\n from wsgiref.simple_server import make_server\n from pyramid.config import Configurator\n from pyramid.response import Response\n from webargs import Arg\n from webargs.pyramidparser import use_args\n\n hello_args = {\n 'name': Arg(str, default='World')\n }\n\n @use_args(hello_args)\n def hello_world(request, args):\n return Response('Hello ' + args['name'])\n\n if __name__ == '__main__':\n config = Configurator()\n config.add_route('hello', '/')\n config.add_view(hello_world, route_name='hello')\n app = config.make_wsgi_app()\n server = make_server('0.0.0.0', 6543, app)\n server.serve_forever()\n\"\"\"\nimport functools\nimport logging\n\nfrom webob.multidict import MultiDict\nfrom pyramid.httpexceptions import exception_response\n\nfrom webargs import core\nfrom webargs.core import text_type\n\nlogger = logging.getLogger(__name__)\n\nclass PyramidParser(core.Parser):\n \"\"\"Pyramid request argument parser.\"\"\"\n\n def parse_querystring(self, req, name, arg):\n \"\"\"Pull a querystring value from the request.\"\"\"\n return core.get_value(req.GET, name, arg.multiple)\n\n def parse_form(self, req, name, arg):\n \"\"\"Pull a form value from the request.\"\"\"\n return core.get_value(req.POST, name, arg.multiple)\n\n def parse_json(self, req, name, arg):\n \"\"\"Pull a json value from the request.\"\"\"\n try:\n json_data = req.json_body\n except ValueError:\n return core.Missing\n\n return core.get_value(json_data, name, arg.multiple)\n\n def parse_cookies(self, req, name, arg):\n \"\"\"Pull the value from the cookiejar.\"\"\"\n return 
core.get_value(req.cookies, name, arg.multiple)\n\n def parse_headers(self, req, name, arg):\n \"\"\"Pull a value from the header data.\"\"\"\n return core.get_value(req.headers, name, arg.multiple)\n\n def parse_files(self, req, name, arg):\n \"\"\"Pull a file from the request.\"\"\"\n files = ((k, v) for k, v in req.POST.items() if hasattr(v, 'file'))\n return core.get_value(MultiDict(files), name, arg.multiple)\n\n def handle_error(self, error):\n \"\"\"Handles errors during parsing. Aborts the current HTTP request and\n responds with a 400 error.\n \"\"\"\n logger.error(error)\n status_code = getattr(error, 'status_code', 400)\n data = getattr(error, 'data', {})\n raise exception_response(status_code, detail=text_type(error), **data)\n\n def use_args(self, argmap, req=None, locations=core.Parser.DEFAULT_LOCATIONS,\n as_kwargs=False, validate=None):\n \"\"\"Decorator that injects parsed arguments into a view callable.\n Supports the *Class-based View* pattern where `request` is saved as an instance\n attribute on a view class.\n\n :param dict argmap: Dictionary of argument_name:Arg object pairs.\n :param req: The request object to parse\n :param tuple locations: Where on the request to search for values.\n :param callable validate:\n Validation function that receives the dictionary of parsed arguments.\n If the function returns ``False``, the parser will raise a\n :exc:`ValidationError`.\n \"\"\"\n def decorator(func):\n @functools.wraps(func)\n def wrapper(obj, *args, **kwargs):\n # The first argument is either `self` or `request`\n try: # get self.request\n request = obj.request\n except AttributeError: # first arg is request\n request = obj\n parsed_args = self.parse(argmap, req=request, locations=locations,\n validate=None)\n if as_kwargs:\n kwargs.update(parsed_args)\n return func(obj, *args, **kwargs)\n else:\n return func(obj, parsed_args, *args, **kwargs)\n return wrapper\n return decorator\n\nparser = PyramidParser()\nuse_args = parser.use_args\nuse_kwargs = parser.use_kwargs\n", "path": "webargs/pyramidparser.py"}]} | 1,565 | 244 |
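With the patch above, the Pyramid parser's `use_args` accepts the `as_kwargs` flag that `use_kwargs` passes, so parsed arguments are merged into the view's keyword arguments. A usage sketch based on the snippet from the issue (the imports shown are assumptions about the surrounding module, not part of the record):

```python
# Sketch of the issue's snippet working against the patched parser; the imports
# are assumed from the webargs API shown in the record above.
from webargs import Arg
from webargs.pyramidparser import parser


@parser.use_kwargs({'myvalue': Arg(int)})
def baz(request, myvalue):
    # With as_kwargs=True, the parsed value arrives as the `myvalue` keyword
    # argument rather than as a single positional dict of parsed args.
    return {'myvalue': myvalue}
```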
gh_patches_debug_2647 | rasdani/github-patches | git_diff | dj-stripe__dj-stripe-1312 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issue when attempting to sync tiered Price Model in 2.4.2
**Describe the bug**
It looks like 9bd896ffd944e809b95abae884a2149dc8a79f27 introduced a regression when syncing a tiered Price model. Price is probably not the only model affected.
Check out this trace:
```
$ ./manage.py djstripe_sync_models Price
Syncing Price:
INFO stripe.log_info:64- message='Request to Stripe api' method=get path=https://api.stripe.com/v1/prices?expand[0]=data.tiers
INFO stripe.log_info:64- message='Stripe API response' path=https://api.stripe.com/v1/prices?expand[0]=data.tiers response_code=200
id=price_1IFltoFz0jfFqjGsm5fbXWt5, pk=6 (xxx)
id=price_1IFe29Fz0jfFqjGsTpBrPQql, pk=1 (xxx)
id=price_1IFe29Fz0jfFqjGslZM7rvu1, pk=2 (xxx)
id=price_1IFe28Fz0jfFqjGsM0SIOAa6, pk=3 (xxx)
id=price_1IFe27Fz0jfFqjGsEN4c0MxR, pk=4 (xxx)
id=price_1IFe23Fz0jfFqjGsbFrlPDSi, pk=5 (xxx)
INFO stripe.log_info:64- message='Request to Stripe api' method=get path=https://api.stripe.com/v1/prices
INFO stripe.log_info:64- message='Stripe API response' path=https://api.stripe.com/v1/prices response_code=200
id=price_1IFltoFz0jfFqjGsm5fbXWt5, pk=6 (xxx)
id=price_1IFe29Fz0jfFqjGsTpBrPQql, pk=1 (xxx)
id=price_1IFe29Fz0jfFqjGslZM7rvu1, pk=2 (xxx)
id=price_1IFe28Fz0jfFqjGsM0SIOAa6, pk=3 (xxx)
id=price_1IFe27Fz0jfFqjGsEN4c0MxR, pk=4 (xxx)
id=price_1IFe23Fz0jfFqjGsbFrlPDSi, pk=5 (xxx)
Synced 12 Price
```
The Price objects are synced twice: the first time with the tiers attribute expanded, and the second time without expanding it, which overwrites the first result so the final object doesn't include tiers.
**Software versions**
- dj-stripe version: 2.4.2
- Python version: 3.7
- Django version: 3.0.11
- Stripe API version: 2.55
- Database type and version: postgresql 10.10
**Steps To Reproduce**
1. Create tiered Price and add tiers in Stripe Dashboard
2. Sync Price models with manage command
**Can you reproduce the issue with the latest version of master?**
Yes, both 2.4.2 and master are affected (2.4.1 is not affected)
**Expected Behavior**
The Price Model should have the tiers JSONField object populated.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `djstripe/management/commands/djstripe_sync_models.py`
Content:
```
1 from typing import List
2
3 from django.apps import apps
4 from django.core.management.base import BaseCommand, CommandError
5
6 from ... import models, settings
7
8
9 class Command(BaseCommand):
10 """Sync models from stripe."""
11
12 help = "Sync models from stripe."
13
14 def add_arguments(self, parser):
15 parser.add_argument(
16 "args",
17 metavar="ModelName",
18 nargs="*",
19 help="restricts sync to these model names (default is to sync all "
20 "supported models)",
21 )
22
23 def handle(self, *args, **options):
24 app_label = "djstripe"
25 app_config = apps.get_app_config(app_label)
26 model_list = [] # type: List[models.StripeModel]
27
28 if args:
29 for model_label in args:
30 try:
31 model = app_config.get_model(model_label)
32 except LookupError:
33 raise CommandError(
34 "Unknown model: {}.{}".format(app_label, model_label)
35 )
36
37 model_list.append(model)
38 else:
39 model_list = app_config.get_models()
40
41 for model in model_list:
42 self.sync_model(model)
43
44 def _should_sync_model(self, model):
45 if not issubclass(model, models.StripeModel):
46 return False, "not a StripeModel"
47
48 if model.stripe_class is None:
49 return False, "no stripe_class"
50
51 if not hasattr(model.stripe_class, "list"):
52 return False, "no stripe_class.list"
53
54 if model is models.UpcomingInvoice:
55 return False, "Upcoming Invoices are virtual only"
56
57 if not settings.STRIPE_LIVE_MODE:
58 if model is models.ScheduledQueryRun:
59 return False, "only available in live mode"
60
61 return True, ""
62
63 def sync_model(self, model):
64 model_name = model.__name__
65
66 should_sync, reason = self._should_sync_model(model)
67 if not should_sync:
68 self.stdout.write(f"Skipping {model}: {reason}")
69 return
70
71 self.stdout.write("Syncing {}:".format(model_name))
72
73 count = 0
74 for list_kwargs in self.get_list_kwargs(model):
75 try:
76 if model is models.Account:
77 # special case, since own account isn't returned by Account.api_list
78 stripe_obj = models.Account.stripe_class.retrieve(
79 api_key=settings.STRIPE_SECRET_KEY
80 )
81 count += 1
82 djstripe_obj = model.sync_from_stripe_data(stripe_obj)
83 self.stdout.write(
84 " id={id}, pk={pk} ({djstripe_obj})".format(
85 id=djstripe_obj.id,
86 pk=djstripe_obj.pk,
87 djstripe_obj=djstripe_obj,
88 )
89 )
90
91 for stripe_obj in model.api_list(**list_kwargs):
92 count += 1
93 djstripe_obj = model.sync_from_stripe_data(stripe_obj)
94 self.stdout.write(
95 " id={id}, pk={pk} ({djstripe_obj})".format(
96 id=djstripe_obj.id,
97 pk=djstripe_obj.pk,
98 djstripe_obj=djstripe_obj,
99 )
100 )
101
102 except Exception as e:
103 self.stderr.write(str(e))
104
105 if count == 0:
106 self.stdout.write(" (no results)")
107 else:
108 self.stdout.write(
109 " Synced {count} {model_name}".format(
110 count=count, model_name=model_name
111 )
112 )
113
114 def get_list_kwargs(self, model):
115 """
116 Returns a sequence of kwargs dicts to pass to model.api_list
117
118 This allows us to sync models that require parameters to api_list
119
120 :param model:
121 :return: Sequence[dict]
122 """
123 all_list_kwargs = (
124 [{"expand": [f"data.{k}" for k in model.expand_fields]}]
125 if model.expand_fields
126 else []
127 )
128 if model is models.PaymentMethod:
129 # special case
130 all_list_kwargs.extend(
131 (
132 {"customer": stripe_customer.id, "type": "card"}
133 for stripe_customer in models.Customer.api_list()
134 )
135 )
136 elif model is models.SubscriptionItem:
137 all_list_kwargs.extend(
138 (
139 {"subscription": subscription.id}
140 for subscription in models.Subscription.api_list()
141 )
142 )
143 else:
144 all_list_kwargs.append({})
145
146 return all_list_kwargs
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/djstripe/management/commands/djstripe_sync_models.py b/djstripe/management/commands/djstripe_sync_models.py
--- a/djstripe/management/commands/djstripe_sync_models.py
+++ b/djstripe/management/commands/djstripe_sync_models.py
@@ -140,7 +140,7 @@
for subscription in models.Subscription.api_list()
)
)
- else:
+ elif not all_list_kwargs:
all_list_kwargs.append({})
return all_list_kwargs
| {"golden_diff": "diff --git a/djstripe/management/commands/djstripe_sync_models.py b/djstripe/management/commands/djstripe_sync_models.py\n--- a/djstripe/management/commands/djstripe_sync_models.py\n+++ b/djstripe/management/commands/djstripe_sync_models.py\n@@ -140,7 +140,7 @@\n for subscription in models.Subscription.api_list()\n )\n )\n- else:\n+ elif not all_list_kwargs:\n all_list_kwargs.append({})\n \n return all_list_kwargs\n", "issue": "Issue when attempting to sync tiered Price Model in 2.4.2\n**Describe the bug**\r\n\r\nIt looks like 9bd896ffd944e809b95abae884a2149dc8a79f27 introduced a regression when trying to sync a tiered Price model. Probably Price is not the only model affected.\r\n\r\nCheck out this trace:\r\n\r\n```\r\n$ ./manage.py djstripe_sync_models Price\r\nSyncing Price:\r\nINFO stripe.log_info:64- message='Request to Stripe api' method=get path=https://api.stripe.com/v1/prices?expand[0]=data.tiers\r\nINFO stripe.log_info:64- message='Stripe API response' path=https://api.stripe.com/v1/prices?expand[0]=data.tiers response_code=200\r\n id=price_1IFltoFz0jfFqjGsm5fbXWt5, pk=6 (xxx)\r\n id=price_1IFe29Fz0jfFqjGsTpBrPQql, pk=1 (xxx)\r\n id=price_1IFe29Fz0jfFqjGslZM7rvu1, pk=2 (xxx)\r\n id=price_1IFe28Fz0jfFqjGsM0SIOAa6, pk=3 (xxx)\r\n id=price_1IFe27Fz0jfFqjGsEN4c0MxR, pk=4 (xxx)\r\n id=price_1IFe23Fz0jfFqjGsbFrlPDSi, pk=5 (xxx)\r\nINFO stripe.log_info:64- message='Request to Stripe api' method=get path=https://api.stripe.com/v1/prices\r\nINFO stripe.log_info:64- message='Stripe API response' path=https://api.stripe.com/v1/prices response_code=200\r\n id=price_1IFltoFz0jfFqjGsm5fbXWt5, pk=6 (xxx)\r\n id=price_1IFe29Fz0jfFqjGsTpBrPQql, pk=1 (xxx)\r\n id=price_1IFe29Fz0jfFqjGslZM7rvu1, pk=2 (xxx)\r\n id=price_1IFe28Fz0jfFqjGsM0SIOAa6, pk=3 (xxx)\r\n id=price_1IFe27Fz0jfFqjGsEN4c0MxR, pk=4 (xxx)\r\n id=price_1IFe23Fz0jfFqjGsbFrlPDSi, pk=5 (xxx)\r\n Synced 12 Price\r\n```\r\n\r\nThe Price objects are synced twice. The first time with the tiers attribute expanded and the second time without expanding it and overwriting it, so the final object doesn't include tiers.\r\n\r\n**Software versions**\r\n- dj-stripe version: 2.4.2\r\n- Python version: 3.7\r\n- Django version: 3.0.11\r\n- Stripe API version: 2.55\r\n- Database type and version: postgresql 10.10\r\n\r\n**Steps To Reproduce**\r\n\r\n1. Create tiered Price and add tiers in Stripe Dashboard\r\n2. Sync Price models with manage command\r\n\r\n**Can you reproduce the issue with the latest version of master?**\r\n\r\nYes, both 2.4.2 and master are affected (2.4.1 is not affected)\r\n\r\n**Expected Behavior**\r\n\r\nThe Price Model should have the tiers JSONField object populated.\n", "before_files": [{"content": "from typing import List\n\nfrom django.apps import apps\nfrom django.core.management.base import BaseCommand, CommandError\n\nfrom ... 
import models, settings\n\n\nclass Command(BaseCommand):\n \"\"\"Sync models from stripe.\"\"\"\n\n help = \"Sync models from stripe.\"\n\n def add_arguments(self, parser):\n parser.add_argument(\n \"args\",\n metavar=\"ModelName\",\n nargs=\"*\",\n help=\"restricts sync to these model names (default is to sync all \"\n \"supported models)\",\n )\n\n def handle(self, *args, **options):\n app_label = \"djstripe\"\n app_config = apps.get_app_config(app_label)\n model_list = [] # type: List[models.StripeModel]\n\n if args:\n for model_label in args:\n try:\n model = app_config.get_model(model_label)\n except LookupError:\n raise CommandError(\n \"Unknown model: {}.{}\".format(app_label, model_label)\n )\n\n model_list.append(model)\n else:\n model_list = app_config.get_models()\n\n for model in model_list:\n self.sync_model(model)\n\n def _should_sync_model(self, model):\n if not issubclass(model, models.StripeModel):\n return False, \"not a StripeModel\"\n\n if model.stripe_class is None:\n return False, \"no stripe_class\"\n\n if not hasattr(model.stripe_class, \"list\"):\n return False, \"no stripe_class.list\"\n\n if model is models.UpcomingInvoice:\n return False, \"Upcoming Invoices are virtual only\"\n\n if not settings.STRIPE_LIVE_MODE:\n if model is models.ScheduledQueryRun:\n return False, \"only available in live mode\"\n\n return True, \"\"\n\n def sync_model(self, model):\n model_name = model.__name__\n\n should_sync, reason = self._should_sync_model(model)\n if not should_sync:\n self.stdout.write(f\"Skipping {model}: {reason}\")\n return\n\n self.stdout.write(\"Syncing {}:\".format(model_name))\n\n count = 0\n for list_kwargs in self.get_list_kwargs(model):\n try:\n if model is models.Account:\n # special case, since own account isn't returned by Account.api_list\n stripe_obj = models.Account.stripe_class.retrieve(\n api_key=settings.STRIPE_SECRET_KEY\n )\n count += 1\n djstripe_obj = model.sync_from_stripe_data(stripe_obj)\n self.stdout.write(\n \" id={id}, pk={pk} ({djstripe_obj})\".format(\n id=djstripe_obj.id,\n pk=djstripe_obj.pk,\n djstripe_obj=djstripe_obj,\n )\n )\n\n for stripe_obj in model.api_list(**list_kwargs):\n count += 1\n djstripe_obj = model.sync_from_stripe_data(stripe_obj)\n self.stdout.write(\n \" id={id}, pk={pk} ({djstripe_obj})\".format(\n id=djstripe_obj.id,\n pk=djstripe_obj.pk,\n djstripe_obj=djstripe_obj,\n )\n )\n\n except Exception as e:\n self.stderr.write(str(e))\n\n if count == 0:\n self.stdout.write(\" (no results)\")\n else:\n self.stdout.write(\n \" Synced {count} {model_name}\".format(\n count=count, model_name=model_name\n )\n )\n\n def get_list_kwargs(self, model):\n \"\"\"\n Returns a sequence of kwargs dicts to pass to model.api_list\n\n This allows us to sync models that require parameters to api_list\n\n :param model:\n :return: Sequence[dict]\n \"\"\"\n all_list_kwargs = (\n [{\"expand\": [f\"data.{k}\" for k in model.expand_fields]}]\n if model.expand_fields\n else []\n )\n if model is models.PaymentMethod:\n # special case\n all_list_kwargs.extend(\n (\n {\"customer\": stripe_customer.id, \"type\": \"card\"}\n for stripe_customer in models.Customer.api_list()\n )\n )\n elif model is models.SubscriptionItem:\n all_list_kwargs.extend(\n (\n {\"subscription\": subscription.id}\n for subscription in models.Subscription.api_list()\n )\n )\n else:\n all_list_kwargs.append({})\n\n return all_list_kwargs\n", "path": "djstripe/management/commands/djstripe_sync_models.py"}], "after_files": [{"content": "from typing import List\n\nfrom 
django.apps import apps\nfrom django.core.management.base import BaseCommand, CommandError\n\nfrom ... import models, settings\n\n\nclass Command(BaseCommand):\n \"\"\"Sync models from stripe.\"\"\"\n\n help = \"Sync models from stripe.\"\n\n def add_arguments(self, parser):\n parser.add_argument(\n \"args\",\n metavar=\"ModelName\",\n nargs=\"*\",\n help=\"restricts sync to these model names (default is to sync all \"\n \"supported models)\",\n )\n\n def handle(self, *args, **options):\n app_label = \"djstripe\"\n app_config = apps.get_app_config(app_label)\n model_list = [] # type: List[models.StripeModel]\n\n if args:\n for model_label in args:\n try:\n model = app_config.get_model(model_label)\n except LookupError:\n raise CommandError(\n \"Unknown model: {}.{}\".format(app_label, model_label)\n )\n\n model_list.append(model)\n else:\n model_list = app_config.get_models()\n\n for model in model_list:\n self.sync_model(model)\n\n def _should_sync_model(self, model):\n if not issubclass(model, models.StripeModel):\n return False, \"not a StripeModel\"\n\n if model.stripe_class is None:\n return False, \"no stripe_class\"\n\n if not hasattr(model.stripe_class, \"list\"):\n return False, \"no stripe_class.list\"\n\n if model is models.UpcomingInvoice:\n return False, \"Upcoming Invoices are virtual only\"\n\n if not settings.STRIPE_LIVE_MODE:\n if model is models.ScheduledQueryRun:\n return False, \"only available in live mode\"\n\n return True, \"\"\n\n def sync_model(self, model):\n model_name = model.__name__\n\n should_sync, reason = self._should_sync_model(model)\n if not should_sync:\n self.stdout.write(f\"Skipping {model}: {reason}\")\n return\n\n self.stdout.write(\"Syncing {}:\".format(model_name))\n\n count = 0\n for list_kwargs in self.get_list_kwargs(model):\n try:\n if model is models.Account:\n # special case, since own account isn't returned by Account.api_list\n stripe_obj = models.Account.stripe_class.retrieve(\n api_key=settings.STRIPE_SECRET_KEY\n )\n count += 1\n djstripe_obj = model.sync_from_stripe_data(stripe_obj)\n self.stdout.write(\n \" id={id}, pk={pk} ({djstripe_obj})\".format(\n id=djstripe_obj.id,\n pk=djstripe_obj.pk,\n djstripe_obj=djstripe_obj,\n )\n )\n\n for stripe_obj in model.api_list(**list_kwargs):\n count += 1\n djstripe_obj = model.sync_from_stripe_data(stripe_obj)\n self.stdout.write(\n \" id={id}, pk={pk} ({djstripe_obj})\".format(\n id=djstripe_obj.id,\n pk=djstripe_obj.pk,\n djstripe_obj=djstripe_obj,\n )\n )\n\n except Exception as e:\n self.stderr.write(str(e))\n\n if count == 0:\n self.stdout.write(\" (no results)\")\n else:\n self.stdout.write(\n \" Synced {count} {model_name}\".format(\n count=count, model_name=model_name\n )\n )\n\n def get_list_kwargs(self, model):\n \"\"\"\n Returns a sequence of kwargs dicts to pass to model.api_list\n\n This allows us to sync models that require parameters to api_list\n\n :param model:\n :return: Sequence[dict]\n \"\"\"\n all_list_kwargs = (\n [{\"expand\": [f\"data.{k}\" for k in model.expand_fields]}]\n if model.expand_fields\n else []\n )\n if model is models.PaymentMethod:\n # special case\n all_list_kwargs.extend(\n (\n {\"customer\": stripe_customer.id, \"type\": \"card\"}\n for stripe_customer in models.Customer.api_list()\n )\n )\n elif model is models.SubscriptionItem:\n all_list_kwargs.extend(\n (\n {\"subscription\": subscription.id}\n for subscription in models.Subscription.api_list()\n )\n )\n elif not all_list_kwargs:\n all_list_kwargs.append({})\n\n return all_list_kwargs\n", "path": 
"djstripe/management/commands/djstripe_sync_models.py"}]} | 2,349 | 117 |
gh_patches_debug_58825 | rasdani/github-patches | git_diff | modin-project__modin-3390 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Do not check ASV benchmarks on test data where the number of rows is much less than the number of columns
These sizes can be removed because such cases are not used in benchmarking: https://github.com/modin-project/modin/blob/dd91a78ad3f4b8e3e569215e9c8e540ad099d4a8/asv_bench/benchmarks/utils/data_shapes.py#L33 and https://github.com/modin-project/modin/blob/dd91a78ad3f4b8e3e569215e9c8e540ad099d4a8/asv_bench/benchmarks/utils/data_shapes.py#L46
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `asv_bench/benchmarks/utils/data_shapes.py`
Content:
```
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 """Define data shapes."""
15
16 import os
17 import json
18
19 from .compatibility import ASV_USE_BACKEND, ASV_DATASET_SIZE
20
21 RAND_LOW = 0
22 RAND_HIGH = 1_000_000_000 if ASV_USE_BACKEND == "omnisci" else 100
23
24 BINARY_OP_DATA_SIZE = {
25 "big": [
26 [[5000, 5000], [5000, 5000]],
27 # the case extremely inefficient
28 # [[20, 500_000], [10, 1_000_000]],
29 [[500_000, 20], [1_000_000, 10]],
30 ],
31 "small": [
32 [[250, 250], [250, 250]],
33 [[20, 10_000], [10, 25_000]],
34 [[10_000, 20], [25_000, 10]],
35 ],
36 }
37 UNARY_OP_DATA_SIZE = {
38 "big": [
39 [5000, 5000],
40 # the case extremely inefficient
41 # [10, 1_000_000],
42 [1_000_000, 10],
43 ],
44 "small": [
45 [250, 250],
46 [10, 10_000],
47 [10_000, 10],
48 ],
49 }
50 SERIES_DATA_SIZE = {
51 "big": [
52 (100_000, 1),
53 ],
54 "small": [
55 (10_000, 1),
56 ],
57 }
58
59
60 OMNISCI_BINARY_OP_DATA_SIZE = {
61 "big": [
62 [[500_000, 20], [1_000_000, 10]],
63 ],
64 "small": [
65 [[10_000, 20], [25_000, 10]],
66 ],
67 }
68 OMNISCI_UNARY_OP_DATA_SIZE = {
69 "big": [
70 [1_000_000, 10],
71 ],
72 "small": [
73 [10_000, 10],
74 ],
75 }
76 OMNISCI_SERIES_DATA_SIZE = {
77 "big": [
78 [10_000_000, 1],
79 ],
80 "small": [
81 [100_000, 1],
82 ],
83 }
84
85 BINARY_SHAPES = (
86 OMNISCI_BINARY_OP_DATA_SIZE[ASV_DATASET_SIZE]
87 if ASV_USE_BACKEND == "omnisci"
88 else BINARY_OP_DATA_SIZE[ASV_DATASET_SIZE]
89 )
90 UNARY_SHAPES = (
91 OMNISCI_UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE]
92 if ASV_USE_BACKEND == "omnisci"
93 else UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE]
94 )
95 SERIES_SHAPES = (
96 OMNISCI_SERIES_DATA_SIZE[ASV_DATASET_SIZE]
97 if ASV_USE_BACKEND == "omnisci"
98 else SERIES_DATA_SIZE[ASV_DATASET_SIZE]
99 )
100
101 DEFAULT_GROUPBY_NGROUPS = {
102 "big": [100, "huge_amount_groups"],
103 "small": [5],
104 }
105 GROUPBY_NGROUPS = DEFAULT_GROUPBY_NGROUPS[ASV_DATASET_SIZE]
106
107 _DEFAULT_CONFIG_T = [
108 (
109 UNARY_SHAPES,
110 [
111 # Pandas backend benchmarks
112 "TimeGroupByMultiColumn",
113 "TimeGroupByDefaultAggregations",
114 "TimeGroupByDictionaryAggregation",
115 "TimeSetItem",
116 "TimeInsert",
117 "TimeArithmetic",
118 "TimeSortValues",
119 "TimeDrop",
120 "TimeHead",
121 "TimeFillna",
122 "TimeFillnaDataFrame",
123 "TimeValueCountsFrame",
124 "TimeValueCountsSeries",
125 "TimeIndexing",
126 "TimeMultiIndexing",
127 "TimeResetIndex",
128 "TimeAstype",
129 "TimeDescribe",
130 "TimeProperties",
131 # IO benchmarks
132 "TimeReadCsvSkiprows",
133 "TimeReadCsvTrueFalseValues",
134 "TimeReadCsvNamesDtype",
135 # Scalability benchmarks
136 "TimeFromPandas",
137 "TimeToPandas",
138 # OmniSci backend benchmarks
139 "omnisci.TimeJoin",
140 "omnisci.TimeBinaryOpDataFrame",
141 "omnisci.TimeArithmetic",
142 "omnisci.TimeSortValues",
143 "omnisci.TimeDrop",
144 "omnisci.TimeHead",
145 "omnisci.TimeFillna",
146 "omnisci.TimeIndexing",
147 "omnisci.TimeResetIndex",
148 "omnisci.TimeAstype",
149 "omnisci.TimeDescribe",
150 "omnisci.TimeProperties",
151 "omnisci.TimeGroupByDefaultAggregations",
152 "omnisci.TimeGroupByMultiColumn",
153 # OmniSci backend IO benchmarks
154 "omnisci.TimeReadCsvNames",
155 ],
156 ),
157 (
158 BINARY_SHAPES,
159 [
160 # Pandas backend benchmarks
161 "TimeJoin",
162 "TimeMerge",
163 "TimeConcat",
164 "TimeAppend",
165 "TimeBinaryOp",
166 # OmniSci backend benchmarks
167 "omnisci.TimeMerge",
168 "omnisci.TimeAppend",
169 ],
170 ),
171 (
172 SERIES_SHAPES,
173 [
174 # Pandas backend benchmarks
175 "TimeFillnaSeries",
176 # OmniSci backend benchmarks
177 "omnisci.TimeBinaryOpSeries",
178 "omnisci.TimeValueCountsSeries",
179 ],
180 ),
181 ]
182 DEFAULT_CONFIG = {}
183 for _shape, _names in _DEFAULT_CONFIG_T:
184 DEFAULT_CONFIG.update({_name: _shape for _name in _names})
185
186 CONFIG_FROM_FILE = None
187
188
189 def get_benchmark_shapes(bench_id: str):
190 """
191 Get custom benchmark shapes from a json file stored in MODIN_ASV_DATASIZE_CONFIG.
192
193 If `bench_id` benchmark is not found in the file, then the default value will
194 be used.
195
196 Parameters
197 ----------
198 bench_id : str
199 Unique benchmark identifier that is used to get shapes.
200
201 Returns
202 -------
203 list
204 Benchmark shapes.
205 """
206 global CONFIG_FROM_FILE
207 if not CONFIG_FROM_FILE:
208 try:
209 from modin.config import AsvDataSizeConfig
210
211 filename = AsvDataSizeConfig.get()
212 except ImportError:
213 filename = os.environ.get("MODIN_ASV_DATASIZE_CONFIG", None)
214 if filename:
215 # should be json
216 with open(filename) as _f:
217 CONFIG_FROM_FILE = json.load(_f)
218
219 if CONFIG_FROM_FILE and bench_id in CONFIG_FROM_FILE:
220 # example: "omnisci.TimeReadCsvNames": [[5555, 55], [3333, 33]]
221 return CONFIG_FROM_FILE[bench_id]
222 return DEFAULT_CONFIG[bench_id]
223
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/asv_bench/benchmarks/utils/data_shapes.py b/asv_bench/benchmarks/utils/data_shapes.py
--- a/asv_bench/benchmarks/utils/data_shapes.py
+++ b/asv_bench/benchmarks/utils/data_shapes.py
@@ -30,7 +30,6 @@
],
"small": [
[[250, 250], [250, 250]],
- [[20, 10_000], [10, 25_000]],
[[10_000, 20], [25_000, 10]],
],
}
@@ -43,7 +42,6 @@
],
"small": [
[250, 250],
- [10, 10_000],
[10_000, 10],
],
}
| {"golden_diff": "diff --git a/asv_bench/benchmarks/utils/data_shapes.py b/asv_bench/benchmarks/utils/data_shapes.py\n--- a/asv_bench/benchmarks/utils/data_shapes.py\n+++ b/asv_bench/benchmarks/utils/data_shapes.py\n@@ -30,7 +30,6 @@\n ],\n \"small\": [\n [[250, 250], [250, 250]],\n- [[20, 10_000], [10, 25_000]],\n [[10_000, 20], [25_000, 10]],\n ],\n }\n@@ -43,7 +42,6 @@\n ],\n \"small\": [\n [250, 250],\n- [10, 10_000],\n [10_000, 10],\n ],\n }\n", "issue": "Do not check ASV benchmarks on test data, where the number of rows is much less than the number of columns\nThese sizes can be removed because such cases are not used in benchmarking: https://github.com/modin-project/modin/blob/dd91a78ad3f4b8e3e569215e9c8e540ad099d4a8/asv_bench/benchmarks/utils/data_shapes.py#L33 and https://github.com/modin-project/modin/blob/dd91a78ad3f4b8e3e569215e9c8e540ad099d4a8/asv_bench/benchmarks/utils/data_shapes.py#L46\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"Define data shapes.\"\"\"\n\nimport os\nimport json\n\nfrom .compatibility import ASV_USE_BACKEND, ASV_DATASET_SIZE\n\nRAND_LOW = 0\nRAND_HIGH = 1_000_000_000 if ASV_USE_BACKEND == \"omnisci\" else 100\n\nBINARY_OP_DATA_SIZE = {\n \"big\": [\n [[5000, 5000], [5000, 5000]],\n # the case extremely inefficient\n # [[20, 500_000], [10, 1_000_000]],\n [[500_000, 20], [1_000_000, 10]],\n ],\n \"small\": [\n [[250, 250], [250, 250]],\n [[20, 10_000], [10, 25_000]],\n [[10_000, 20], [25_000, 10]],\n ],\n}\nUNARY_OP_DATA_SIZE = {\n \"big\": [\n [5000, 5000],\n # the case extremely inefficient\n # [10, 1_000_000],\n [1_000_000, 10],\n ],\n \"small\": [\n [250, 250],\n [10, 10_000],\n [10_000, 10],\n ],\n}\nSERIES_DATA_SIZE = {\n \"big\": [\n (100_000, 1),\n ],\n \"small\": [\n (10_000, 1),\n ],\n}\n\n\nOMNISCI_BINARY_OP_DATA_SIZE = {\n \"big\": [\n [[500_000, 20], [1_000_000, 10]],\n ],\n \"small\": [\n [[10_000, 20], [25_000, 10]],\n ],\n}\nOMNISCI_UNARY_OP_DATA_SIZE = {\n \"big\": [\n [1_000_000, 10],\n ],\n \"small\": [\n [10_000, 10],\n ],\n}\nOMNISCI_SERIES_DATA_SIZE = {\n \"big\": [\n [10_000_000, 1],\n ],\n \"small\": [\n [100_000, 1],\n ],\n}\n\nBINARY_SHAPES = (\n OMNISCI_BINARY_OP_DATA_SIZE[ASV_DATASET_SIZE]\n if ASV_USE_BACKEND == \"omnisci\"\n else BINARY_OP_DATA_SIZE[ASV_DATASET_SIZE]\n)\nUNARY_SHAPES = (\n OMNISCI_UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE]\n if ASV_USE_BACKEND == \"omnisci\"\n else UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE]\n)\nSERIES_SHAPES = (\n OMNISCI_SERIES_DATA_SIZE[ASV_DATASET_SIZE]\n if ASV_USE_BACKEND == \"omnisci\"\n else SERIES_DATA_SIZE[ASV_DATASET_SIZE]\n)\n\nDEFAULT_GROUPBY_NGROUPS = {\n \"big\": [100, \"huge_amount_groups\"],\n \"small\": [5],\n}\nGROUPBY_NGROUPS = DEFAULT_GROUPBY_NGROUPS[ASV_DATASET_SIZE]\n\n_DEFAULT_CONFIG_T = [\n (\n 
UNARY_SHAPES,\n [\n # Pandas backend benchmarks\n \"TimeGroupByMultiColumn\",\n \"TimeGroupByDefaultAggregations\",\n \"TimeGroupByDictionaryAggregation\",\n \"TimeSetItem\",\n \"TimeInsert\",\n \"TimeArithmetic\",\n \"TimeSortValues\",\n \"TimeDrop\",\n \"TimeHead\",\n \"TimeFillna\",\n \"TimeFillnaDataFrame\",\n \"TimeValueCountsFrame\",\n \"TimeValueCountsSeries\",\n \"TimeIndexing\",\n \"TimeMultiIndexing\",\n \"TimeResetIndex\",\n \"TimeAstype\",\n \"TimeDescribe\",\n \"TimeProperties\",\n # IO benchmarks\n \"TimeReadCsvSkiprows\",\n \"TimeReadCsvTrueFalseValues\",\n \"TimeReadCsvNamesDtype\",\n # Scalability benchmarks\n \"TimeFromPandas\",\n \"TimeToPandas\",\n # OmniSci backend benchmarks\n \"omnisci.TimeJoin\",\n \"omnisci.TimeBinaryOpDataFrame\",\n \"omnisci.TimeArithmetic\",\n \"omnisci.TimeSortValues\",\n \"omnisci.TimeDrop\",\n \"omnisci.TimeHead\",\n \"omnisci.TimeFillna\",\n \"omnisci.TimeIndexing\",\n \"omnisci.TimeResetIndex\",\n \"omnisci.TimeAstype\",\n \"omnisci.TimeDescribe\",\n \"omnisci.TimeProperties\",\n \"omnisci.TimeGroupByDefaultAggregations\",\n \"omnisci.TimeGroupByMultiColumn\",\n # OmniSci backend IO benchmarks\n \"omnisci.TimeReadCsvNames\",\n ],\n ),\n (\n BINARY_SHAPES,\n [\n # Pandas backend benchmarks\n \"TimeJoin\",\n \"TimeMerge\",\n \"TimeConcat\",\n \"TimeAppend\",\n \"TimeBinaryOp\",\n # OmniSci backend benchmarks\n \"omnisci.TimeMerge\",\n \"omnisci.TimeAppend\",\n ],\n ),\n (\n SERIES_SHAPES,\n [\n # Pandas backend benchmarks\n \"TimeFillnaSeries\",\n # OmniSci backend benchmarks\n \"omnisci.TimeBinaryOpSeries\",\n \"omnisci.TimeValueCountsSeries\",\n ],\n ),\n]\nDEFAULT_CONFIG = {}\nfor _shape, _names in _DEFAULT_CONFIG_T:\n DEFAULT_CONFIG.update({_name: _shape for _name in _names})\n\nCONFIG_FROM_FILE = None\n\n\ndef get_benchmark_shapes(bench_id: str):\n \"\"\"\n Get custom benchmark shapes from a json file stored in MODIN_ASV_DATASIZE_CONFIG.\n\n If `bench_id` benchmark is not found in the file, then the default value will\n be used.\n\n Parameters\n ----------\n bench_id : str\n Unique benchmark identifier that is used to get shapes.\n\n Returns\n -------\n list\n Benchmark shapes.\n \"\"\"\n global CONFIG_FROM_FILE\n if not CONFIG_FROM_FILE:\n try:\n from modin.config import AsvDataSizeConfig\n\n filename = AsvDataSizeConfig.get()\n except ImportError:\n filename = os.environ.get(\"MODIN_ASV_DATASIZE_CONFIG\", None)\n if filename:\n # should be json\n with open(filename) as _f:\n CONFIG_FROM_FILE = json.load(_f)\n\n if CONFIG_FROM_FILE and bench_id in CONFIG_FROM_FILE:\n # example: \"omnisci.TimeReadCsvNames\": [[5555, 55], [3333, 33]]\n return CONFIG_FROM_FILE[bench_id]\n return DEFAULT_CONFIG[bench_id]\n", "path": "asv_bench/benchmarks/utils/data_shapes.py"}], "after_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"Define data shapes.\"\"\"\n\nimport os\nimport json\n\nfrom .compatibility import ASV_USE_BACKEND, ASV_DATASET_SIZE\n\nRAND_LOW = 0\nRAND_HIGH = 1_000_000_000 if ASV_USE_BACKEND == \"omnisci\" else 100\n\nBINARY_OP_DATA_SIZE = {\n \"big\": [\n [[5000, 5000], [5000, 5000]],\n # the case extremely inefficient\n # [[20, 500_000], [10, 1_000_000]],\n [[500_000, 20], [1_000_000, 10]],\n ],\n \"small\": [\n [[250, 250], [250, 250]],\n [[10_000, 20], [25_000, 10]],\n ],\n}\nUNARY_OP_DATA_SIZE = {\n \"big\": [\n [5000, 5000],\n # the case extremely inefficient\n # [10, 1_000_000],\n [1_000_000, 10],\n ],\n \"small\": [\n [250, 250],\n [10_000, 10],\n ],\n}\nSERIES_DATA_SIZE = {\n \"big\": [\n (100_000, 1),\n ],\n \"small\": [\n (10_000, 1),\n ],\n}\n\n\nOMNISCI_BINARY_OP_DATA_SIZE = {\n \"big\": [\n [[500_000, 20], [1_000_000, 10]],\n ],\n \"small\": [\n [[10_000, 20], [25_000, 10]],\n ],\n}\nOMNISCI_UNARY_OP_DATA_SIZE = {\n \"big\": [\n [1_000_000, 10],\n ],\n \"small\": [\n [10_000, 10],\n ],\n}\nOMNISCI_SERIES_DATA_SIZE = {\n \"big\": [\n [10_000_000, 1],\n ],\n \"small\": [\n [100_000, 1],\n ],\n}\n\nBINARY_SHAPES = (\n OMNISCI_BINARY_OP_DATA_SIZE[ASV_DATASET_SIZE]\n if ASV_USE_BACKEND == \"omnisci\"\n else BINARY_OP_DATA_SIZE[ASV_DATASET_SIZE]\n)\nUNARY_SHAPES = (\n OMNISCI_UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE]\n if ASV_USE_BACKEND == \"omnisci\"\n else UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE]\n)\nSERIES_SHAPES = (\n OMNISCI_SERIES_DATA_SIZE[ASV_DATASET_SIZE]\n if ASV_USE_BACKEND == \"omnisci\"\n else SERIES_DATA_SIZE[ASV_DATASET_SIZE]\n)\n\nDEFAULT_GROUPBY_NGROUPS = {\n \"big\": [100, \"huge_amount_groups\"],\n \"small\": [5],\n}\nGROUPBY_NGROUPS = DEFAULT_GROUPBY_NGROUPS[ASV_DATASET_SIZE]\n\n_DEFAULT_CONFIG_T = [\n (\n UNARY_SHAPES,\n [\n # Pandas backend benchmarks\n \"TimeGroupByMultiColumn\",\n \"TimeGroupByDefaultAggregations\",\n \"TimeGroupByDictionaryAggregation\",\n \"TimeSetItem\",\n \"TimeInsert\",\n \"TimeArithmetic\",\n \"TimeSortValues\",\n \"TimeDrop\",\n \"TimeHead\",\n \"TimeFillna\",\n \"TimeFillnaDataFrame\",\n \"TimeValueCountsFrame\",\n \"TimeValueCountsSeries\",\n \"TimeIndexing\",\n \"TimeMultiIndexing\",\n \"TimeResetIndex\",\n \"TimeAstype\",\n \"TimeDescribe\",\n \"TimeProperties\",\n # IO benchmarks\n \"TimeReadCsvSkiprows\",\n \"TimeReadCsvTrueFalseValues\",\n \"TimeReadCsvNamesDtype\",\n # Scalability benchmarks\n \"TimeFromPandas\",\n \"TimeToPandas\",\n # OmniSci backend benchmarks\n \"omnisci.TimeJoin\",\n \"omnisci.TimeBinaryOpDataFrame\",\n \"omnisci.TimeArithmetic\",\n \"omnisci.TimeSortValues\",\n \"omnisci.TimeDrop\",\n \"omnisci.TimeHead\",\n \"omnisci.TimeFillna\",\n \"omnisci.TimeIndexing\",\n \"omnisci.TimeResetIndex\",\n \"omnisci.TimeAstype\",\n \"omnisci.TimeDescribe\",\n \"omnisci.TimeProperties\",\n \"omnisci.TimeGroupByDefaultAggregations\",\n \"omnisci.TimeGroupByMultiColumn\",\n # OmniSci backend IO benchmarks\n \"omnisci.TimeReadCsvNames\",\n ],\n ),\n (\n BINARY_SHAPES,\n [\n # Pandas backend benchmarks\n \"TimeJoin\",\n \"TimeMerge\",\n \"TimeConcat\",\n \"TimeAppend\",\n \"TimeBinaryOp\",\n # OmniSci backend benchmarks\n \"omnisci.TimeMerge\",\n \"omnisci.TimeAppend\",\n ],\n ),\n (\n SERIES_SHAPES,\n [\n # Pandas backend benchmarks\n \"TimeFillnaSeries\",\n # OmniSci backend benchmarks\n \"omnisci.TimeBinaryOpSeries\",\n \"omnisci.TimeValueCountsSeries\",\n ],\n ),\n]\nDEFAULT_CONFIG = {}\nfor _shape, _names in _DEFAULT_CONFIG_T:\n 
DEFAULT_CONFIG.update({_name: _shape for _name in _names})\n\nCONFIG_FROM_FILE = None\n\n\ndef get_benchmark_shapes(bench_id: str):\n \"\"\"\n Get custom benchmark shapes from a json file stored in MODIN_ASV_DATASIZE_CONFIG.\n\n If `bench_id` benchmark is not found in the file, then the default value will\n be used.\n\n Parameters\n ----------\n bench_id : str\n Unique benchmark identifier that is used to get shapes.\n\n Returns\n -------\n list\n Benchmark shapes.\n \"\"\"\n global CONFIG_FROM_FILE\n if not CONFIG_FROM_FILE:\n try:\n from modin.config import AsvDataSizeConfig\n\n filename = AsvDataSizeConfig.get()\n except ImportError:\n filename = os.environ.get(\"MODIN_ASV_DATASIZE_CONFIG\", None)\n if filename:\n # should be json\n with open(filename) as _f:\n CONFIG_FROM_FILE = json.load(_f)\n\n if CONFIG_FROM_FILE and bench_id in CONFIG_FROM_FILE:\n # example: \"omnisci.TimeReadCsvNames\": [[5555, 55], [3333, 33]]\n return CONFIG_FROM_FILE[bench_id]\n return DEFAULT_CONFIG[bench_id]\n", "path": "asv_bench/benchmarks/utils/data_shapes.py"}]} | 2,730 | 210 |
gh_patches_debug_28049 | rasdani/github-patches | git_diff | Cog-Creators__Red-DiscordBot-4397 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[serverinfo/userinfo] Pluralize strings
# Commands improvement
#### Command name
`[p]userinfo` and `[p]serverinfo`
#### What cog is this command from?
From `Mod` cog.
#### What improvement are you expecting to?
I would like to see all strings pluralized (ie. `Text Channels` being `Text Channel` if there's only one channel and so on).
#### What actually happened?
Roles and previous nickname/names aren't pluralized.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redbot/cogs/mod/names.py`
Content:
```
1 from datetime import datetime
2 from typing import cast
3
4 import discord
5 from redbot.core import commands, i18n, checks
6 from redbot.core.utils.common_filters import (
7 filter_invites,
8 filter_various_mentions,
9 escape_spoilers_and_mass_mentions,
10 )
11 from redbot.core.utils.mod import get_audit_reason
12 from .abc import MixinMeta
13
14 _ = i18n.Translator("Mod", __file__)
15
16
17 class ModInfo(MixinMeta):
18 """
19 Commands regarding names, userinfo, etc.
20 """
21
22 async def get_names_and_nicks(self, user):
23 names = await self.config.user(user).past_names()
24 nicks = await self.config.member(user).past_nicks()
25 if names:
26 names = [escape_spoilers_and_mass_mentions(name) for name in names if name]
27 if nicks:
28 nicks = [escape_spoilers_and_mass_mentions(nick) for nick in nicks if nick]
29 return names, nicks
30
31 @commands.command()
32 @commands.guild_only()
33 @commands.bot_has_permissions(manage_nicknames=True)
34 @checks.admin_or_permissions(manage_nicknames=True)
35 async def rename(self, ctx: commands.Context, user: discord.Member, *, nickname: str = ""):
36 """Change a user's nickname.
37
38 Leaving the nickname empty will remove it.
39 """
40 nickname = nickname.strip()
41 me = cast(discord.Member, ctx.me)
42 if not nickname:
43 nickname = None
44 elif not 2 <= len(nickname) <= 32:
45 await ctx.send(_("Nicknames must be between 2 and 32 characters long."))
46 return
47 if not (
48 (me.guild_permissions.manage_nicknames or me.guild_permissions.administrator)
49 and me.top_role > user.top_role
50 and user != ctx.guild.owner
51 ):
52 await ctx.send(
53 _(
54 "I do not have permission to rename that member. They may be higher than or "
55 "equal to me in the role hierarchy."
56 )
57 )
58 else:
59 try:
60 await user.edit(reason=get_audit_reason(ctx.author, None), nick=nickname)
61 except discord.Forbidden:
62 # Just in case we missed something in the permissions check above
63 await ctx.send(_("I do not have permission to rename that member."))
64 except discord.HTTPException as exc:
65 if exc.status == 400: # BAD REQUEST
66 await ctx.send(_("That nickname is invalid."))
67 else:
68 await ctx.send(_("An unexpected error has occured."))
69 else:
70 await ctx.send(_("Done."))
71
72 def handle_custom(self, user):
73 a = [c for c in user.activities if c.type == discord.ActivityType.custom]
74 if not a:
75 return None, discord.ActivityType.custom
76 a = a[0]
77 c_status = None
78 if not a.name and not a.emoji:
79 return None, discord.ActivityType.custom
80 elif a.name and a.emoji:
81 c_status = _("Custom: {emoji} {name}").format(emoji=a.emoji, name=a.name)
82 elif a.emoji:
83 c_status = _("Custom: {emoji}").format(emoji=a.emoji)
84 elif a.name:
85 c_status = _("Custom: {name}").format(name=a.name)
86 return c_status, discord.ActivityType.custom
87
88 def handle_playing(self, user):
89 p_acts = [c for c in user.activities if c.type == discord.ActivityType.playing]
90 if not p_acts:
91 return None, discord.ActivityType.playing
92 p_act = p_acts[0]
93 act = _("Playing: {name}").format(name=p_act.name)
94 return act, discord.ActivityType.playing
95
96 def handle_streaming(self, user):
97 s_acts = [c for c in user.activities if c.type == discord.ActivityType.streaming]
98 if not s_acts:
99 return None, discord.ActivityType.streaming
100 s_act = s_acts[0]
101 if isinstance(s_act, discord.Streaming):
102 act = _("Streaming: [{name}{sep}{game}]({url})").format(
103 name=discord.utils.escape_markdown(s_act.name),
104 sep=" | " if s_act.game else "",
105 game=discord.utils.escape_markdown(s_act.game) if s_act.game else "",
106 url=s_act.url,
107 )
108 else:
109 act = _("Streaming: {name}").format(name=s_act.name)
110 return act, discord.ActivityType.streaming
111
112 def handle_listening(self, user):
113 l_acts = [c for c in user.activities if c.type == discord.ActivityType.listening]
114 if not l_acts:
115 return None, discord.ActivityType.listening
116 l_act = l_acts[0]
117 if isinstance(l_act, discord.Spotify):
118 act = _("Listening: [{title}{sep}{artist}]({url})").format(
119 title=discord.utils.escape_markdown(l_act.title),
120 sep=" | " if l_act.artist else "",
121 artist=discord.utils.escape_markdown(l_act.artist) if l_act.artist else "",
122 url=f"https://open.spotify.com/track/{l_act.track_id}",
123 )
124 else:
125 act = _("Listening: {title}").format(title=l_act.name)
126 return act, discord.ActivityType.listening
127
128 def handle_watching(self, user):
129 w_acts = [c for c in user.activities if c.type == discord.ActivityType.watching]
130 if not w_acts:
131 return None, discord.ActivityType.watching
132 w_act = w_acts[0]
133 act = _("Watching: {name}").format(name=w_act.name)
134 return act, discord.ActivityType.watching
135
136 def get_status_string(self, user):
137 string = ""
138 for a in [
139 self.handle_custom(user),
140 self.handle_playing(user),
141 self.handle_listening(user),
142 self.handle_streaming(user),
143 self.handle_watching(user),
144 ]:
145 status_string, status_type = a
146 if status_string is None:
147 continue
148 string += f"{status_string}\n"
149 return string
150
151 @commands.command()
152 @commands.guild_only()
153 @commands.bot_has_permissions(embed_links=True)
154 async def userinfo(self, ctx, *, user: discord.Member = None):
155 """Show information about a user.
156
157 This includes fields for status, discord join date, server
158 join date, voice state and previous names/nicknames.
159
160 If the user has no roles, previous names or previous nicknames,
161 these fields will be omitted.
162 """
163 author = ctx.author
164 guild = ctx.guild
165
166 if not user:
167 user = author
168
169 # A special case for a special someone :^)
170 special_date = datetime(2016, 1, 10, 6, 8, 4, 443000)
171 is_special = user.id == 96130341705637888 and guild.id == 133049272517001216
172
173 roles = user.roles[-1:0:-1]
174 names, nicks = await self.get_names_and_nicks(user)
175
176 joined_at = user.joined_at if not is_special else special_date
177 since_created = (ctx.message.created_at - user.created_at).days
178 if joined_at is not None:
179 since_joined = (ctx.message.created_at - joined_at).days
180 user_joined = joined_at.strftime("%d %b %Y %H:%M")
181 else:
182 since_joined = "?"
183 user_joined = _("Unknown")
184 user_created = user.created_at.strftime("%d %b %Y %H:%M")
185 voice_state = user.voice
186 member_number = (
187 sorted(guild.members, key=lambda m: m.joined_at or ctx.message.created_at).index(user)
188 + 1
189 )
190
191 created_on = _("{}\n({} days ago)").format(user_created, since_created)
192 joined_on = _("{}\n({} days ago)").format(user_joined, since_joined)
193
194 if any(a.type is discord.ActivityType.streaming for a in user.activities):
195 statusemoji = "\N{LARGE PURPLE CIRCLE}"
196 elif user.status.name == "online":
197 statusemoji = "\N{LARGE GREEN CIRCLE}"
198 elif user.status.name == "offline":
199 statusemoji = "\N{MEDIUM WHITE CIRCLE}\N{VARIATION SELECTOR-16}"
200 elif user.status.name == "dnd":
201 statusemoji = "\N{LARGE RED CIRCLE}"
202 elif user.status.name == "idle":
203 statusemoji = "\N{LARGE ORANGE CIRCLE}"
204 activity = _("Chilling in {} status").format(user.status)
205 status_string = self.get_status_string(user)
206
207 if roles:
208
209 role_str = ", ".join([x.mention for x in roles])
210 # 400 BAD REQUEST (error code: 50035): Invalid Form Body
211 # In embed.fields.2.value: Must be 1024 or fewer in length.
212 if len(role_str) > 1024:
213 # Alternative string building time.
214 # This is not the most optimal, but if you're hitting this, you are losing more time
215 # to every single check running on users than the occasional user info invoke
216 # We don't start by building this way, since the number of times we hit this should be
217 # infintesimally small compared to when we don't across all uses of Red.
218 continuation_string = _(
219 "and {numeric_number} more roles not displayed due to embed limits."
220 )
221 available_length = 1024 - len(continuation_string) # do not attempt to tweak, i18n
222
223 role_chunks = []
224 remaining_roles = 0
225
226 for r in roles:
227 chunk = f"{r.mention}, "
228 chunk_size = len(chunk)
229
230 if chunk_size < available_length:
231 available_length -= chunk_size
232 role_chunks.append(chunk)
233 else:
234 remaining_roles += 1
235
236 role_chunks.append(continuation_string.format(numeric_number=remaining_roles))
237
238 role_str = "".join(role_chunks)
239
240 else:
241 role_str = None
242
243 data = discord.Embed(description=status_string or activity, colour=user.colour)
244
245 data.add_field(name=_("Joined Discord on"), value=created_on)
246 data.add_field(name=_("Joined this server on"), value=joined_on)
247 if role_str is not None:
248 data.add_field(name=_("Roles"), value=role_str, inline=False)
249 if names:
250 # May need sanitizing later, but mentions do not ping in embeds currently
251 val = filter_invites(", ".join(names))
252 data.add_field(name=_("Previous Names"), value=val, inline=False)
253 if nicks:
254 # May need sanitizing later, but mentions do not ping in embeds currently
255 val = filter_invites(", ".join(nicks))
256 data.add_field(name=_("Previous Nicknames"), value=val, inline=False)
257 if voice_state and voice_state.channel:
258 data.add_field(
259 name=_("Current voice channel"),
260 value="{0.mention} ID: {0.id}".format(voice_state.channel),
261 inline=False,
262 )
263 data.set_footer(text=_("Member #{} | User ID: {}").format(member_number, user.id))
264
265 name = str(user)
266 name = " ~ ".join((name, user.nick)) if user.nick else name
267 name = filter_invites(name)
268
269 avatar = user.avatar_url_as(static_format="png")
270 data.set_author(name=f"{statusemoji} {name}", url=avatar)
271 data.set_thumbnail(url=avatar)
272
273 await ctx.send(embed=data)
274
275 @commands.command()
276 async def names(self, ctx: commands.Context, *, user: discord.Member):
277 """Show previous names and nicknames of a user."""
278 names, nicks = await self.get_names_and_nicks(user)
279 msg = ""
280 if names:
281 msg += _("**Past 20 names**:")
282 msg += "\n"
283 msg += ", ".join(names)
284 if nicks:
285 if msg:
286 msg += "\n\n"
287 msg += _("**Past 20 nicknames**:")
288 msg += "\n"
289 msg += ", ".join(nicks)
290 if msg:
291 msg = filter_various_mentions(msg)
292 await ctx.send(msg)
293 else:
294 await ctx.send(_("That user doesn't have any recorded name or nickname change."))
295
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redbot/cogs/mod/names.py b/redbot/cogs/mod/names.py
--- a/redbot/cogs/mod/names.py
+++ b/redbot/cogs/mod/names.py
@@ -245,15 +245,25 @@
data.add_field(name=_("Joined Discord on"), value=created_on)
data.add_field(name=_("Joined this server on"), value=joined_on)
if role_str is not None:
- data.add_field(name=_("Roles"), value=role_str, inline=False)
+ data.add_field(
+ name=_("Roles") if len(roles) > 1 else _("Role"), value=role_str, inline=False
+ )
if names:
# May need sanitizing later, but mentions do not ping in embeds currently
val = filter_invites(", ".join(names))
- data.add_field(name=_("Previous Names"), value=val, inline=False)
+ data.add_field(
+ name=_("Previous Names") if len(names) > 1 else _("Previous Name"),
+ value=val,
+ inline=False,
+ )
if nicks:
# May need sanitizing later, but mentions do not ping in embeds currently
val = filter_invites(", ".join(nicks))
- data.add_field(name=_("Previous Nicknames"), value=val, inline=False)
+ data.add_field(
+ name=_("Previous Nicknames") if len(nicks) > 1 else _("Previous Nickname"),
+ value=val,
+ inline=False,
+ )
if voice_state and voice_state.channel:
data.add_field(
name=_("Current voice channel"),
| {"golden_diff": "diff --git a/redbot/cogs/mod/names.py b/redbot/cogs/mod/names.py\n--- a/redbot/cogs/mod/names.py\n+++ b/redbot/cogs/mod/names.py\n@@ -245,15 +245,25 @@\n data.add_field(name=_(\"Joined Discord on\"), value=created_on)\n data.add_field(name=_(\"Joined this server on\"), value=joined_on)\n if role_str is not None:\n- data.add_field(name=_(\"Roles\"), value=role_str, inline=False)\n+ data.add_field(\n+ name=_(\"Roles\") if len(roles) > 1 else _(\"Role\"), value=role_str, inline=False\n+ )\n if names:\n # May need sanitizing later, but mentions do not ping in embeds currently\n val = filter_invites(\", \".join(names))\n- data.add_field(name=_(\"Previous Names\"), value=val, inline=False)\n+ data.add_field(\n+ name=_(\"Previous Names\") if len(names) > 1 else _(\"Previous Name\"),\n+ value=val,\n+ inline=False,\n+ )\n if nicks:\n # May need sanitizing later, but mentions do not ping in embeds currently\n val = filter_invites(\", \".join(nicks))\n- data.add_field(name=_(\"Previous Nicknames\"), value=val, inline=False)\n+ data.add_field(\n+ name=_(\"Previous Nicknames\") if len(nicks) > 1 else _(\"Previous Nickname\"),\n+ value=val,\n+ inline=False,\n+ )\n if voice_state and voice_state.channel:\n data.add_field(\n name=_(\"Current voice channel\"),\n", "issue": "[serverinfo/userinfo] Pluralize strings\n# Commands improvement\r\n\r\n#### Command name\r\n\r\n`[p]userinfo` and `[p]serverinfo`\r\n\r\n#### What cog is this command from?\r\n\r\nFrom `Mod` cog.\r\n\r\n#### What improvement are you expecting to?\r\n\r\nI would like to see all strings pluralized (ie. `Text Channels` being `Text Channel` if there's only one channel and so on).\r\n\r\n#### What actually happened?\r\n\r\nRoles and previous nickname/names aren't pluralized.\n", "before_files": [{"content": "from datetime import datetime\nfrom typing import cast\n\nimport discord\nfrom redbot.core import commands, i18n, checks\nfrom redbot.core.utils.common_filters import (\n filter_invites,\n filter_various_mentions,\n escape_spoilers_and_mass_mentions,\n)\nfrom redbot.core.utils.mod import get_audit_reason\nfrom .abc import MixinMeta\n\n_ = i18n.Translator(\"Mod\", __file__)\n\n\nclass ModInfo(MixinMeta):\n \"\"\"\n Commands regarding names, userinfo, etc.\n \"\"\"\n\n async def get_names_and_nicks(self, user):\n names = await self.config.user(user).past_names()\n nicks = await self.config.member(user).past_nicks()\n if names:\n names = [escape_spoilers_and_mass_mentions(name) for name in names if name]\n if nicks:\n nicks = [escape_spoilers_and_mass_mentions(nick) for nick in nicks if nick]\n return names, nicks\n\n @commands.command()\n @commands.guild_only()\n @commands.bot_has_permissions(manage_nicknames=True)\n @checks.admin_or_permissions(manage_nicknames=True)\n async def rename(self, ctx: commands.Context, user: discord.Member, *, nickname: str = \"\"):\n \"\"\"Change a user's nickname.\n\n Leaving the nickname empty will remove it.\n \"\"\"\n nickname = nickname.strip()\n me = cast(discord.Member, ctx.me)\n if not nickname:\n nickname = None\n elif not 2 <= len(nickname) <= 32:\n await ctx.send(_(\"Nicknames must be between 2 and 32 characters long.\"))\n return\n if not (\n (me.guild_permissions.manage_nicknames or me.guild_permissions.administrator)\n and me.top_role > user.top_role\n and user != ctx.guild.owner\n ):\n await ctx.send(\n _(\n \"I do not have permission to rename that member. 
They may be higher than or \"\n \"equal to me in the role hierarchy.\"\n )\n )\n else:\n try:\n await user.edit(reason=get_audit_reason(ctx.author, None), nick=nickname)\n except discord.Forbidden:\n # Just in case we missed something in the permissions check above\n await ctx.send(_(\"I do not have permission to rename that member.\"))\n except discord.HTTPException as exc:\n if exc.status == 400: # BAD REQUEST\n await ctx.send(_(\"That nickname is invalid.\"))\n else:\n await ctx.send(_(\"An unexpected error has occured.\"))\n else:\n await ctx.send(_(\"Done.\"))\n\n def handle_custom(self, user):\n a = [c for c in user.activities if c.type == discord.ActivityType.custom]\n if not a:\n return None, discord.ActivityType.custom\n a = a[0]\n c_status = None\n if not a.name and not a.emoji:\n return None, discord.ActivityType.custom\n elif a.name and a.emoji:\n c_status = _(\"Custom: {emoji} {name}\").format(emoji=a.emoji, name=a.name)\n elif a.emoji:\n c_status = _(\"Custom: {emoji}\").format(emoji=a.emoji)\n elif a.name:\n c_status = _(\"Custom: {name}\").format(name=a.name)\n return c_status, discord.ActivityType.custom\n\n def handle_playing(self, user):\n p_acts = [c for c in user.activities if c.type == discord.ActivityType.playing]\n if not p_acts:\n return None, discord.ActivityType.playing\n p_act = p_acts[0]\n act = _(\"Playing: {name}\").format(name=p_act.name)\n return act, discord.ActivityType.playing\n\n def handle_streaming(self, user):\n s_acts = [c for c in user.activities if c.type == discord.ActivityType.streaming]\n if not s_acts:\n return None, discord.ActivityType.streaming\n s_act = s_acts[0]\n if isinstance(s_act, discord.Streaming):\n act = _(\"Streaming: [{name}{sep}{game}]({url})\").format(\n name=discord.utils.escape_markdown(s_act.name),\n sep=\" | \" if s_act.game else \"\",\n game=discord.utils.escape_markdown(s_act.game) if s_act.game else \"\",\n url=s_act.url,\n )\n else:\n act = _(\"Streaming: {name}\").format(name=s_act.name)\n return act, discord.ActivityType.streaming\n\n def handle_listening(self, user):\n l_acts = [c for c in user.activities if c.type == discord.ActivityType.listening]\n if not l_acts:\n return None, discord.ActivityType.listening\n l_act = l_acts[0]\n if isinstance(l_act, discord.Spotify):\n act = _(\"Listening: [{title}{sep}{artist}]({url})\").format(\n title=discord.utils.escape_markdown(l_act.title),\n sep=\" | \" if l_act.artist else \"\",\n artist=discord.utils.escape_markdown(l_act.artist) if l_act.artist else \"\",\n url=f\"https://open.spotify.com/track/{l_act.track_id}\",\n )\n else:\n act = _(\"Listening: {title}\").format(title=l_act.name)\n return act, discord.ActivityType.listening\n\n def handle_watching(self, user):\n w_acts = [c for c in user.activities if c.type == discord.ActivityType.watching]\n if not w_acts:\n return None, discord.ActivityType.watching\n w_act = w_acts[0]\n act = _(\"Watching: {name}\").format(name=w_act.name)\n return act, discord.ActivityType.watching\n\n def get_status_string(self, user):\n string = \"\"\n for a in [\n self.handle_custom(user),\n self.handle_playing(user),\n self.handle_listening(user),\n self.handle_streaming(user),\n self.handle_watching(user),\n ]:\n status_string, status_type = a\n if status_string is None:\n continue\n string += f\"{status_string}\\n\"\n return string\n\n @commands.command()\n @commands.guild_only()\n @commands.bot_has_permissions(embed_links=True)\n async def userinfo(self, ctx, *, user: discord.Member = None):\n \"\"\"Show information about a user.\n\n 
This includes fields for status, discord join date, server\n join date, voice state and previous names/nicknames.\n\n If the user has no roles, previous names or previous nicknames,\n these fields will be omitted.\n \"\"\"\n author = ctx.author\n guild = ctx.guild\n\n if not user:\n user = author\n\n # A special case for a special someone :^)\n special_date = datetime(2016, 1, 10, 6, 8, 4, 443000)\n is_special = user.id == 96130341705637888 and guild.id == 133049272517001216\n\n roles = user.roles[-1:0:-1]\n names, nicks = await self.get_names_and_nicks(user)\n\n joined_at = user.joined_at if not is_special else special_date\n since_created = (ctx.message.created_at - user.created_at).days\n if joined_at is not None:\n since_joined = (ctx.message.created_at - joined_at).days\n user_joined = joined_at.strftime(\"%d %b %Y %H:%M\")\n else:\n since_joined = \"?\"\n user_joined = _(\"Unknown\")\n user_created = user.created_at.strftime(\"%d %b %Y %H:%M\")\n voice_state = user.voice\n member_number = (\n sorted(guild.members, key=lambda m: m.joined_at or ctx.message.created_at).index(user)\n + 1\n )\n\n created_on = _(\"{}\\n({} days ago)\").format(user_created, since_created)\n joined_on = _(\"{}\\n({} days ago)\").format(user_joined, since_joined)\n\n if any(a.type is discord.ActivityType.streaming for a in user.activities):\n statusemoji = \"\\N{LARGE PURPLE CIRCLE}\"\n elif user.status.name == \"online\":\n statusemoji = \"\\N{LARGE GREEN CIRCLE}\"\n elif user.status.name == \"offline\":\n statusemoji = \"\\N{MEDIUM WHITE CIRCLE}\\N{VARIATION SELECTOR-16}\"\n elif user.status.name == \"dnd\":\n statusemoji = \"\\N{LARGE RED CIRCLE}\"\n elif user.status.name == \"idle\":\n statusemoji = \"\\N{LARGE ORANGE CIRCLE}\"\n activity = _(\"Chilling in {} status\").format(user.status)\n status_string = self.get_status_string(user)\n\n if roles:\n\n role_str = \", \".join([x.mention for x in roles])\n # 400 BAD REQUEST (error code: 50035): Invalid Form Body\n # In embed.fields.2.value: Must be 1024 or fewer in length.\n if len(role_str) > 1024:\n # Alternative string building time.\n # This is not the most optimal, but if you're hitting this, you are losing more time\n # to every single check running on users than the occasional user info invoke\n # We don't start by building this way, since the number of times we hit this should be\n # infintesimally small compared to when we don't across all uses of Red.\n continuation_string = _(\n \"and {numeric_number} more roles not displayed due to embed limits.\"\n )\n available_length = 1024 - len(continuation_string) # do not attempt to tweak, i18n\n\n role_chunks = []\n remaining_roles = 0\n\n for r in roles:\n chunk = f\"{r.mention}, \"\n chunk_size = len(chunk)\n\n if chunk_size < available_length:\n available_length -= chunk_size\n role_chunks.append(chunk)\n else:\n remaining_roles += 1\n\n role_chunks.append(continuation_string.format(numeric_number=remaining_roles))\n\n role_str = \"\".join(role_chunks)\n\n else:\n role_str = None\n\n data = discord.Embed(description=status_string or activity, colour=user.colour)\n\n data.add_field(name=_(\"Joined Discord on\"), value=created_on)\n data.add_field(name=_(\"Joined this server on\"), value=joined_on)\n if role_str is not None:\n data.add_field(name=_(\"Roles\"), value=role_str, inline=False)\n if names:\n # May need sanitizing later, but mentions do not ping in embeds currently\n val = filter_invites(\", \".join(names))\n data.add_field(name=_(\"Previous Names\"), value=val, inline=False)\n if nicks:\n # 
May need sanitizing later, but mentions do not ping in embeds currently\n val = filter_invites(\", \".join(nicks))\n data.add_field(name=_(\"Previous Nicknames\"), value=val, inline=False)\n if voice_state and voice_state.channel:\n data.add_field(\n name=_(\"Current voice channel\"),\n value=\"{0.mention} ID: {0.id}\".format(voice_state.channel),\n inline=False,\n )\n data.set_footer(text=_(\"Member #{} | User ID: {}\").format(member_number, user.id))\n\n name = str(user)\n name = \" ~ \".join((name, user.nick)) if user.nick else name\n name = filter_invites(name)\n\n avatar = user.avatar_url_as(static_format=\"png\")\n data.set_author(name=f\"{statusemoji} {name}\", url=avatar)\n data.set_thumbnail(url=avatar)\n\n await ctx.send(embed=data)\n\n @commands.command()\n async def names(self, ctx: commands.Context, *, user: discord.Member):\n \"\"\"Show previous names and nicknames of a user.\"\"\"\n names, nicks = await self.get_names_and_nicks(user)\n msg = \"\"\n if names:\n msg += _(\"**Past 20 names**:\")\n msg += \"\\n\"\n msg += \", \".join(names)\n if nicks:\n if msg:\n msg += \"\\n\\n\"\n msg += _(\"**Past 20 nicknames**:\")\n msg += \"\\n\"\n msg += \", \".join(nicks)\n if msg:\n msg = filter_various_mentions(msg)\n await ctx.send(msg)\n else:\n await ctx.send(_(\"That user doesn't have any recorded name or nickname change.\"))\n", "path": "redbot/cogs/mod/names.py"}], "after_files": [{"content": "from datetime import datetime\nfrom typing import cast\n\nimport discord\nfrom redbot.core import commands, i18n, checks\nfrom redbot.core.utils.common_filters import (\n filter_invites,\n filter_various_mentions,\n escape_spoilers_and_mass_mentions,\n)\nfrom redbot.core.utils.mod import get_audit_reason\nfrom .abc import MixinMeta\n\n_ = i18n.Translator(\"Mod\", __file__)\n\n\nclass ModInfo(MixinMeta):\n \"\"\"\n Commands regarding names, userinfo, etc.\n \"\"\"\n\n async def get_names_and_nicks(self, user):\n names = await self.config.user(user).past_names()\n nicks = await self.config.member(user).past_nicks()\n if names:\n names = [escape_spoilers_and_mass_mentions(name) for name in names if name]\n if nicks:\n nicks = [escape_spoilers_and_mass_mentions(nick) for nick in nicks if nick]\n return names, nicks\n\n @commands.command()\n @commands.guild_only()\n @commands.bot_has_permissions(manage_nicknames=True)\n @checks.admin_or_permissions(manage_nicknames=True)\n async def rename(self, ctx: commands.Context, user: discord.Member, *, nickname: str = \"\"):\n \"\"\"Change a user's nickname.\n\n Leaving the nickname empty will remove it.\n \"\"\"\n nickname = nickname.strip()\n me = cast(discord.Member, ctx.me)\n if not nickname:\n nickname = None\n elif not 2 <= len(nickname) <= 32:\n await ctx.send(_(\"Nicknames must be between 2 and 32 characters long.\"))\n return\n if not (\n (me.guild_permissions.manage_nicknames or me.guild_permissions.administrator)\n and me.top_role > user.top_role\n and user != ctx.guild.owner\n ):\n await ctx.send(\n _(\n \"I do not have permission to rename that member. 
They may be higher than or \"\n \"equal to me in the role hierarchy.\"\n )\n )\n else:\n try:\n await user.edit(reason=get_audit_reason(ctx.author, None), nick=nickname)\n except discord.Forbidden:\n # Just in case we missed something in the permissions check above\n await ctx.send(_(\"I do not have permission to rename that member.\"))\n except discord.HTTPException as exc:\n if exc.status == 400: # BAD REQUEST\n await ctx.send(_(\"That nickname is invalid.\"))\n else:\n await ctx.send(_(\"An unexpected error has occured.\"))\n else:\n await ctx.send(_(\"Done.\"))\n\n def handle_custom(self, user):\n a = [c for c in user.activities if c.type == discord.ActivityType.custom]\n if not a:\n return None, discord.ActivityType.custom\n a = a[0]\n c_status = None\n if not a.name and not a.emoji:\n return None, discord.ActivityType.custom\n elif a.name and a.emoji:\n c_status = _(\"Custom: {emoji} {name}\").format(emoji=a.emoji, name=a.name)\n elif a.emoji:\n c_status = _(\"Custom: {emoji}\").format(emoji=a.emoji)\n elif a.name:\n c_status = _(\"Custom: {name}\").format(name=a.name)\n return c_status, discord.ActivityType.custom\n\n def handle_playing(self, user):\n p_acts = [c for c in user.activities if c.type == discord.ActivityType.playing]\n if not p_acts:\n return None, discord.ActivityType.playing\n p_act = p_acts[0]\n act = _(\"Playing: {name}\").format(name=p_act.name)\n return act, discord.ActivityType.playing\n\n def handle_streaming(self, user):\n s_acts = [c for c in user.activities if c.type == discord.ActivityType.streaming]\n if not s_acts:\n return None, discord.ActivityType.streaming\n s_act = s_acts[0]\n if isinstance(s_act, discord.Streaming):\n act = _(\"Streaming: [{name}{sep}{game}]({url})\").format(\n name=discord.utils.escape_markdown(s_act.name),\n sep=\" | \" if s_act.game else \"\",\n game=discord.utils.escape_markdown(s_act.game) if s_act.game else \"\",\n url=s_act.url,\n )\n else:\n act = _(\"Streaming: {name}\").format(name=s_act.name)\n return act, discord.ActivityType.streaming\n\n def handle_listening(self, user):\n l_acts = [c for c in user.activities if c.type == discord.ActivityType.listening]\n if not l_acts:\n return None, discord.ActivityType.listening\n l_act = l_acts[0]\n if isinstance(l_act, discord.Spotify):\n act = _(\"Listening: [{title}{sep}{artist}]({url})\").format(\n title=discord.utils.escape_markdown(l_act.title),\n sep=\" | \" if l_act.artist else \"\",\n artist=discord.utils.escape_markdown(l_act.artist) if l_act.artist else \"\",\n url=f\"https://open.spotify.com/track/{l_act.track_id}\",\n )\n else:\n act = _(\"Listening: {title}\").format(title=l_act.name)\n return act, discord.ActivityType.listening\n\n def handle_watching(self, user):\n w_acts = [c for c in user.activities if c.type == discord.ActivityType.watching]\n if not w_acts:\n return None, discord.ActivityType.watching\n w_act = w_acts[0]\n act = _(\"Watching: {name}\").format(name=w_act.name)\n return act, discord.ActivityType.watching\n\n def get_status_string(self, user):\n string = \"\"\n for a in [\n self.handle_custom(user),\n self.handle_playing(user),\n self.handle_listening(user),\n self.handle_streaming(user),\n self.handle_watching(user),\n ]:\n status_string, status_type = a\n if status_string is None:\n continue\n string += f\"{status_string}\\n\"\n return string\n\n @commands.command()\n @commands.guild_only()\n @commands.bot_has_permissions(embed_links=True)\n async def userinfo(self, ctx, *, user: discord.Member = None):\n \"\"\"Show information about a user.\n\n 
This includes fields for status, discord join date, server\n join date, voice state and previous names/nicknames.\n\n If the user has no roles, previous names or previous nicknames,\n these fields will be omitted.\n \"\"\"\n author = ctx.author\n guild = ctx.guild\n\n if not user:\n user = author\n\n # A special case for a special someone :^)\n special_date = datetime(2016, 1, 10, 6, 8, 4, 443000)\n is_special = user.id == 96130341705637888 and guild.id == 133049272517001216\n\n roles = user.roles[-1:0:-1]\n names, nicks = await self.get_names_and_nicks(user)\n\n joined_at = user.joined_at if not is_special else special_date\n since_created = (ctx.message.created_at - user.created_at).days\n if joined_at is not None:\n since_joined = (ctx.message.created_at - joined_at).days\n user_joined = joined_at.strftime(\"%d %b %Y %H:%M\")\n else:\n since_joined = \"?\"\n user_joined = _(\"Unknown\")\n user_created = user.created_at.strftime(\"%d %b %Y %H:%M\")\n voice_state = user.voice\n member_number = (\n sorted(guild.members, key=lambda m: m.joined_at or ctx.message.created_at).index(user)\n + 1\n )\n\n created_on = _(\"{}\\n({} days ago)\").format(user_created, since_created)\n joined_on = _(\"{}\\n({} days ago)\").format(user_joined, since_joined)\n\n if any(a.type is discord.ActivityType.streaming for a in user.activities):\n statusemoji = \"\\N{LARGE PURPLE CIRCLE}\"\n elif user.status.name == \"online\":\n statusemoji = \"\\N{LARGE GREEN CIRCLE}\"\n elif user.status.name == \"offline\":\n statusemoji = \"\\N{MEDIUM WHITE CIRCLE}\\N{VARIATION SELECTOR-16}\"\n elif user.status.name == \"dnd\":\n statusemoji = \"\\N{LARGE RED CIRCLE}\"\n elif user.status.name == \"idle\":\n statusemoji = \"\\N{LARGE ORANGE CIRCLE}\"\n activity = _(\"Chilling in {} status\").format(user.status)\n status_string = self.get_status_string(user)\n\n if roles:\n\n role_str = \", \".join([x.mention for x in roles])\n # 400 BAD REQUEST (error code: 50035): Invalid Form Body\n # In embed.fields.2.value: Must be 1024 or fewer in length.\n if len(role_str) > 1024:\n # Alternative string building time.\n # This is not the most optimal, but if you're hitting this, you are losing more time\n # to every single check running on users than the occasional user info invoke\n # We don't start by building this way, since the number of times we hit this should be\n # infintesimally small compared to when we don't across all uses of Red.\n continuation_string = _(\n \"and {numeric_number} more roles not displayed due to embed limits.\"\n )\n available_length = 1024 - len(continuation_string) # do not attempt to tweak, i18n\n\n role_chunks = []\n remaining_roles = 0\n\n for r in roles:\n chunk = f\"{r.mention}, \"\n chunk_size = len(chunk)\n\n if chunk_size < available_length:\n available_length -= chunk_size\n role_chunks.append(chunk)\n else:\n remaining_roles += 1\n\n role_chunks.append(continuation_string.format(numeric_number=remaining_roles))\n\n role_str = \"\".join(role_chunks)\n\n else:\n role_str = None\n\n data = discord.Embed(description=status_string or activity, colour=user.colour)\n\n data.add_field(name=_(\"Joined Discord on\"), value=created_on)\n data.add_field(name=_(\"Joined this server on\"), value=joined_on)\n if role_str is not None:\n data.add_field(\n name=_(\"Roles\") if len(roles) > 1 else _(\"Role\"), value=role_str, inline=False\n )\n if names:\n # May need sanitizing later, but mentions do not ping in embeds currently\n val = filter_invites(\", \".join(names))\n data.add_field(\n name=_(\"Previous 
Names\") if len(names) > 1 else _(\"Previous Name\"),\n value=val,\n inline=False,\n )\n if nicks:\n # May need sanitizing later, but mentions do not ping in embeds currently\n val = filter_invites(\", \".join(nicks))\n data.add_field(\n name=_(\"Previous Nicknames\") if len(nicks) > 1 else _(\"Previous Nickname\"),\n value=val,\n inline=False,\n )\n if voice_state and voice_state.channel:\n data.add_field(\n name=_(\"Current voice channel\"),\n value=\"{0.mention} ID: {0.id}\".format(voice_state.channel),\n inline=False,\n )\n data.set_footer(text=_(\"Member #{} | User ID: {}\").format(member_number, user.id))\n\n name = str(user)\n name = \" ~ \".join((name, user.nick)) if user.nick else name\n name = filter_invites(name)\n\n avatar = user.avatar_url_as(static_format=\"png\")\n data.set_author(name=f\"{statusemoji} {name}\", url=avatar)\n data.set_thumbnail(url=avatar)\n\n await ctx.send(embed=data)\n\n @commands.command()\n async def names(self, ctx: commands.Context, *, user: discord.Member):\n \"\"\"Show previous names and nicknames of a user.\"\"\"\n names, nicks = await self.get_names_and_nicks(user)\n msg = \"\"\n if names:\n msg += _(\"**Past 20 names**:\")\n msg += \"\\n\"\n msg += \", \".join(names)\n if nicks:\n if msg:\n msg += \"\\n\\n\"\n msg += _(\"**Past 20 nicknames**:\")\n msg += \"\\n\"\n msg += \", \".join(nicks)\n if msg:\n msg = filter_various_mentions(msg)\n await ctx.send(msg)\n else:\n await ctx.send(_(\"That user doesn't have any recorded name or nickname change.\"))\n", "path": "redbot/cogs/mod/names.py"}]} | 3,888 | 352 |
gh_patches_debug_17610 | rasdani/github-patches | git_diff | apache__tvm-6236 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix conv2d grad for strided cases under the new conv2d_transpose def
https://ci.tvm.ai/blue/organizations/jenkins/tvm/detail/PR-6173/3/pipeline
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/tvm/topi/cuda/conv2d_transpose_nchw.py`
Content:
```
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 # pylint: disable=invalid-name
18 """Conv2d transpose template for cuda backend"""
19
20 import tvm
21 from tvm import te
22 from tvm import autotvm
23 from tvm.autotvm.task.space import SplitEntity, OtherOptionEntity
24 from .. import nn
25 from ..util import get_const_tuple, traverse_inline
26
27
28
29 @autotvm.register_topi_compute("conv2d_transpose_nchw.cuda")
30 def conv2d_transpose_nchw(cfg, data, kernel, stride, padding, out_dtype,
31 output_padding):
32 """Transposed 2D convolution nchw forward operator.
33
34 Parameters
35 ----------
36 cfg: ConfigEntity
37 The config for this template
38 Input : tvm.te.Tensor
39 4-D with shape [batch, in_channel, in_height, in_width]
40 Filter : tvm.te.Tensor
41 4-D with shape [in_channel, num_filter, filter_height, filter_width]
42 strides : tuple of two ints
43 The spatial stride along height and width
44 padding : int or str
45 Padding size, or ['VALID', 'SAME']
46 out_dtype: str
47 The output type. This is used in mixed precision
48 output_padding : tuple of two ints
49 Used to disambiguate output shape.
50
51 Returns
52 -------
53 Output : tvm.te.Tensor
54 4-D with shape [batch, out_channel, out_height, out_width]
55 """
56 batch, inp_channels, inp_height, inp_width = get_const_tuple(data.shape)
57 _, out_channels, kernel_height, kernel_width = get_const_tuple(kernel.shape)
58 stride_height, stride_width = stride
59 outpad_height, outpad_width = output_padding
60 assert outpad_height < stride_height and outpad_width < stride_width
61 cfg.stride = stride
62 pad_top, pad_left, pad_bottom, pad_right = nn.get_pad_tuple(
63 padding, (kernel_height, kernel_width))
64
65 out_width = (inp_width - 1) * stride_width + \
66 kernel_width - pad_left - pad_right + outpad_width
67 pad_left = kernel_width - 1 - pad_left
68 pad_right = kernel_width - 1 - pad_right
69 dilated_width = stride_width * (inp_width - 1) + 1
70
71 out_height = (inp_height - 1) * stride_height + \
72 kernel_height - pad_top - pad_bottom + outpad_height
73 pad_top = kernel_height - 1 - pad_top
74 pad_bottom = kernel_height - 1 - pad_bottom
75 dilated_height = stride_height * (inp_height - 1) + 1
76
77 # compute pad
78 data = te.compute(
79 (batch, inp_channels,
80 pad_top + dilated_height + pad_bottom,
81 pad_left + dilated_width + pad_right),
82 lambda n, c, y, x: tvm.tir.if_then_else(
83 tvm.tir.all(x >= pad_left,
84 x < pad_left + dilated_width,
85 tvm.tir.indexmod(x - pad_left, stride_width).equal(0),
86 y >= pad_top,
87 y < pad_top + dilated_height,
88 tvm.tir.indexmod(y - pad_top, stride_height).equal(0)),
89 data[n, c,
90 tvm.tir.indexdiv(y - pad_top, stride_height),
91 tvm.tir.indexdiv(x - pad_left, stride_width)],
92 tvm.tir.const(0., "float32")),
93 name='data_pad')
94
95 # compute transposed conv
96 dc = te.reduce_axis((0, inp_channels), name='dc')
97 dh = te.reduce_axis((0, kernel_height), name='dh')
98 dw = te.reduce_axis((0, kernel_width), name='dw')
99 data_out = te.compute(
100 (batch, out_channels, out_height, out_width),
101 lambda b, c, h, w: te.sum(
102 data[b, dc, h + dh, w + dw].astype(out_dtype) *
103 kernel[dc,
104 c,
105 kernel_height - 1 - dh,
106 kernel_width - 1 - dw].astype(out_dtype),
107 axis=[dc, dh, dw]), tag="conv2d_transpose_nchw")
108
109 return data_out
110
111 @autotvm.register_topi_schedule("conv2d_transpose_nchw.cuda")
112 def schedule_conv2d_transpose_nchw(cfg, outs):
113 """TOPI Schedule callback for conv2d transpose operator.
114
115 Parameters
116 ----------
117 cfg: ConfigEntity
118 The parameters for this template
119
120 outs: Array of Tensor
121 The computation graph description of conv2d transpose
122 in the format of an array of tensors.
123
124 Returns
125 -------
126 s: Schedule
127 The computation schedule for conv2d transpose.
128 """
129 outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs
130 s = te.create_schedule([x.op for x in outs])
131
132 def _fallback_schedule(N, F, Y, X):
133 # pylint: disable=unused-argument
134 # split N (batch dimension)
135 if N > 1:
136 cfg["tile_n"] = SplitEntity([-1, 1, 1, 4])
137 else:
138 cfg["tile_n"] = SplitEntity([1, 1, 1, 1])
139 # split F (output channel dimension)
140 if F > 1:
141 cfg["tile_f"] = SplitEntity([-1, 1, 64, 1])
142 # split Y (height dimension)
143 y_split_factor = 1
144 for candidate in range(5, 17):
145 if Y % candidate == 0:
146 y_split_factor = candidate
147 break
148 cfg["tile_y"] = SplitEntity([-1, 1, 1, y_split_factor])
149 # split X (width dimension)
150 x_split_factor = 1
151 for candidate in range(5, 17):
152 if X % candidate == 0:
153 x_split_factor = candidate
154 break
155 cfg["tile_x"] = SplitEntity([-1, x_split_factor, 1, 1])
156 # split RC (input channel dimension, which is a reduction axis)
157 cfg["tile_rc"] = SplitEntity([-1, 1, 16])
158 # other configurations
159 cfg["fuse_yx"] = OtherOptionEntity(False)
160 cfg["unroll_explicit"] = OtherOptionEntity(True)
161 cfg["auto_unroll_max_step"] = OtherOptionEntity(1500)
162
163 def _callback(op):
164 if op.tag == 'conv2d_transpose_nchw':
165 pad_data = op.input_tensors[0]
166 kernel = op.input_tensors[1]
167 conv = op.output(0)
168
169 ##### space definition begin #####
170 n, f, y, x = s[conv].op.axis
171 rc = s[conv].op.reduce_axis[0]
172 cfg.define_split("tile_n", cfg.axis(n), num_outputs=4)
173 cfg.define_split("tile_f", cfg.axis(f), num_outputs=4)
174 cfg.define_split("tile_y", cfg.axis(y), num_outputs=4)
175 cfg.define_split("tile_x", cfg.axis(x), num_outputs=4)
176 cfg.define_split("tile_rc", cfg.axis(rc), num_outputs=3)
177 cfg.define_knob("auto_unroll_max_step", [64, 512, 1500])
178
179 target = tvm.target.Target.current()
180 if target.kind.name in ['nvptx', 'rocm']:
181 cfg.define_knob("unroll_explicit", [1])
182 else:
183 cfg.define_knob("unroll_explicit", [0, 1])
184
185 if cfg.is_fallback:
186 N, F, Y, X = get_const_tuple(conv.shape)
187 _fallback_schedule(N, F, Y, X)
188
189 ##### space definition end #####
190
191 if isinstance(kernel.op, tvm.te.ComputeOp) and 'dilate' in kernel.op.tag:
192 s[kernel].compute_inline()
193
194 if conv.op in s.outputs:
195 output = conv
196 OL = s.cache_write(conv, 'local')
197 else:
198 output = s.outputs[0].output(0)
199 s[conv].set_scope('local')
200 OL = conv
201
202 # create cache stage
203 s[pad_data].set_scope('shared')
204 AA = pad_data
205 WW = s.cache_read(kernel, 'shared', [OL])
206
207 # tile and bind spatial axes
208 n, f, y, x = s[output].op.axis
209 kernel_scope, n = s[output].split(n, nparts=1)
210 bn, vn, tn, ni = cfg["tile_n"].apply(s, output, n)
211 bf, vf, tf, fi = cfg["tile_f"].apply(s, output, f)
212 by, vy, ty, yi = cfg["tile_y"].apply(s, output, y)
213 bx, vx, tx, xi = cfg["tile_x"].apply(s, output, x)
214
215 s[output].reorder(bn, bf, by, bx, vn, vf, vy, vx, tn, tf, ty, tx, ni, fi, yi, xi)
216 s[output].bind(bn, te.thread_axis("blockIdx.z"))
217 s[output].bind(bf, te.thread_axis("blockIdx.y"))
218 s[output].bind(s[output].fuse(by, bx), te.thread_axis("blockIdx.x"))
219 s[output].bind(vn, te.thread_axis("vthread"))
220 s[output].bind(vf, te.thread_axis("vthread"))
221 s[output].bind(vy, te.thread_axis("vthread"))
222 s[output].bind(vx, te.thread_axis("vthread"))
223
224 cfg.define_knob("fuse_yx", [0, 1]) # fuse ty,tx or tn,tf
225
226 if cfg["fuse_yx"].val:
227 s[output].bind(tn, te.thread_axis("threadIdx.z"))
228 s[output].bind(tf, te.thread_axis("threadIdx.y"))
229 tyx = s[output].fuse(ty, tx)
230 s[output].bind(s[output].fuse(ty, tx), te.thread_axis("threadIdx.x"))
231 s[OL].compute_at(s[output], tyx)
232
233 # number of threads
234 n_tz = cfg["tile_n"].size[2]
235 n_ty = cfg["tile_f"].size[2]
236 n_tx = cfg["tile_y"].size[2] * cfg["tile_x"].size[2]
237 else:
238 s[output].bind(s[output].fuse(tn, tf), te.thread_axis("threadIdx.z"))
239 s[output].bind(ty, te.thread_axis("threadIdx.y"))
240 s[output].bind(tx, te.thread_axis("threadIdx.x"))
241 s[OL].compute_at(s[output], tx)
242
243 # number of threads
244 n_tz = cfg["tile_n"].size[2] * cfg["tile_f"].size[2]
245 n_ty = cfg["tile_y"].size[2]
246 n_tx = cfg["tile_x"].size[2]
247
248 # tile reduction axes
249 n, f, y, x = s[OL].op.axis
250 rc, ry, rx = s[OL].op.reduce_axis
251 rco, rcm, rci = cfg['tile_rc'].apply(s, OL, rc)
252 s[OL].reorder(rco, rcm, ry, rx, rci, n, f, y, x)
253
254 s[AA].compute_at(s[OL], rx)
255 s[WW].compute_at(s[OL], rx)
256
257 # cooperative fetching
258 for load in [AA, WW]:
259 n, f, y, x = s[load].op.axis
260 fused = s[load].fuse(f, y, x)
261 tz, fused = s[load].split(fused, nparts=n_tz)
262 ty, fused = s[load].split(fused, nparts=n_ty)
263 tx, fused = s[load].split(fused, nparts=n_tx)
264 s[load].bind(tz, te.thread_axis("threadIdx.z"))
265 s[load].bind(ty, te.thread_axis("threadIdx.y"))
266 s[load].bind(tx, te.thread_axis("threadIdx.x"))
267
268 s[output].pragma(kernel_scope, 'auto_unroll_max_step', cfg['auto_unroll_max_step'].val)
269 s[output].pragma(kernel_scope, 'unroll_explicit', cfg['unroll_explicit'].val)
270
271 traverse_inline(s, outs[0].op, _callback)
272
273 return s
274
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/tvm/topi/cuda/conv2d_transpose_nchw.py b/python/tvm/topi/cuda/conv2d_transpose_nchw.py
--- a/python/tvm/topi/cuda/conv2d_transpose_nchw.py
+++ b/python/tvm/topi/cuda/conv2d_transpose_nchw.py
@@ -65,13 +65,13 @@
out_width = (inp_width - 1) * stride_width + \
kernel_width - pad_left - pad_right + outpad_width
pad_left = kernel_width - 1 - pad_left
- pad_right = kernel_width - 1 - pad_right
+ pad_right = kernel_width - 1 - pad_right + outpad_width
dilated_width = stride_width * (inp_width - 1) + 1
out_height = (inp_height - 1) * stride_height + \
kernel_height - pad_top - pad_bottom + outpad_height
pad_top = kernel_height - 1 - pad_top
- pad_bottom = kernel_height - 1 - pad_bottom
+ pad_bottom = kernel_height - 1 - pad_bottom + outpad_height
dilated_height = stride_height * (inp_height - 1) + 1
# compute pad
| {"golden_diff": "diff --git a/python/tvm/topi/cuda/conv2d_transpose_nchw.py b/python/tvm/topi/cuda/conv2d_transpose_nchw.py\n--- a/python/tvm/topi/cuda/conv2d_transpose_nchw.py\n+++ b/python/tvm/topi/cuda/conv2d_transpose_nchw.py\n@@ -65,13 +65,13 @@\n out_width = (inp_width - 1) * stride_width + \\\n kernel_width - pad_left - pad_right + outpad_width\n pad_left = kernel_width - 1 - pad_left\n- pad_right = kernel_width - 1 - pad_right\n+ pad_right = kernel_width - 1 - pad_right + outpad_width\n dilated_width = stride_width * (inp_width - 1) + 1\n \n out_height = (inp_height - 1) * stride_height + \\\n kernel_height - pad_top - pad_bottom + outpad_height\n pad_top = kernel_height - 1 - pad_top\n- pad_bottom = kernel_height - 1 - pad_bottom\n+ pad_bottom = kernel_height - 1 - pad_bottom + outpad_height\n dilated_height = stride_height * (inp_height - 1) + 1\n \n # compute pad\n", "issue": "Fix conv2d grad for strided cases under the new conv2d_transpose def\nhttps://ci.tvm.ai/blue/organizations/jenkins/tvm/detail/PR-6173/3/pipeline\r\n\r\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# pylint: disable=invalid-name\n\"\"\"Conv2d transpose template for cuda backend\"\"\"\n\nimport tvm\nfrom tvm import te\nfrom tvm import autotvm\nfrom tvm.autotvm.task.space import SplitEntity, OtherOptionEntity\nfrom .. import nn\nfrom ..util import get_const_tuple, traverse_inline\n\n\n\[email protected]_topi_compute(\"conv2d_transpose_nchw.cuda\")\ndef conv2d_transpose_nchw(cfg, data, kernel, stride, padding, out_dtype,\n output_padding):\n \"\"\"Transposed 2D convolution nchw forward operator.\n\n Parameters\n ----------\n cfg: ConfigEntity\n The config for this template\n Input : tvm.te.Tensor\n 4-D with shape [batch, in_channel, in_height, in_width]\n Filter : tvm.te.Tensor\n 4-D with shape [in_channel, num_filter, filter_height, filter_width]\n strides : tuple of two ints\n The spatial stride along height and width\n padding : int or str\n Padding size, or ['VALID', 'SAME']\n out_dtype: str\n The output type. 
This is used in mixed precision\n output_padding : tuple of two ints\n Used to disambiguate output shape.\n\n Returns\n -------\n Output : tvm.te.Tensor\n 4-D with shape [batch, out_channel, out_height, out_width]\n \"\"\"\n batch, inp_channels, inp_height, inp_width = get_const_tuple(data.shape)\n _, out_channels, kernel_height, kernel_width = get_const_tuple(kernel.shape)\n stride_height, stride_width = stride\n outpad_height, outpad_width = output_padding\n assert outpad_height < stride_height and outpad_width < stride_width\n cfg.stride = stride\n pad_top, pad_left, pad_bottom, pad_right = nn.get_pad_tuple(\n padding, (kernel_height, kernel_width))\n\n out_width = (inp_width - 1) * stride_width + \\\n kernel_width - pad_left - pad_right + outpad_width\n pad_left = kernel_width - 1 - pad_left\n pad_right = kernel_width - 1 - pad_right\n dilated_width = stride_width * (inp_width - 1) + 1\n\n out_height = (inp_height - 1) * stride_height + \\\n kernel_height - pad_top - pad_bottom + outpad_height\n pad_top = kernel_height - 1 - pad_top\n pad_bottom = kernel_height - 1 - pad_bottom\n dilated_height = stride_height * (inp_height - 1) + 1\n\n # compute pad\n data = te.compute(\n (batch, inp_channels,\n pad_top + dilated_height + pad_bottom,\n pad_left + dilated_width + pad_right),\n lambda n, c, y, x: tvm.tir.if_then_else(\n tvm.tir.all(x >= pad_left,\n x < pad_left + dilated_width,\n tvm.tir.indexmod(x - pad_left, stride_width).equal(0),\n y >= pad_top,\n y < pad_top + dilated_height,\n tvm.tir.indexmod(y - pad_top, stride_height).equal(0)),\n data[n, c,\n tvm.tir.indexdiv(y - pad_top, stride_height),\n tvm.tir.indexdiv(x - pad_left, stride_width)],\n tvm.tir.const(0., \"float32\")),\n name='data_pad')\n\n # compute transposed conv\n dc = te.reduce_axis((0, inp_channels), name='dc')\n dh = te.reduce_axis((0, kernel_height), name='dh')\n dw = te.reduce_axis((0, kernel_width), name='dw')\n data_out = te.compute(\n (batch, out_channels, out_height, out_width),\n lambda b, c, h, w: te.sum(\n data[b, dc, h + dh, w + dw].astype(out_dtype) *\n kernel[dc,\n c,\n kernel_height - 1 - dh,\n kernel_width - 1 - dw].astype(out_dtype),\n axis=[dc, dh, dw]), tag=\"conv2d_transpose_nchw\")\n\n return data_out\n\[email protected]_topi_schedule(\"conv2d_transpose_nchw.cuda\")\ndef schedule_conv2d_transpose_nchw(cfg, outs):\n \"\"\"TOPI Schedule callback for conv2d transpose operator.\n\n Parameters\n ----------\n cfg: ConfigEntity\n The parameters for this template\n\n outs: Array of Tensor\n The computation graph description of conv2d transpose\n in the format of an array of tensors.\n\n Returns\n -------\n s: Schedule\n The computation schedule for conv2d transpose.\n \"\"\"\n outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs\n s = te.create_schedule([x.op for x in outs])\n\n def _fallback_schedule(N, F, Y, X):\n # pylint: disable=unused-argument\n # split N (batch dimension)\n if N > 1:\n cfg[\"tile_n\"] = SplitEntity([-1, 1, 1, 4])\n else:\n cfg[\"tile_n\"] = SplitEntity([1, 1, 1, 1])\n # split F (output channel dimension)\n if F > 1:\n cfg[\"tile_f\"] = SplitEntity([-1, 1, 64, 1])\n # split Y (height dimension)\n y_split_factor = 1\n for candidate in range(5, 17):\n if Y % candidate == 0:\n y_split_factor = candidate\n break\n cfg[\"tile_y\"] = SplitEntity([-1, 1, 1, y_split_factor])\n # split X (width dimension)\n x_split_factor = 1\n for candidate in range(5, 17):\n if X % candidate == 0:\n x_split_factor = candidate\n break\n cfg[\"tile_x\"] = SplitEntity([-1, x_split_factor, 1, 1])\n # 
split RC (input channel dimension, which is a reduction axis)\n cfg[\"tile_rc\"] = SplitEntity([-1, 1, 16])\n # other configurations\n cfg[\"fuse_yx\"] = OtherOptionEntity(False)\n cfg[\"unroll_explicit\"] = OtherOptionEntity(True)\n cfg[\"auto_unroll_max_step\"] = OtherOptionEntity(1500)\n\n def _callback(op):\n if op.tag == 'conv2d_transpose_nchw':\n pad_data = op.input_tensors[0]\n kernel = op.input_tensors[1]\n conv = op.output(0)\n\n ##### space definition begin #####\n n, f, y, x = s[conv].op.axis\n rc = s[conv].op.reduce_axis[0]\n cfg.define_split(\"tile_n\", cfg.axis(n), num_outputs=4)\n cfg.define_split(\"tile_f\", cfg.axis(f), num_outputs=4)\n cfg.define_split(\"tile_y\", cfg.axis(y), num_outputs=4)\n cfg.define_split(\"tile_x\", cfg.axis(x), num_outputs=4)\n cfg.define_split(\"tile_rc\", cfg.axis(rc), num_outputs=3)\n cfg.define_knob(\"auto_unroll_max_step\", [64, 512, 1500])\n\n target = tvm.target.Target.current()\n if target.kind.name in ['nvptx', 'rocm']:\n cfg.define_knob(\"unroll_explicit\", [1])\n else:\n cfg.define_knob(\"unroll_explicit\", [0, 1])\n\n if cfg.is_fallback:\n N, F, Y, X = get_const_tuple(conv.shape)\n _fallback_schedule(N, F, Y, X)\n\n ##### space definition end #####\n\n if isinstance(kernel.op, tvm.te.ComputeOp) and 'dilate' in kernel.op.tag:\n s[kernel].compute_inline()\n\n if conv.op in s.outputs:\n output = conv\n OL = s.cache_write(conv, 'local')\n else:\n output = s.outputs[0].output(0)\n s[conv].set_scope('local')\n OL = conv\n\n # create cache stage\n s[pad_data].set_scope('shared')\n AA = pad_data\n WW = s.cache_read(kernel, 'shared', [OL])\n\n # tile and bind spatial axes\n n, f, y, x = s[output].op.axis\n kernel_scope, n = s[output].split(n, nparts=1)\n bn, vn, tn, ni = cfg[\"tile_n\"].apply(s, output, n)\n bf, vf, tf, fi = cfg[\"tile_f\"].apply(s, output, f)\n by, vy, ty, yi = cfg[\"tile_y\"].apply(s, output, y)\n bx, vx, tx, xi = cfg[\"tile_x\"].apply(s, output, x)\n\n s[output].reorder(bn, bf, by, bx, vn, vf, vy, vx, tn, tf, ty, tx, ni, fi, yi, xi)\n s[output].bind(bn, te.thread_axis(\"blockIdx.z\"))\n s[output].bind(bf, te.thread_axis(\"blockIdx.y\"))\n s[output].bind(s[output].fuse(by, bx), te.thread_axis(\"blockIdx.x\"))\n s[output].bind(vn, te.thread_axis(\"vthread\"))\n s[output].bind(vf, te.thread_axis(\"vthread\"))\n s[output].bind(vy, te.thread_axis(\"vthread\"))\n s[output].bind(vx, te.thread_axis(\"vthread\"))\n\n cfg.define_knob(\"fuse_yx\", [0, 1]) # fuse ty,tx or tn,tf\n\n if cfg[\"fuse_yx\"].val:\n s[output].bind(tn, te.thread_axis(\"threadIdx.z\"))\n s[output].bind(tf, te.thread_axis(\"threadIdx.y\"))\n tyx = s[output].fuse(ty, tx)\n s[output].bind(s[output].fuse(ty, tx), te.thread_axis(\"threadIdx.x\"))\n s[OL].compute_at(s[output], tyx)\n\n # number of threads\n n_tz = cfg[\"tile_n\"].size[2]\n n_ty = cfg[\"tile_f\"].size[2]\n n_tx = cfg[\"tile_y\"].size[2] * cfg[\"tile_x\"].size[2]\n else:\n s[output].bind(s[output].fuse(tn, tf), te.thread_axis(\"threadIdx.z\"))\n s[output].bind(ty, te.thread_axis(\"threadIdx.y\"))\n s[output].bind(tx, te.thread_axis(\"threadIdx.x\"))\n s[OL].compute_at(s[output], tx)\n\n # number of threads\n n_tz = cfg[\"tile_n\"].size[2] * cfg[\"tile_f\"].size[2]\n n_ty = cfg[\"tile_y\"].size[2]\n n_tx = cfg[\"tile_x\"].size[2]\n\n # tile reduction axes\n n, f, y, x = s[OL].op.axis\n rc, ry, rx = s[OL].op.reduce_axis\n rco, rcm, rci = cfg['tile_rc'].apply(s, OL, rc)\n s[OL].reorder(rco, rcm, ry, rx, rci, n, f, y, x)\n\n s[AA].compute_at(s[OL], rx)\n s[WW].compute_at(s[OL], rx)\n\n # cooperative 
fetching\n for load in [AA, WW]:\n n, f, y, x = s[load].op.axis\n fused = s[load].fuse(f, y, x)\n tz, fused = s[load].split(fused, nparts=n_tz)\n ty, fused = s[load].split(fused, nparts=n_ty)\n tx, fused = s[load].split(fused, nparts=n_tx)\n s[load].bind(tz, te.thread_axis(\"threadIdx.z\"))\n s[load].bind(ty, te.thread_axis(\"threadIdx.y\"))\n s[load].bind(tx, te.thread_axis(\"threadIdx.x\"))\n\n s[output].pragma(kernel_scope, 'auto_unroll_max_step', cfg['auto_unroll_max_step'].val)\n s[output].pragma(kernel_scope, 'unroll_explicit', cfg['unroll_explicit'].val)\n\n traverse_inline(s, outs[0].op, _callback)\n\n return s\n", "path": "python/tvm/topi/cuda/conv2d_transpose_nchw.py"}], "after_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# pylint: disable=invalid-name\n\"\"\"Conv2d transpose template for cuda backend\"\"\"\n\nimport tvm\nfrom tvm import te\nfrom tvm import autotvm\nfrom tvm.autotvm.task.space import SplitEntity, OtherOptionEntity\nfrom .. import nn\nfrom ..util import get_const_tuple, traverse_inline\n\n\n\[email protected]_topi_compute(\"conv2d_transpose_nchw.cuda\")\ndef conv2d_transpose_nchw(cfg, data, kernel, stride, padding, out_dtype,\n output_padding):\n \"\"\"Transposed 2D convolution nchw forward operator.\n\n Parameters\n ----------\n cfg: ConfigEntity\n The config for this template\n Input : tvm.te.Tensor\n 4-D with shape [batch, in_channel, in_height, in_width]\n Filter : tvm.te.Tensor\n 4-D with shape [in_channel, num_filter, filter_height, filter_width]\n strides : tuple of two ints\n The spatial stride along height and width\n padding : int or str\n Padding size, or ['VALID', 'SAME']\n out_dtype: str\n The output type. 
This is used in mixed precision\n output_padding : tuple of two ints\n Used to disambiguate output shape.\n\n Returns\n -------\n Output : tvm.te.Tensor\n 4-D with shape [batch, out_channel, out_height, out_width]\n \"\"\"\n batch, inp_channels, inp_height, inp_width = get_const_tuple(data.shape)\n _, out_channels, kernel_height, kernel_width = get_const_tuple(kernel.shape)\n stride_height, stride_width = stride\n outpad_height, outpad_width = output_padding\n assert outpad_height < stride_height and outpad_width < stride_width\n cfg.stride = stride\n pad_top, pad_left, pad_bottom, pad_right = nn.get_pad_tuple(\n padding, (kernel_height, kernel_width))\n\n out_width = (inp_width - 1) * stride_width + \\\n kernel_width - pad_left - pad_right + outpad_width\n pad_left = kernel_width - 1 - pad_left\n pad_right = kernel_width - 1 - pad_right + outpad_width\n dilated_width = stride_width * (inp_width - 1) + 1\n\n out_height = (inp_height - 1) * stride_height + \\\n kernel_height - pad_top - pad_bottom + outpad_height\n pad_top = kernel_height - 1 - pad_top\n pad_bottom = kernel_height - 1 - pad_bottom + outpad_height\n dilated_height = stride_height * (inp_height - 1) + 1\n\n # compute pad\n data = te.compute(\n (batch, inp_channels,\n pad_top + dilated_height + pad_bottom,\n pad_left + dilated_width + pad_right),\n lambda n, c, y, x: tvm.tir.if_then_else(\n tvm.tir.all(x >= pad_left,\n x < pad_left + dilated_width,\n tvm.tir.indexmod(x - pad_left, stride_width).equal(0),\n y >= pad_top,\n y < pad_top + dilated_height,\n tvm.tir.indexmod(y - pad_top, stride_height).equal(0)),\n data[n, c,\n tvm.tir.indexdiv(y - pad_top, stride_height),\n tvm.tir.indexdiv(x - pad_left, stride_width)],\n tvm.tir.const(0., \"float32\")),\n name='data_pad')\n\n # compute transposed conv\n dc = te.reduce_axis((0, inp_channels), name='dc')\n dh = te.reduce_axis((0, kernel_height), name='dh')\n dw = te.reduce_axis((0, kernel_width), name='dw')\n data_out = te.compute(\n (batch, out_channels, out_height, out_width),\n lambda b, c, h, w: te.sum(\n data[b, dc, h + dh, w + dw].astype(out_dtype) *\n kernel[dc,\n c,\n kernel_height - 1 - dh,\n kernel_width - 1 - dw].astype(out_dtype),\n axis=[dc, dh, dw]), tag=\"conv2d_transpose_nchw\")\n\n return data_out\n\[email protected]_topi_schedule(\"conv2d_transpose_nchw.cuda\")\ndef schedule_conv2d_transpose_nchw(cfg, outs):\n \"\"\"TOPI Schedule callback for conv2d transpose operator.\n\n Parameters\n ----------\n cfg: ConfigEntity\n The parameters for this template\n\n outs: Array of Tensor\n The computation graph description of conv2d transpose\n in the format of an array of tensors.\n\n Returns\n -------\n s: Schedule\n The computation schedule for conv2d transpose.\n \"\"\"\n outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs\n s = te.create_schedule([x.op for x in outs])\n\n def _fallback_schedule(N, F, Y, X):\n # pylint: disable=unused-argument\n # split N (batch dimension)\n if N > 1:\n cfg[\"tile_n\"] = SplitEntity([-1, 1, 1, 4])\n else:\n cfg[\"tile_n\"] = SplitEntity([1, 1, 1, 1])\n # split F (output channel dimension)\n if F > 1:\n cfg[\"tile_f\"] = SplitEntity([-1, 1, 64, 1])\n # split Y (height dimension)\n y_split_factor = 1\n for candidate in range(5, 17):\n if Y % candidate == 0:\n y_split_factor = candidate\n break\n cfg[\"tile_y\"] = SplitEntity([-1, 1, 1, y_split_factor])\n # split X (width dimension)\n x_split_factor = 1\n for candidate in range(5, 17):\n if X % candidate == 0:\n x_split_factor = candidate\n break\n cfg[\"tile_x\"] = 
SplitEntity([-1, x_split_factor, 1, 1])\n # split RC (input channel dimension, which is a reduction axis)\n cfg[\"tile_rc\"] = SplitEntity([-1, 1, 16])\n # other configurations\n cfg[\"fuse_yx\"] = OtherOptionEntity(False)\n cfg[\"unroll_explicit\"] = OtherOptionEntity(True)\n cfg[\"auto_unroll_max_step\"] = OtherOptionEntity(1500)\n\n def _callback(op):\n if op.tag == 'conv2d_transpose_nchw':\n pad_data = op.input_tensors[0]\n kernel = op.input_tensors[1]\n conv = op.output(0)\n\n ##### space definition begin #####\n n, f, y, x = s[conv].op.axis\n rc = s[conv].op.reduce_axis[0]\n cfg.define_split(\"tile_n\", cfg.axis(n), num_outputs=4)\n cfg.define_split(\"tile_f\", cfg.axis(f), num_outputs=4)\n cfg.define_split(\"tile_y\", cfg.axis(y), num_outputs=4)\n cfg.define_split(\"tile_x\", cfg.axis(x), num_outputs=4)\n cfg.define_split(\"tile_rc\", cfg.axis(rc), num_outputs=3)\n cfg.define_knob(\"auto_unroll_max_step\", [64, 512, 1500])\n\n target = tvm.target.Target.current()\n if target.kind.name in ['nvptx', 'rocm']:\n cfg.define_knob(\"unroll_explicit\", [1])\n else:\n cfg.define_knob(\"unroll_explicit\", [0, 1])\n\n if cfg.is_fallback:\n N, F, Y, X = get_const_tuple(conv.shape)\n _fallback_schedule(N, F, Y, X)\n\n ##### space definition end #####\n\n if isinstance(kernel.op, tvm.te.ComputeOp) and 'dilate' in kernel.op.tag:\n s[kernel].compute_inline()\n\n if conv.op in s.outputs:\n output = conv\n OL = s.cache_write(conv, 'local')\n else:\n output = s.outputs[0].output(0)\n s[conv].set_scope('local')\n OL = conv\n\n # create cache stage\n s[pad_data].set_scope('shared')\n AA = pad_data\n WW = s.cache_read(kernel, 'shared', [OL])\n\n # tile and bind spatial axes\n n, f, y, x = s[output].op.axis\n kernel_scope, n = s[output].split(n, nparts=1)\n bn, vn, tn, ni = cfg[\"tile_n\"].apply(s, output, n)\n bf, vf, tf, fi = cfg[\"tile_f\"].apply(s, output, f)\n by, vy, ty, yi = cfg[\"tile_y\"].apply(s, output, y)\n bx, vx, tx, xi = cfg[\"tile_x\"].apply(s, output, x)\n\n s[output].reorder(bn, bf, by, bx, vn, vf, vy, vx, tn, tf, ty, tx, ni, fi, yi, xi)\n s[output].bind(bn, te.thread_axis(\"blockIdx.z\"))\n s[output].bind(bf, te.thread_axis(\"blockIdx.y\"))\n s[output].bind(s[output].fuse(by, bx), te.thread_axis(\"blockIdx.x\"))\n s[output].bind(vn, te.thread_axis(\"vthread\"))\n s[output].bind(vf, te.thread_axis(\"vthread\"))\n s[output].bind(vy, te.thread_axis(\"vthread\"))\n s[output].bind(vx, te.thread_axis(\"vthread\"))\n\n cfg.define_knob(\"fuse_yx\", [0, 1]) # fuse ty,tx or tn,tf\n\n if cfg[\"fuse_yx\"].val:\n s[output].bind(tn, te.thread_axis(\"threadIdx.z\"))\n s[output].bind(tf, te.thread_axis(\"threadIdx.y\"))\n tyx = s[output].fuse(ty, tx)\n s[output].bind(s[output].fuse(ty, tx), te.thread_axis(\"threadIdx.x\"))\n s[OL].compute_at(s[output], tyx)\n\n # number of threads\n n_tz = cfg[\"tile_n\"].size[2]\n n_ty = cfg[\"tile_f\"].size[2]\n n_tx = cfg[\"tile_y\"].size[2] * cfg[\"tile_x\"].size[2]\n else:\n s[output].bind(s[output].fuse(tn, tf), te.thread_axis(\"threadIdx.z\"))\n s[output].bind(ty, te.thread_axis(\"threadIdx.y\"))\n s[output].bind(tx, te.thread_axis(\"threadIdx.x\"))\n s[OL].compute_at(s[output], tx)\n\n # number of threads\n n_tz = cfg[\"tile_n\"].size[2] * cfg[\"tile_f\"].size[2]\n n_ty = cfg[\"tile_y\"].size[2]\n n_tx = cfg[\"tile_x\"].size[2]\n\n # tile reduction axes\n n, f, y, x = s[OL].op.axis\n rc, ry, rx = s[OL].op.reduce_axis\n rco, rcm, rci = cfg['tile_rc'].apply(s, OL, rc)\n s[OL].reorder(rco, rcm, ry, rx, rci, n, f, y, x)\n\n s[AA].compute_at(s[OL], rx)\n 
s[WW].compute_at(s[OL], rx)\n\n # cooperative fetching\n for load in [AA, WW]:\n n, f, y, x = s[load].op.axis\n fused = s[load].fuse(f, y, x)\n tz, fused = s[load].split(fused, nparts=n_tz)\n ty, fused = s[load].split(fused, nparts=n_ty)\n tx, fused = s[load].split(fused, nparts=n_tx)\n s[load].bind(tz, te.thread_axis(\"threadIdx.z\"))\n s[load].bind(ty, te.thread_axis(\"threadIdx.y\"))\n s[load].bind(tx, te.thread_axis(\"threadIdx.x\"))\n\n s[output].pragma(kernel_scope, 'auto_unroll_max_step', cfg['auto_unroll_max_step'].val)\n s[output].pragma(kernel_scope, 'unroll_explicit', cfg['unroll_explicit'].val)\n\n traverse_inline(s, outs[0].op, _callback)\n\n return s\n", "path": "python/tvm/topi/cuda/conv2d_transpose_nchw.py"}]} | 3,973 | 283 |
gh_patches_debug_10387 | rasdani/github-patches | git_diff | WordPress__openverse-api-727 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Possibly make `thumbnail` null for audio files without artwork
## Description
<!-- Concisely describe the bug. -->
Currently the frontend tries to fetch thumbnails for all audio files regardless of whether the audio file in question has one or not.
I noticed that the API returns the thumbnail URL for all tracks. That makes sense, but could we improve this to be `null` for audio tracks without artwork? Then we could check the field in the frontend before making a network request.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api/catalog/api/serializers/audio_serializers.py`
Content:
```
1 from rest_framework import serializers
2
3 from elasticsearch_dsl.response import Hit
4
5 from catalog.api.constants.field_order import field_position_map
6 from catalog.api.constants.field_values import AUDIO_CATEGORIES, LENGTHS
7 from catalog.api.docs.media_docs import fields_to_md
8 from catalog.api.models import Audio, AudioReport, AudioSet
9 from catalog.api.serializers.fields import (
10 EnumCharField,
11 SchemableHyperlinkedIdentityField,
12 )
13 from catalog.api.serializers.media_serializers import (
14 MediaReportRequestSerializer,
15 MediaSearchRequestSerializer,
16 MediaSearchSerializer,
17 MediaSerializer,
18 get_hyperlinks_serializer,
19 get_search_request_source_serializer,
20 )
21
22
23 #######################
24 # Request serializers #
25 #######################
26
27
28 AudioSearchRequestSourceSerializer = get_search_request_source_serializer("audio")
29
30
31 class AudioSearchRequestSerializer(
32 AudioSearchRequestSourceSerializer,
33 MediaSearchRequestSerializer,
34 ):
35 """Parse and validate search query string parameters."""
36
37 fields_names = [
38 *MediaSearchRequestSerializer.fields_names,
39 *AudioSearchRequestSourceSerializer.field_names,
40 "category",
41 "length",
42 ]
43 """
44 Keep the fields names in sync with the actual fields below as this list is
45 used to generate Swagger documentation.
46 """
47
48 category = EnumCharField(
49 plural="categories",
50 enum_class=AUDIO_CATEGORIES,
51 required=False,
52 )
53 length = EnumCharField(
54 plural="lengths",
55 enum_class=LENGTHS,
56 required=False,
57 )
58
59
60 class AudioReportRequestSerializer(MediaReportRequestSerializer):
61 class Meta(MediaReportRequestSerializer.Meta):
62 model = AudioReport
63
64
65 ########################
66 # Response serializers #
67 ########################
68
69
70 class AudioSetSerializer(serializers.ModelSerializer):
71 """An audio set, rendered as a part of the ``AudioSerializer`` output."""
72
73 class Meta:
74 model = AudioSet
75 fields = [
76 "title",
77 "foreign_landing_url",
78 "creator",
79 "creator_url",
80 "url",
81 "filesize",
82 "filetype",
83 ]
84
85
86 AudioHyperlinksSerializer = get_hyperlinks_serializer("audio")
87
88
89 class AudioSerializer(AudioHyperlinksSerializer, MediaSerializer):
90 """A single audio file. Used in search results."""
91
92 class Meta:
93 model = Audio
94 fields = sorted( # keep this list ordered logically
95 [
96 *MediaSerializer.Meta.fields,
97 *AudioHyperlinksSerializer.field_names,
98 "genres",
99 "alt_files",
100 "audio_set",
101 "duration",
102 "bit_rate",
103 "sample_rate",
104 "waveform", # hyperlink to the endpoint that generates the waveform
105 "peaks", # waveform peaks, if they have already been generated
106 ],
107 key=lambda val: field_position_map.get(val, 999),
108 )
109 """
110 Keep the fields names in sync with the actual fields below as this list is
111 used to generate Swagger documentation.
112 """
113
114 audio_set = AudioSetSerializer(
115 allow_null=True,
116 help_text="Reference to set of which this track is a part.",
117 read_only=True,
118 )
119
120 waveform = SchemableHyperlinkedIdentityField(
121 read_only=True,
122 view_name="audio-waveform",
123 lookup_field="identifier",
124 help_text="A direct link to the waveform peaks.",
125 )
126
127 # Add-on data
128 peaks = serializers.SerializerMethodField(
129 help_text="The list of peaks used to generate the waveform for the audio."
130 )
131
132 @staticmethod
133 def get_peaks(obj) -> list[int]:
134 if isinstance(obj, Hit):
135 obj = Audio.objects.get(identifier=obj.identifier)
136 return obj.get_waveform()
137
138
139 class AudioSearchSerializer(MediaSearchSerializer):
140 """
141 The full audio search response.
142 This serializer is purely representational and not actually used to
143 serialize the response.
144 """
145
146 results = AudioSerializer(
147 many=True,
148 help_text=(
149 "An array of audios and their details such as "
150 f"{fields_to_md(AudioSerializer.Meta.fields)}."
151 ),
152 )
153
154
155 ##########################
156 # Additional serializers #
157 ##########################
158
159
160 class AudioWaveformSerializer(serializers.Serializer):
161 len = serializers.SerializerMethodField()
162 points = serializers.ListField(
163 child=serializers.FloatField(min_value=0, max_value=1)
164 )
165
166 @staticmethod
167 def get_len(obj) -> int:
168 return len(obj.get("points", []))
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/api/catalog/api/serializers/audio_serializers.py b/api/catalog/api/serializers/audio_serializers.py
--- a/api/catalog/api/serializers/audio_serializers.py
+++ b/api/catalog/api/serializers/audio_serializers.py
@@ -135,6 +135,18 @@
obj = Audio.objects.get(identifier=obj.identifier)
return obj.get_waveform()
+ def to_representation(self, instance):
+ # Get the original representation
+ output = super().to_representation(instance)
+
+ if isinstance(instance, Hit):
+ # TODO: Remove when updating ES indexes
+ audio = Audio.objects.get(identifier=instance.identifier)
+ if not audio.thumbnail:
+ output["thumbnail"] = None
+
+ return output
+
class AudioSearchSerializer(MediaSearchSerializer):
"""
| {"golden_diff": "diff --git a/api/catalog/api/serializers/audio_serializers.py b/api/catalog/api/serializers/audio_serializers.py\n--- a/api/catalog/api/serializers/audio_serializers.py\n+++ b/api/catalog/api/serializers/audio_serializers.py\n@@ -135,6 +135,18 @@\n obj = Audio.objects.get(identifier=obj.identifier)\n return obj.get_waveform()\n \n+ def to_representation(self, instance):\n+ # Get the original representation\n+ output = super().to_representation(instance)\n+\n+ if isinstance(instance, Hit):\n+ # TODO: Remove when updating ES indexes\n+ audio = Audio.objects.get(identifier=instance.identifier)\n+ if not audio.thumbnail:\n+ output[\"thumbnail\"] = None\n+\n+ return output\n+\n \n class AudioSearchSerializer(MediaSearchSerializer):\n \"\"\"\n", "issue": "Possibly make `thumbnail` null for audio files without artwork\n## Description\r\n<!-- Concisely describe the bug. -->\r\n\r\nCurrently the frontend tries to fetch thumbnails for all audio files regardless of whether the audio file in question has one or not. \r\nI noticed that the API returns the thumbnail URL for all tracks. That makes sense, but could we improve this to be `null` for audio tracks without artwork? Then we could check the field in the frontend before making a network request.\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom elasticsearch_dsl.response import Hit\n\nfrom catalog.api.constants.field_order import field_position_map\nfrom catalog.api.constants.field_values import AUDIO_CATEGORIES, LENGTHS\nfrom catalog.api.docs.media_docs import fields_to_md\nfrom catalog.api.models import Audio, AudioReport, AudioSet\nfrom catalog.api.serializers.fields import (\n EnumCharField,\n SchemableHyperlinkedIdentityField,\n)\nfrom catalog.api.serializers.media_serializers import (\n MediaReportRequestSerializer,\n MediaSearchRequestSerializer,\n MediaSearchSerializer,\n MediaSerializer,\n get_hyperlinks_serializer,\n get_search_request_source_serializer,\n)\n\n\n#######################\n# Request serializers #\n#######################\n\n\nAudioSearchRequestSourceSerializer = get_search_request_source_serializer(\"audio\")\n\n\nclass AudioSearchRequestSerializer(\n AudioSearchRequestSourceSerializer,\n MediaSearchRequestSerializer,\n):\n \"\"\"Parse and validate search query string parameters.\"\"\"\n\n fields_names = [\n *MediaSearchRequestSerializer.fields_names,\n *AudioSearchRequestSourceSerializer.field_names,\n \"category\",\n \"length\",\n ]\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n category = EnumCharField(\n plural=\"categories\",\n enum_class=AUDIO_CATEGORIES,\n required=False,\n )\n length = EnumCharField(\n plural=\"lengths\",\n enum_class=LENGTHS,\n required=False,\n )\n\n\nclass AudioReportRequestSerializer(MediaReportRequestSerializer):\n class Meta(MediaReportRequestSerializer.Meta):\n model = AudioReport\n\n\n########################\n# Response serializers #\n########################\n\n\nclass AudioSetSerializer(serializers.ModelSerializer):\n \"\"\"An audio set, rendered as a part of the ``AudioSerializer`` output.\"\"\"\n\n class Meta:\n model = AudioSet\n fields = [\n \"title\",\n \"foreign_landing_url\",\n \"creator\",\n \"creator_url\",\n \"url\",\n \"filesize\",\n \"filetype\",\n ]\n\n\nAudioHyperlinksSerializer = get_hyperlinks_serializer(\"audio\")\n\n\nclass AudioSerializer(AudioHyperlinksSerializer, MediaSerializer):\n \"\"\"A single audio file. 
Used in search results.\"\"\"\n\n class Meta:\n model = Audio\n fields = sorted( # keep this list ordered logically\n [\n *MediaSerializer.Meta.fields,\n *AudioHyperlinksSerializer.field_names,\n \"genres\",\n \"alt_files\",\n \"audio_set\",\n \"duration\",\n \"bit_rate\",\n \"sample_rate\",\n \"waveform\", # hyperlink to the endpoint that generates the waveform\n \"peaks\", # waveform peaks, if they have already been generated\n ],\n key=lambda val: field_position_map.get(val, 999),\n )\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n audio_set = AudioSetSerializer(\n allow_null=True,\n help_text=\"Reference to set of which this track is a part.\",\n read_only=True,\n )\n\n waveform = SchemableHyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-waveform\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the waveform peaks.\",\n )\n\n # Add-on data\n peaks = serializers.SerializerMethodField(\n help_text=\"The list of peaks used to generate the waveform for the audio.\"\n )\n\n @staticmethod\n def get_peaks(obj) -> list[int]:\n if isinstance(obj, Hit):\n obj = Audio.objects.get(identifier=obj.identifier)\n return obj.get_waveform()\n\n\nclass AudioSearchSerializer(MediaSearchSerializer):\n \"\"\"\n The full audio search response.\n This serializer is purely representational and not actually used to\n serialize the response.\n \"\"\"\n\n results = AudioSerializer(\n many=True,\n help_text=(\n \"An array of audios and their details such as \"\n f\"{fields_to_md(AudioSerializer.Meta.fields)}.\"\n ),\n )\n\n\n##########################\n# Additional serializers #\n##########################\n\n\nclass AudioWaveformSerializer(serializers.Serializer):\n len = serializers.SerializerMethodField()\n points = serializers.ListField(\n child=serializers.FloatField(min_value=0, max_value=1)\n )\n\n @staticmethod\n def get_len(obj) -> int:\n return len(obj.get(\"points\", []))\n", "path": "api/catalog/api/serializers/audio_serializers.py"}], "after_files": [{"content": "from rest_framework import serializers\n\nfrom elasticsearch_dsl.response import Hit\n\nfrom catalog.api.constants.field_order import field_position_map\nfrom catalog.api.constants.field_values import AUDIO_CATEGORIES, LENGTHS\nfrom catalog.api.docs.media_docs import fields_to_md\nfrom catalog.api.models import Audio, AudioReport, AudioSet\nfrom catalog.api.serializers.fields import (\n EnumCharField,\n SchemableHyperlinkedIdentityField,\n)\nfrom catalog.api.serializers.media_serializers import (\n MediaReportRequestSerializer,\n MediaSearchRequestSerializer,\n MediaSearchSerializer,\n MediaSerializer,\n get_hyperlinks_serializer,\n get_search_request_source_serializer,\n)\n\n\n#######################\n# Request serializers #\n#######################\n\n\nAudioSearchRequestSourceSerializer = get_search_request_source_serializer(\"audio\")\n\n\nclass AudioSearchRequestSerializer(\n AudioSearchRequestSourceSerializer,\n MediaSearchRequestSerializer,\n):\n \"\"\"Parse and validate search query string parameters.\"\"\"\n\n fields_names = [\n *MediaSearchRequestSerializer.fields_names,\n *AudioSearchRequestSourceSerializer.field_names,\n \"category\",\n \"length\",\n ]\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n category = EnumCharField(\n plural=\"categories\",\n enum_class=AUDIO_CATEGORIES,\n required=False,\n )\n length = 
EnumCharField(\n plural=\"lengths\",\n enum_class=LENGTHS,\n required=False,\n )\n\n\nclass AudioReportRequestSerializer(MediaReportRequestSerializer):\n class Meta(MediaReportRequestSerializer.Meta):\n model = AudioReport\n\n\n########################\n# Response serializers #\n########################\n\n\nclass AudioSetSerializer(serializers.ModelSerializer):\n \"\"\"An audio set, rendered as a part of the ``AudioSerializer`` output.\"\"\"\n\n class Meta:\n model = AudioSet\n fields = [\n \"title\",\n \"foreign_landing_url\",\n \"creator\",\n \"creator_url\",\n \"url\",\n \"filesize\",\n \"filetype\",\n ]\n\n\nAudioHyperlinksSerializer = get_hyperlinks_serializer(\"audio\")\n\n\nclass AudioSerializer(AudioHyperlinksSerializer, MediaSerializer):\n \"\"\"A single audio file. Used in search results.\"\"\"\n\n class Meta:\n model = Audio\n fields = sorted( # keep this list ordered logically\n [\n *MediaSerializer.Meta.fields,\n *AudioHyperlinksSerializer.field_names,\n \"genres\",\n \"alt_files\",\n \"audio_set\",\n \"duration\",\n \"bit_rate\",\n \"sample_rate\",\n \"waveform\", # hyperlink to the endpoint that generates the waveform\n \"peaks\", # waveform peaks, if they have already been generated\n ],\n key=lambda val: field_position_map.get(val, 999),\n )\n \"\"\"\n Keep the fields names in sync with the actual fields below as this list is\n used to generate Swagger documentation.\n \"\"\"\n\n audio_set = AudioSetSerializer(\n allow_null=True,\n help_text=\"Reference to set of which this track is a part.\",\n read_only=True,\n )\n\n waveform = SchemableHyperlinkedIdentityField(\n read_only=True,\n view_name=\"audio-waveform\",\n lookup_field=\"identifier\",\n help_text=\"A direct link to the waveform peaks.\",\n )\n\n # Add-on data\n peaks = serializers.SerializerMethodField(\n help_text=\"The list of peaks used to generate the waveform for the audio.\"\n )\n\n @staticmethod\n def get_peaks(obj) -> list[int]:\n if isinstance(obj, Hit):\n obj = Audio.objects.get(identifier=obj.identifier)\n return obj.get_waveform()\n\n def to_representation(self, instance):\n # Get the original representation\n output = super().to_representation(instance)\n\n if isinstance(instance, Hit):\n # TODO: Remove when updating ES indexes\n audio = Audio.objects.get(identifier=instance.identifier)\n if not audio.thumbnail:\n output[\"thumbnail\"] = None\n\n return output\n\n\nclass AudioSearchSerializer(MediaSearchSerializer):\n \"\"\"\n The full audio search response.\n This serializer is purely representational and not actually used to\n serialize the response.\n \"\"\"\n\n results = AudioSerializer(\n many=True,\n help_text=(\n \"An array of audios and their details such as \"\n f\"{fields_to_md(AudioSerializer.Meta.fields)}.\"\n ),\n )\n\n\n##########################\n# Additional serializers #\n##########################\n\n\nclass AudioWaveformSerializer(serializers.Serializer):\n len = serializers.SerializerMethodField()\n points = serializers.ListField(\n child=serializers.FloatField(min_value=0, max_value=1)\n )\n\n @staticmethod\n def get_len(obj) -> int:\n return len(obj.get(\"points\", []))\n", "path": "api/catalog/api/serializers/audio_serializers.py"}]} | 1,687 | 178 |
gh_patches_debug_29543 | rasdani/github-patches | git_diff | pyro-ppl__numpyro-304 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clean up docs/source/conf.py file
I think we can change the names `Numpyro` -> `NumPyro` there, but I am not sure whether the changes will affect the build of the website, so I am opening this issue.
cc @jpchen @neerajprad
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/conf.py`
Content:
```
1 import os
2 import sys
3
4 import sphinx_rtd_theme
5
6
7 # import pkg_resources
8
9 # -*- coding: utf-8 -*-
10 #
11 # Configuration file for the Sphinx documentation builder.
12 #
13 # This file does only contain a selection of the most common options. For a
14 # full list see the documentation:
15 # http://www.sphinx-doc.org/en/master/config
16
17 # -- Path setup --------------------------------------------------------------
18
19 # If extensions (or modules to document with autodoc) are in another directory,
20 # add these directories to sys.path here. If the directory is relative to the
21 # documentation root, use os.path.abspath to make it absolute, like shown here.
22 #
23 sys.path.insert(0, os.path.abspath('../..'))
24
25
26 os.environ['SPHINX_BUILD'] = '1'
27
28 # HACK: This is to ensure that local functions are documented by sphinx.
29 from numpyro.mcmc import hmc # noqa: E402
30 from numpyro.svi import svi # noqa: E402
31 hmc(None, None)
32 svi(None, None, None, None)
33
34 # -- Project information -----------------------------------------------------
35
36 project = u'Numpyro'
37 copyright = u'2019, Uber Technologies, Inc'
38 author = u'Uber AI Labs'
39
40 # The short X.Y version
41 version = u'0.0'
42 # The full version, including alpha/beta/rc tags
43 release = u'0.0'
44
45
46 # -- General configuration ---------------------------------------------------
47
48 # If your documentation needs a minimal Sphinx version, state it here.
49 #
50 # needs_sphinx = '1.0'
51
52 # Add any Sphinx extension module names here, as strings. They can be
53 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
54 # ones.
55 extensions = [
56 'sphinx.ext.autodoc',
57 'sphinx.ext.doctest',
58 'sphinx.ext.intersphinx',
59 'sphinx.ext.mathjax',
60 'sphinx.ext.viewcode',
61 ]
62
63 # Disable documentation inheritance so as to avoid inheriting
64 # docstrings in a different format, e.g. when the parent class
65 # is a PyTorch class.
66
67 autodoc_inherit_docstrings = False
68
69 # autodoc_default_options = {
70 # 'member-order': 'bysource',
71 # 'show-inheritance': True,
72 # 'special-members': True,
73 # 'undoc-members': True,
74 # 'exclude-members': '__dict__,__module__,__weakref__',
75 # }
76
77 # Add any paths that contain templates here, relative to this directory.
78 templates_path = ['_templates']
79
80 # The suffix(es) of source filenames.
81 # You can specify multiple suffix as a list of string:
82 #
83 # source_suffix = ['.rst', '.md']
84 source_suffix = '.rst'
85
86 # The master toctree document.
87 master_doc = 'index'
88
89 # The language for content autogenerated by Sphinx. Refer to documentation
90 # for a list of supported languages.
91 #
92 # This is also used if you do content translation via gettext catalogs.
93 # Usually you set "language" from the command line for these cases.
94 language = None
95
96 # List of patterns, relative to source directory, that match files and
97 # directories to ignore when looking for source files.
98 # This pattern also affects html_static_path and html_extra_path .
99 exclude_patterns = []
100
101 # The name of the Pygments (syntax highlighting) style to use.
102 pygments_style = 'sphinx'
103
104
105 # do not prepend module name to functions
106 add_module_names = False
107
108 # -- Options for HTML output -------------------------------------------------
109
110 # The theme to use for HTML and HTML Help pages. See the documentation for
111 # a list of builtin themes.
112 #
113 html_theme = "sphinx_rtd_theme"
114 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
115
116 # Theme options are theme-specific and customize the look and feel of a theme
117 # further. For a list of options available for each theme, see the
118 # documentation.
119 #
120 # html_theme_options = {}
121
122 # Add any paths that contain custom static files (such as style sheets) here,
123 # relative to this directory. They are copied after the builtin static files,
124 # so a file named "default.css" will overwrite the builtin "default.css".
125 html_static_path = []
126
127 # Custom sidebar templates, must be a dictionary that maps document names
128 # to template names.
129 #
130 # The default sidebars (for documents that don't match any pattern) are
131 # defined by theme itself. Builtin themes are using these templates by
132 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
133 # 'searchbox.html']``.
134 #
135 # html_sidebars = {}
136
137
138 # -- Options for HTMLHelp output ---------------------------------------------
139
140 # Output file base name for HTML help builder.
141 htmlhelp_basename = 'numpyrodoc'
142
143
144 # -- Options for LaTeX output ------------------------------------------------
145
146 latex_elements = {
147 # The paper size ('letterpaper' or 'a4paper').
148 #
149 # 'papersize': 'letterpaper',
150
151 # The font size ('10pt', '11pt' or '12pt').
152 #
153 # 'pointsize': '10pt',
154
155 # Additional stuff for the LaTeX preamble.
156 #
157 # 'preamble': '',
158
159 # Latex figure (float) alignment
160 #
161 # 'figure_align': 'htbp',
162 }
163
164 # Grouping the document tree into LaTeX files. List of tuples
165 # (source start file, target name, title,
166 # author, documentclass [howto, manual, or own class]).
167 latex_documents = [
168 (master_doc, 'Numpyro.tex', u'Numpyro Documentation', u'Uber AI Labs', 'manual'),
169 ]
170
171 # -- Options for manual page output ------------------------------------------
172
173 # One entry per manual page. List of tuples
174 # (source start file, name, description, authors, manual section).
175 man_pages = [
176 (master_doc, 'Numpyro', u'Numpyro Documentation',
177 [author], 1)
178 ]
179
180 # -- Options for Texinfo output ----------------------------------------------
181
182 # Grouping the document tree into Texinfo files. List of tuples
183 # (source start file, target name, title, author,
184 # dir menu entry, description, category)
185 texinfo_documents = [
186 (master_doc, 'Numpyro', u'Numpyro Documentation',
187 author, 'Numpyro', 'Pyro PPL on Numpy',
188 'Miscellaneous'),
189 ]
190
191
192 # -- Extension configuration -------------------------------------------------
193
194 # -- Options for intersphinx extension ---------------------------------------
195
196 # Example configuration for intersphinx: refer to the Python standard library.
197 intersphinx_mapping = {
198 'python': ('https://docs.python.org/3/', None),
199 'numpy': ('http://docs.scipy.org/doc/numpy/', None),
200 'jax': ('https://jax.readthedocs.io/en/latest/', None),
201 'pyro': ('http://docs.pyro.ai/en/stable/', None),
202 }
203
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -33,7 +33,7 @@
# -- Project information -----------------------------------------------------
-project = u'Numpyro'
+project = u'NumPyro'
copyright = u'2019, Uber Technologies, Inc'
author = u'Uber AI Labs'
@@ -165,7 +165,7 @@
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
- (master_doc, 'Numpyro.tex', u'Numpyro Documentation', u'Uber AI Labs', 'manual'),
+ (master_doc, 'NumPyro.tex', u'NumPyro Documentation', u'Uber AI Labs', 'manual'),
]
# -- Options for manual page output ------------------------------------------
@@ -173,7 +173,7 @@
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
- (master_doc, 'Numpyro', u'Numpyro Documentation',
+ (master_doc, 'NumPyro', u'NumPyro Documentation',
[author], 1)
]
@@ -183,8 +183,8 @@
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
- (master_doc, 'Numpyro', u'Numpyro Documentation',
- author, 'Numpyro', 'Pyro PPL on Numpy',
+ (master_doc, 'NumPyro', u'NumPyro Documentation',
+ author, 'NumPyro', 'Pyro PPL on Numpy',
'Miscellaneous'),
]
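Since the concern in the issue is whether the rename breaks anything, a small, hypothetical helper like the one below can confirm that no stale exact-case `Numpyro` spellings remain in the Sphinx config after the diff is applied; the file path is an assumption based on the repository layout shown above:

```python
# Hypothetical check (not part of the NumPyro repo): list any remaining
# exact-case "Numpyro" spellings in docs/source/conf.py after the rename.
from pathlib import Path

conf = Path("docs/source/conf.py").read_text()
stale = [n for n, line in enumerate(conf.splitlines(), start=1) if "Numpyro" in line]
print("stale 'Numpyro' spellings on lines:", stale or "none")
```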
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -33,7 +33,7 @@\n \n # -- Project information -----------------------------------------------------\n \n-project = u'Numpyro'\n+project = u'NumPyro'\n copyright = u'2019, Uber Technologies, Inc'\n author = u'Uber AI Labs'\n \n@@ -165,7 +165,7 @@\n # (source start file, target name, title,\n # author, documentclass [howto, manual, or own class]).\n latex_documents = [\n- (master_doc, 'Numpyro.tex', u'Numpyro Documentation', u'Uber AI Labs', 'manual'),\n+ (master_doc, 'NumPyro.tex', u'NumPyro Documentation', u'Uber AI Labs', 'manual'),\n ]\n \n # -- Options for manual page output ------------------------------------------\n@@ -173,7 +173,7 @@\n # One entry per manual page. List of tuples\n # (source start file, name, description, authors, manual section).\n man_pages = [\n- (master_doc, 'Numpyro', u'Numpyro Documentation',\n+ (master_doc, 'NumPyro', u'NumPyro Documentation',\n [author], 1)\n ]\n \n@@ -183,8 +183,8 @@\n # (source start file, target name, title, author,\n # dir menu entry, description, category)\n texinfo_documents = [\n- (master_doc, 'Numpyro', u'Numpyro Documentation',\n- author, 'Numpyro', 'Pyro PPL on Numpy',\n+ (master_doc, 'NumPyro', u'NumPyro Documentation',\n+ author, 'NumPyro', 'Pyro PPL on Numpy',\n 'Miscellaneous'),\n ]\n", "issue": "Clean up docs/source/conf.py file\nI think we can change the names `Numpyro` -> `NumPyro` there, but I am not sure if the changes will affect the build of the website. So I make this issue.\r\n\r\ncc @jpchen @neerajprad \n", "before_files": [{"content": "import os\nimport sys\n\nimport sphinx_rtd_theme\n\n\n# import pkg_resources\n\n# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nsys.path.insert(0, os.path.abspath('../..'))\n\n\nos.environ['SPHINX_BUILD'] = '1'\n\n# HACK: This is to ensure that local functions are documented by sphinx.\nfrom numpyro.mcmc import hmc # noqa: E402\nfrom numpyro.svi import svi # noqa: E402\nhmc(None, None)\nsvi(None, None, None, None)\n\n# -- Project information -----------------------------------------------------\n\nproject = u'Numpyro'\ncopyright = u'2019, Uber Technologies, Inc'\nauthor = u'Uber AI Labs'\n\n# The short X.Y version\nversion = u'0.0'\n# The full version, including alpha/beta/rc tags\nrelease = u'0.0'\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.viewcode',\n]\n\n# Disable documentation inheritance so as to avoid inheriting\n# docstrings in a different format, e.g. 
when the parent class\n# is a PyTorch class.\n\nautodoc_inherit_docstrings = False\n\n# autodoc_default_options = {\n# 'member-order': 'bysource',\n# 'show-inheritance': True,\n# 'special-members': True,\n# 'undoc-members': True,\n# 'exclude-members': '__dict__,__module__,__weakref__',\n# }\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n\n# do not prepend module name to functions\nadd_module_names = False\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = []\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'numpyrodoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'Numpyro.tex', u'Numpyro Documentation', u'Uber AI Labs', 'manual'),\n]\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'Numpyro', u'Numpyro Documentation',\n [author], 1)\n]\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'Numpyro', u'Numpyro Documentation',\n author, 'Numpyro', 'Pyro PPL on Numpy',\n 'Miscellaneous'),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3/', None),\n 'numpy': ('http://docs.scipy.org/doc/numpy/', None),\n 'jax': ('https://jax.readthedocs.io/en/latest/', None),\n 'pyro': ('http://docs.pyro.ai/en/stable/', None),\n}\n", "path": "docs/source/conf.py"}], "after_files": [{"content": "import os\nimport sys\n\nimport sphinx_rtd_theme\n\n\n# import pkg_resources\n\n# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nsys.path.insert(0, os.path.abspath('../..'))\n\n\nos.environ['SPHINX_BUILD'] = '1'\n\n# HACK: This is to ensure that local functions are documented by sphinx.\nfrom numpyro.mcmc import hmc # noqa: E402\nfrom numpyro.svi import svi # noqa: E402\nhmc(None, None)\nsvi(None, None, None, None)\n\n# -- Project information -----------------------------------------------------\n\nproject = u'NumPyro'\ncopyright = u'2019, Uber Technologies, Inc'\nauthor = u'Uber AI Labs'\n\n# The short X.Y version\nversion = u'0.0'\n# The full version, including alpha/beta/rc tags\nrelease = u'0.0'\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.viewcode',\n]\n\n# Disable documentation inheritance so as to avoid inheriting\n# docstrings in a different format, e.g. 
when the parent class\n# is a PyTorch class.\n\nautodoc_inherit_docstrings = False\n\n# autodoc_default_options = {\n# 'member-order': 'bysource',\n# 'show-inheritance': True,\n# 'special-members': True,\n# 'undoc-members': True,\n# 'exclude-members': '__dict__,__module__,__weakref__',\n# }\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n\n# do not prepend module name to functions\nadd_module_names = False\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = []\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'numpyrodoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'NumPyro.tex', u'NumPyro Documentation', u'Uber AI Labs', 'manual'),\n]\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'NumPyro', u'NumPyro Documentation',\n [author], 1)\n]\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'NumPyro', u'NumPyro Documentation',\n author, 'NumPyro', 'Pyro PPL on Numpy',\n 'Miscellaneous'),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3/', None),\n 'numpy': ('http://docs.scipy.org/doc/numpy/', None),\n 'jax': ('https://jax.readthedocs.io/en/latest/', None),\n 'pyro': ('http://docs.pyro.ai/en/stable/', None),\n}\n", "path": "docs/source/conf.py"}]} | 2,309 | 409 |
gh_patches_debug_20491 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-py-461 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Explicit check when training with share_embeddings and not share_vocab
Hey, whenever I run training with the `share_embeddings` flag I get the following error:
```RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1502009910772/work/torch/lib/THC/THCCachingHostAllocator.cpp:258```
Any idea what can cause this? How can I fix it?
Thanks.
--- END ISSUE ---
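The reported error is the usual symptom of an out-of-range embedding lookup on the GPU: with `share_embeddings`, the decoder reuses the encoder's embedding matrix, which is sized for the source vocabulary, so any target token index outside that range trips a CUDA device-side assert instead of a readable Python error. Below is a hedged sketch of the kind of explicit check the title asks for, written against the `make_base_model` objects visible in the file listing that follows; it is not necessarily the patch that was actually merged:

```python
# Hedged sketch: fail fast with a clear message when -share_embeddings is used
# without -share_vocab, instead of crashing later with a device-side assert.
def check_shared_embeddings(model_opt, src_dict, tgt_dict):
    """Validate that shared embeddings are only used with a shared vocabulary."""
    if getattr(model_opt, "share_embeddings", False):
        # Equality semantics depend on the vocabulary class; comparing the
        # string-to-index tables (the `stoi` dicts the listing already uses)
        # is the conservative choice here.
        if src_dict.stoi != tgt_dict.stoi:
            raise AssertionError(
                "The `-share_vocab` preprocessing option is required when "
                "using `-share_embeddings`: source and target must share "
                "one vocabulary so every target index fits the shared table."
            )
```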
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `onmt/ModelConstructor.py`
Content:
```
1 """
2 This file is for models creation, which consults options
3 and creates each encoder and decoder accordingly.
4 """
5 import torch.nn as nn
6
7 import onmt
8 import onmt.io
9 import onmt.Models
10 import onmt.modules
11 from onmt.Models import NMTModel, MeanEncoder, RNNEncoder, \
12 StdRNNDecoder, InputFeedRNNDecoder
13 from onmt.modules import Embeddings, ImageEncoder, CopyGenerator, \
14 TransformerEncoder, TransformerDecoder, \
15 CNNEncoder, CNNDecoder, AudioEncoder
16
17
18 def make_embeddings(opt, word_dict, feature_dicts, for_encoder=True):
19 """
20 Make an Embeddings instance.
21 Args:
22 opt: the option in current environment.
23 word_dict(Vocab): words dictionary.
24 feature_dicts([Vocab], optional): a list of feature dictionary.
25 for_encoder(bool): make Embeddings for encoder or decoder?
26 """
27 if for_encoder:
28 embedding_dim = opt.src_word_vec_size
29 else:
30 embedding_dim = opt.tgt_word_vec_size
31
32 word_padding_idx = word_dict.stoi[onmt.io.PAD_WORD]
33 num_word_embeddings = len(word_dict)
34
35 feats_padding_idx = [feat_dict.stoi[onmt.io.PAD_WORD]
36 for feat_dict in feature_dicts]
37 num_feat_embeddings = [len(feat_dict) for feat_dict in
38 feature_dicts]
39
40 return Embeddings(embedding_dim,
41 opt.position_encoding,
42 opt.feat_merge,
43 opt.feat_vec_exponent,
44 opt.feat_vec_size,
45 opt.dropout,
46 word_padding_idx,
47 feats_padding_idx,
48 num_word_embeddings,
49 num_feat_embeddings)
50
51
52 def make_encoder(opt, embeddings):
53 """
54 Various encoder dispatcher function.
55 Args:
56 opt: the option in current environment.
57 embeddings (Embeddings): vocab embeddings for this encoder.
58 """
59 if opt.encoder_type == "transformer":
60 return TransformerEncoder(opt.enc_layers, opt.rnn_size,
61 opt.dropout, embeddings)
62 elif opt.encoder_type == "cnn":
63 return CNNEncoder(opt.enc_layers, opt.rnn_size,
64 opt.cnn_kernel_width,
65 opt.dropout, embeddings)
66 elif opt.encoder_type == "mean":
67 return MeanEncoder(opt.enc_layers, embeddings)
68 else:
69 # "rnn" or "brnn"
70 return RNNEncoder(opt.rnn_type, opt.brnn, opt.enc_layers,
71 opt.rnn_size, opt.dropout, embeddings)
72
73
74 def make_decoder(opt, embeddings):
75 """
76 Various decoder dispatcher function.
77 Args:
78 opt: the option in current environment.
79 embeddings (Embeddings): vocab embeddings for this decoder.
80 """
81 if opt.decoder_type == "transformer":
82 return TransformerDecoder(opt.dec_layers, opt.rnn_size,
83 opt.global_attention, opt.copy_attn,
84 opt.dropout, embeddings)
85 elif opt.decoder_type == "cnn":
86 return CNNDecoder(opt.dec_layers, opt.rnn_size,
87 opt.global_attention, opt.copy_attn,
88 opt.cnn_kernel_width, opt.dropout,
89 embeddings)
90 elif opt.input_feed:
91 return InputFeedRNNDecoder(opt.rnn_type, opt.brnn,
92 opt.dec_layers, opt.rnn_size,
93 opt.global_attention,
94 opt.coverage_attn,
95 opt.context_gate,
96 opt.copy_attn,
97 opt.dropout,
98 embeddings)
99 else:
100 return StdRNNDecoder(opt.rnn_type, opt.brnn,
101 opt.dec_layers, opt.rnn_size,
102 opt.global_attention,
103 opt.coverage_attn,
104 opt.context_gate,
105 opt.copy_attn,
106 opt.dropout,
107 embeddings)
108
109
110 def make_base_model(model_opt, fields, gpu, checkpoint=None):
111 """
112 Args:
113 model_opt: the option loaded from checkpoint.
114 fields: `Field` objects for the model.
115 gpu(bool): whether to use gpu.
116 checkpoint: the model gnerated by train phase, or a resumed snapshot
117 model from a stopped training.
118 Returns:
119 the NMTModel.
120 """
121 assert model_opt.model_type in ["text", "img", "audio"], \
122 ("Unsupported model type %s" % (model_opt.model_type))
123
124 # Make encoder.
125 if model_opt.model_type == "text":
126 src_dict = fields["src"].vocab
127 feature_dicts = onmt.io.collect_feature_vocabs(fields, 'src')
128 src_embeddings = make_embeddings(model_opt, src_dict,
129 feature_dicts)
130 encoder = make_encoder(model_opt, src_embeddings)
131 elif model_opt.model_type == "img":
132 encoder = ImageEncoder(model_opt.enc_layers,
133 model_opt.brnn,
134 model_opt.rnn_size,
135 model_opt.dropout)
136 elif model_opt.model_type == "audio":
137 encoder = AudioEncoder(model_opt.enc_layers,
138 model_opt.brnn,
139 model_opt.rnn_size,
140 model_opt.dropout,
141 model_opt.sample_rate,
142 model_opt.window_size)
143
144 # Make decoder.
145 tgt_dict = fields["tgt"].vocab
146 # TODO: prepare for a future where tgt features are possible.
147 feature_dicts = onmt.io.collect_feature_vocabs(fields, 'tgt')
148 tgt_embeddings = make_embeddings(model_opt, tgt_dict,
149 feature_dicts, for_encoder=False)
150
151 # Share the embedding matrix - preprocess with share_vocab required
152 if model_opt.share_embeddings:
153 tgt_embeddings.word_lut.weight = src_embeddings.word_lut.weight
154
155 decoder = make_decoder(model_opt, tgt_embeddings)
156
157 # Make NMTModel(= encoder + decoder).
158 model = NMTModel(encoder, decoder)
159 model.model_type = model_opt.model_type
160
161 # Make Generator.
162 if not model_opt.copy_attn:
163 generator = nn.Sequential(
164 nn.Linear(model_opt.rnn_size, len(fields["tgt"].vocab)),
165 nn.LogSoftmax())
166 if model_opt.share_decoder_embeddings:
167 generator[0].weight = decoder.embeddings.word_lut.weight
168 else:
169 generator = CopyGenerator(model_opt, fields["src"].vocab,
170 fields["tgt"].vocab)
171
172 # Load the model states from checkpoint or initialize them.
173 if checkpoint is not None:
174 print('Loading model parameters.')
175 model.load_state_dict(checkpoint['model'])
176 generator.load_state_dict(checkpoint['generator'])
177 else:
178 if model_opt.param_init != 0.0:
179 print('Intializing model parameters.')
180 for p in model.parameters():
181 p.data.uniform_(-model_opt.param_init, model_opt.param_init)
182 for p in generator.parameters():
183 p.data.uniform_(-model_opt.param_init, model_opt.param_init)
184 if hasattr(model.encoder, 'embeddings'):
185 model.encoder.embeddings.load_pretrained_vectors(
186 model_opt.pre_word_vecs_enc, model_opt.fix_word_vecs_enc)
187 if hasattr(model.decoder, 'embeddings'):
188 model.decoder.embeddings.load_pretrained_vectors(
189 model_opt.pre_word_vecs_dec, model_opt.fix_word_vecs_dec)
190
191 # Add generator to model (this registers it as parameter of model).
192 model.generator = generator
193
194 # Make the whole model leverage GPU if indicated to do so.
195 if gpu:
196 model.cuda()
197 else:
198 model.cpu()
199
200 return model
201
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/onmt/ModelConstructor.py b/onmt/ModelConstructor.py
--- a/onmt/ModelConstructor.py
+++ b/onmt/ModelConstructor.py
@@ -143,13 +143,17 @@
# Make decoder.
tgt_dict = fields["tgt"].vocab
- # TODO: prepare for a future where tgt features are possible.
feature_dicts = onmt.io.collect_feature_vocabs(fields, 'tgt')
tgt_embeddings = make_embeddings(model_opt, tgt_dict,
feature_dicts, for_encoder=False)
- # Share the embedding matrix - preprocess with share_vocab required
+ # Share the embedding matrix - preprocess with share_vocab required.
if model_opt.share_embeddings:
+ # src/tgt vocab should be the same if `-share_vocab` is specified.
+ if src_dict != tgt_dict:
+ raise AssertionError('The `-share_vocab` should be set during '
+ 'preprocess if you use share_embeddings!')
+
tgt_embeddings.word_lut.weight = src_embeddings.word_lut.weight
decoder = make_decoder(model_opt, tgt_embeddings)
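
Condensed, the guard this patch adds can be read as the small helper below. This is only an illustrative sketch: in the actual fix the check is inlined in `make_base_model`, and `src_dict`/`tgt_dict` are the torchtext vocab objects pulled from `fields`.

```python
# Illustrative sketch of the added guard (assumed standalone helper, not part of the patch).
# share_embeddings only makes sense when source and target use one shared vocabulary,
# so fail fast with a clear message instead of a device-side CUDA assert at train time.
def check_shared_embeddings(model_opt, src_dict, tgt_dict):
    if model_opt.share_embeddings and src_dict != tgt_dict:
        raise AssertionError('The `-share_vocab` should be set during '
                             'preprocess if you use share_embeddings!')
```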
| {"golden_diff": "diff --git a/onmt/ModelConstructor.py b/onmt/ModelConstructor.py\n--- a/onmt/ModelConstructor.py\n+++ b/onmt/ModelConstructor.py\n@@ -143,13 +143,17 @@\n \n # Make decoder.\n tgt_dict = fields[\"tgt\"].vocab\n- # TODO: prepare for a future where tgt features are possible.\n feature_dicts = onmt.io.collect_feature_vocabs(fields, 'tgt')\n tgt_embeddings = make_embeddings(model_opt, tgt_dict,\n feature_dicts, for_encoder=False)\n \n- # Share the embedding matrix - preprocess with share_vocab required\n+ # Share the embedding matrix - preprocess with share_vocab required.\n if model_opt.share_embeddings:\n+ # src/tgt vocab should be the same if `-share_vocab` is specified.\n+ if src_dict != tgt_dict:\n+ raise AssertionError('The `-share_vocab` should be set during '\n+ 'preprocess if you use share_embeddings!')\n+\n tgt_embeddings.word_lut.weight = src_embeddings.word_lut.weight\n \n decoder = make_decoder(model_opt, tgt_embeddings)\n", "issue": "Explicit check when training with share_embeddings and not share_vocab\nHey, Whenever I run training with share_embedding flat I get the following error:\r\n\r\n```RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1502009910772/work/torch/lib/THC/THCCachingHostAllocator.cpp:258```\r\n\r\nAny idea what can cause this? how can fix this!\r\n\r\nThank.\n", "before_files": [{"content": "\"\"\"\nThis file is for models creation, which consults options\nand creates each encoder and decoder accordingly.\n\"\"\"\nimport torch.nn as nn\n\nimport onmt\nimport onmt.io\nimport onmt.Models\nimport onmt.modules\nfrom onmt.Models import NMTModel, MeanEncoder, RNNEncoder, \\\n StdRNNDecoder, InputFeedRNNDecoder\nfrom onmt.modules import Embeddings, ImageEncoder, CopyGenerator, \\\n TransformerEncoder, TransformerDecoder, \\\n CNNEncoder, CNNDecoder, AudioEncoder\n\n\ndef make_embeddings(opt, word_dict, feature_dicts, for_encoder=True):\n \"\"\"\n Make an Embeddings instance.\n Args:\n opt: the option in current environment.\n word_dict(Vocab): words dictionary.\n feature_dicts([Vocab], optional): a list of feature dictionary.\n for_encoder(bool): make Embeddings for encoder or decoder?\n \"\"\"\n if for_encoder:\n embedding_dim = opt.src_word_vec_size\n else:\n embedding_dim = opt.tgt_word_vec_size\n\n word_padding_idx = word_dict.stoi[onmt.io.PAD_WORD]\n num_word_embeddings = len(word_dict)\n\n feats_padding_idx = [feat_dict.stoi[onmt.io.PAD_WORD]\n for feat_dict in feature_dicts]\n num_feat_embeddings = [len(feat_dict) for feat_dict in\n feature_dicts]\n\n return Embeddings(embedding_dim,\n opt.position_encoding,\n opt.feat_merge,\n opt.feat_vec_exponent,\n opt.feat_vec_size,\n opt.dropout,\n word_padding_idx,\n feats_padding_idx,\n num_word_embeddings,\n num_feat_embeddings)\n\n\ndef make_encoder(opt, embeddings):\n \"\"\"\n Various encoder dispatcher function.\n Args:\n opt: the option in current environment.\n embeddings (Embeddings): vocab embeddings for this encoder.\n \"\"\"\n if opt.encoder_type == \"transformer\":\n return TransformerEncoder(opt.enc_layers, opt.rnn_size,\n opt.dropout, embeddings)\n elif opt.encoder_type == \"cnn\":\n return CNNEncoder(opt.enc_layers, opt.rnn_size,\n opt.cnn_kernel_width,\n opt.dropout, embeddings)\n elif opt.encoder_type == \"mean\":\n return MeanEncoder(opt.enc_layers, embeddings)\n else:\n # \"rnn\" or \"brnn\"\n return RNNEncoder(opt.rnn_type, opt.brnn, opt.enc_layers,\n opt.rnn_size, opt.dropout, embeddings)\n\n\ndef make_decoder(opt, embeddings):\n 
\"\"\"\n Various decoder dispatcher function.\n Args:\n opt: the option in current environment.\n embeddings (Embeddings): vocab embeddings for this decoder.\n \"\"\"\n if opt.decoder_type == \"transformer\":\n return TransformerDecoder(opt.dec_layers, opt.rnn_size,\n opt.global_attention, opt.copy_attn,\n opt.dropout, embeddings)\n elif opt.decoder_type == \"cnn\":\n return CNNDecoder(opt.dec_layers, opt.rnn_size,\n opt.global_attention, opt.copy_attn,\n opt.cnn_kernel_width, opt.dropout,\n embeddings)\n elif opt.input_feed:\n return InputFeedRNNDecoder(opt.rnn_type, opt.brnn,\n opt.dec_layers, opt.rnn_size,\n opt.global_attention,\n opt.coverage_attn,\n opt.context_gate,\n opt.copy_attn,\n opt.dropout,\n embeddings)\n else:\n return StdRNNDecoder(opt.rnn_type, opt.brnn,\n opt.dec_layers, opt.rnn_size,\n opt.global_attention,\n opt.coverage_attn,\n opt.context_gate,\n opt.copy_attn,\n opt.dropout,\n embeddings)\n\n\ndef make_base_model(model_opt, fields, gpu, checkpoint=None):\n \"\"\"\n Args:\n model_opt: the option loaded from checkpoint.\n fields: `Field` objects for the model.\n gpu(bool): whether to use gpu.\n checkpoint: the model gnerated by train phase, or a resumed snapshot\n model from a stopped training.\n Returns:\n the NMTModel.\n \"\"\"\n assert model_opt.model_type in [\"text\", \"img\", \"audio\"], \\\n (\"Unsupported model type %s\" % (model_opt.model_type))\n\n # Make encoder.\n if model_opt.model_type == \"text\":\n src_dict = fields[\"src\"].vocab\n feature_dicts = onmt.io.collect_feature_vocabs(fields, 'src')\n src_embeddings = make_embeddings(model_opt, src_dict,\n feature_dicts)\n encoder = make_encoder(model_opt, src_embeddings)\n elif model_opt.model_type == \"img\":\n encoder = ImageEncoder(model_opt.enc_layers,\n model_opt.brnn,\n model_opt.rnn_size,\n model_opt.dropout)\n elif model_opt.model_type == \"audio\":\n encoder = AudioEncoder(model_opt.enc_layers,\n model_opt.brnn,\n model_opt.rnn_size,\n model_opt.dropout,\n model_opt.sample_rate,\n model_opt.window_size)\n\n # Make decoder.\n tgt_dict = fields[\"tgt\"].vocab\n # TODO: prepare for a future where tgt features are possible.\n feature_dicts = onmt.io.collect_feature_vocabs(fields, 'tgt')\n tgt_embeddings = make_embeddings(model_opt, tgt_dict,\n feature_dicts, for_encoder=False)\n\n # Share the embedding matrix - preprocess with share_vocab required\n if model_opt.share_embeddings:\n tgt_embeddings.word_lut.weight = src_embeddings.word_lut.weight\n\n decoder = make_decoder(model_opt, tgt_embeddings)\n\n # Make NMTModel(= encoder + decoder).\n model = NMTModel(encoder, decoder)\n model.model_type = model_opt.model_type\n\n # Make Generator.\n if not model_opt.copy_attn:\n generator = nn.Sequential(\n nn.Linear(model_opt.rnn_size, len(fields[\"tgt\"].vocab)),\n nn.LogSoftmax())\n if model_opt.share_decoder_embeddings:\n generator[0].weight = decoder.embeddings.word_lut.weight\n else:\n generator = CopyGenerator(model_opt, fields[\"src\"].vocab,\n fields[\"tgt\"].vocab)\n\n # Load the model states from checkpoint or initialize them.\n if checkpoint is not None:\n print('Loading model parameters.')\n model.load_state_dict(checkpoint['model'])\n generator.load_state_dict(checkpoint['generator'])\n else:\n if model_opt.param_init != 0.0:\n print('Intializing model parameters.')\n for p in model.parameters():\n p.data.uniform_(-model_opt.param_init, model_opt.param_init)\n for p in generator.parameters():\n p.data.uniform_(-model_opt.param_init, model_opt.param_init)\n if hasattr(model.encoder, 'embeddings'):\n 
model.encoder.embeddings.load_pretrained_vectors(\n model_opt.pre_word_vecs_enc, model_opt.fix_word_vecs_enc)\n if hasattr(model.decoder, 'embeddings'):\n model.decoder.embeddings.load_pretrained_vectors(\n model_opt.pre_word_vecs_dec, model_opt.fix_word_vecs_dec)\n\n # Add generator to model (this registers it as parameter of model).\n model.generator = generator\n\n # Make the whole model leverage GPU if indicated to do so.\n if gpu:\n model.cuda()\n else:\n model.cpu()\n\n return model\n", "path": "onmt/ModelConstructor.py"}], "after_files": [{"content": "\"\"\"\nThis file is for models creation, which consults options\nand creates each encoder and decoder accordingly.\n\"\"\"\nimport torch.nn as nn\n\nimport onmt\nimport onmt.io\nimport onmt.Models\nimport onmt.modules\nfrom onmt.Models import NMTModel, MeanEncoder, RNNEncoder, \\\n StdRNNDecoder, InputFeedRNNDecoder\nfrom onmt.modules import Embeddings, ImageEncoder, CopyGenerator, \\\n TransformerEncoder, TransformerDecoder, \\\n CNNEncoder, CNNDecoder, AudioEncoder\n\n\ndef make_embeddings(opt, word_dict, feature_dicts, for_encoder=True):\n \"\"\"\n Make an Embeddings instance.\n Args:\n opt: the option in current environment.\n word_dict(Vocab): words dictionary.\n feature_dicts([Vocab], optional): a list of feature dictionary.\n for_encoder(bool): make Embeddings for encoder or decoder?\n \"\"\"\n if for_encoder:\n embedding_dim = opt.src_word_vec_size\n else:\n embedding_dim = opt.tgt_word_vec_size\n\n word_padding_idx = word_dict.stoi[onmt.io.PAD_WORD]\n num_word_embeddings = len(word_dict)\n\n feats_padding_idx = [feat_dict.stoi[onmt.io.PAD_WORD]\n for feat_dict in feature_dicts]\n num_feat_embeddings = [len(feat_dict) for feat_dict in\n feature_dicts]\n\n return Embeddings(embedding_dim,\n opt.position_encoding,\n opt.feat_merge,\n opt.feat_vec_exponent,\n opt.feat_vec_size,\n opt.dropout,\n word_padding_idx,\n feats_padding_idx,\n num_word_embeddings,\n num_feat_embeddings)\n\n\ndef make_encoder(opt, embeddings):\n \"\"\"\n Various encoder dispatcher function.\n Args:\n opt: the option in current environment.\n embeddings (Embeddings): vocab embeddings for this encoder.\n \"\"\"\n if opt.encoder_type == \"transformer\":\n return TransformerEncoder(opt.enc_layers, opt.rnn_size,\n opt.dropout, embeddings)\n elif opt.encoder_type == \"cnn\":\n return CNNEncoder(opt.enc_layers, opt.rnn_size,\n opt.cnn_kernel_width,\n opt.dropout, embeddings)\n elif opt.encoder_type == \"mean\":\n return MeanEncoder(opt.enc_layers, embeddings)\n else:\n # \"rnn\" or \"brnn\"\n return RNNEncoder(opt.rnn_type, opt.brnn, opt.enc_layers,\n opt.rnn_size, opt.dropout, embeddings)\n\n\ndef make_decoder(opt, embeddings):\n \"\"\"\n Various decoder dispatcher function.\n Args:\n opt: the option in current environment.\n embeddings (Embeddings): vocab embeddings for this decoder.\n \"\"\"\n if opt.decoder_type == \"transformer\":\n return TransformerDecoder(opt.dec_layers, opt.rnn_size,\n opt.global_attention, opt.copy_attn,\n opt.dropout, embeddings)\n elif opt.decoder_type == \"cnn\":\n return CNNDecoder(opt.dec_layers, opt.rnn_size,\n opt.global_attention, opt.copy_attn,\n opt.cnn_kernel_width, opt.dropout,\n embeddings)\n elif opt.input_feed:\n return InputFeedRNNDecoder(opt.rnn_type, opt.brnn,\n opt.dec_layers, opt.rnn_size,\n opt.global_attention,\n opt.coverage_attn,\n opt.context_gate,\n opt.copy_attn,\n opt.dropout,\n embeddings)\n else:\n return StdRNNDecoder(opt.rnn_type, opt.brnn,\n opt.dec_layers, opt.rnn_size,\n opt.global_attention,\n 
opt.coverage_attn,\n opt.context_gate,\n opt.copy_attn,\n opt.dropout,\n embeddings)\n\n\ndef make_base_model(model_opt, fields, gpu, checkpoint=None):\n \"\"\"\n Args:\n model_opt: the option loaded from checkpoint.\n fields: `Field` objects for the model.\n gpu(bool): whether to use gpu.\n checkpoint: the model gnerated by train phase, or a resumed snapshot\n model from a stopped training.\n Returns:\n the NMTModel.\n \"\"\"\n assert model_opt.model_type in [\"text\", \"img\", \"audio\"], \\\n (\"Unsupported model type %s\" % (model_opt.model_type))\n\n # Make encoder.\n if model_opt.model_type == \"text\":\n src_dict = fields[\"src\"].vocab\n feature_dicts = onmt.io.collect_feature_vocabs(fields, 'src')\n src_embeddings = make_embeddings(model_opt, src_dict,\n feature_dicts)\n encoder = make_encoder(model_opt, src_embeddings)\n elif model_opt.model_type == \"img\":\n encoder = ImageEncoder(model_opt.enc_layers,\n model_opt.brnn,\n model_opt.rnn_size,\n model_opt.dropout)\n elif model_opt.model_type == \"audio\":\n encoder = AudioEncoder(model_opt.enc_layers,\n model_opt.brnn,\n model_opt.rnn_size,\n model_opt.dropout,\n model_opt.sample_rate,\n model_opt.window_size)\n\n # Make decoder.\n tgt_dict = fields[\"tgt\"].vocab\n feature_dicts = onmt.io.collect_feature_vocabs(fields, 'tgt')\n tgt_embeddings = make_embeddings(model_opt, tgt_dict,\n feature_dicts, for_encoder=False)\n\n # Share the embedding matrix - preprocess with share_vocab required.\n if model_opt.share_embeddings:\n # src/tgt vocab should be the same if `-share_vocab` is specified.\n if src_dict != tgt_dict:\n raise AssertionError('The `-share_vocab` should be set during '\n 'preprocess if you use share_embeddings!')\n\n tgt_embeddings.word_lut.weight = src_embeddings.word_lut.weight\n\n decoder = make_decoder(model_opt, tgt_embeddings)\n\n # Make NMTModel(= encoder + decoder).\n model = NMTModel(encoder, decoder)\n model.model_type = model_opt.model_type\n\n # Make Generator.\n if not model_opt.copy_attn:\n generator = nn.Sequential(\n nn.Linear(model_opt.rnn_size, len(fields[\"tgt\"].vocab)),\n nn.LogSoftmax())\n if model_opt.share_decoder_embeddings:\n generator[0].weight = decoder.embeddings.word_lut.weight\n else:\n generator = CopyGenerator(model_opt, fields[\"src\"].vocab,\n fields[\"tgt\"].vocab)\n\n # Load the model states from checkpoint or initialize them.\n if checkpoint is not None:\n print('Loading model parameters.')\n model.load_state_dict(checkpoint['model'])\n generator.load_state_dict(checkpoint['generator'])\n else:\n if model_opt.param_init != 0.0:\n print('Intializing model parameters.')\n for p in model.parameters():\n p.data.uniform_(-model_opt.param_init, model_opt.param_init)\n for p in generator.parameters():\n p.data.uniform_(-model_opt.param_init, model_opt.param_init)\n if hasattr(model.encoder, 'embeddings'):\n model.encoder.embeddings.load_pretrained_vectors(\n model_opt.pre_word_vecs_enc, model_opt.fix_word_vecs_enc)\n if hasattr(model.decoder, 'embeddings'):\n model.decoder.embeddings.load_pretrained_vectors(\n model_opt.pre_word_vecs_dec, model_opt.fix_word_vecs_dec)\n\n # Add generator to model (this registers it as parameter of model).\n model.generator = generator\n\n # Make the whole model leverage GPU if indicated to do so.\n if gpu:\n model.cuda()\n else:\n model.cpu()\n\n return model\n", "path": "onmt/ModelConstructor.py"}]} | 2,377 | 237 |
gh_patches_debug_26809 | rasdani/github-patches | git_diff | edgedb__edgedb-7143 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Group by - ProtocolError: cannot decode Object: expected 3 elements, got 2110812590
Hi
```
# this is working
group EntryItem
by .account;
# this is not working
group EntryItem
using accountCode := .account
by accountCode;
-> ProtocolError: cannot decode Object: expected 3 elements, got 2110812590
```
- EdgeDB Version: 2.9
- EdgeDB CLI Version: 2.2.6+7eabbf9
- OS Version: macOS 12.1
Schema:
```
module default {
type Account {
required link book -> Book;
required property code -> str {
constraint exclusive;
};
property displayCode := .code[0:5] ++ ' ' ++ .code[5:];
required property name -> str;
required link currency -> Currency;
constraint exclusive on ((.book, .code));
}
type EntryItem {
required link entry -> Entry;
required property lineNumber -> int16;
required link account -> Account;
...
constraint exclusive on ((.entry, .lineNumber))
}
}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `edb/edgeql/desugar_group.py`
Content:
```
1 #
2 # This source file is part of the EdgeDB open source project.
3 #
4 # Copyright 2008-present MagicStack Inc. and the EdgeDB authors.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17 #
18
19 """Desugar GROUP queries into internal FOR GROUP queries.
20
21 This code is called by both the model and the real implementation,
22 though if that starts becoming a problem it should just be abandoned.
23 """
24
25 from __future__ import annotations
26
27
28 from typing import Optional, Tuple, AbstractSet, Dict, List
29
30 from edb.common import ast
31 from edb.common import ordered
32 from edb.common.compiler import AliasGenerator
33
34 from edb.edgeql import ast as qlast
35 from edb.edgeql.compiler import astutils
36
37
38 def key_name(s: str) -> str:
39 return s.split('~')[0]
40
41
42 def name_path(name: str) -> qlast.Path:
43 return qlast.Path(steps=[qlast.ObjectRef(name=name)])
44
45
46 def make_free_object(els: Dict[str, qlast.Expr]) -> qlast.Shape:
47 return qlast.Shape(
48 expr=None,
49 elements=[
50 qlast.ShapeElement(
51 expr=qlast.Path(steps=[qlast.Ptr(name=name)]),
52 compexpr=expr
53 )
54 for name, expr in els.items()
55 ],
56 )
57
58
59 def collect_grouping_atoms(
60 els: List[qlast.GroupingElement],
61 ) -> AbstractSet[str]:
62 atoms: ordered.OrderedSet[str] = ordered.OrderedSet()
63
64 def _collect_atom(el: qlast.GroupingAtom) -> None:
65 if isinstance(el, qlast.GroupingIdentList):
66 for at in el.elements:
67 _collect_atom(at)
68
69 else:
70 assert isinstance(el, qlast.ObjectRef)
71 atoms.add(el.name)
72
73 def _collect_el(el: qlast.GroupingElement) -> None:
74 if isinstance(el, qlast.GroupingSets):
75 for sub in el.sets:
76 _collect_el(sub)
77 elif isinstance(el, qlast.GroupingOperation):
78 for at in el.elements:
79 _collect_atom(at)
80 elif isinstance(el, qlast.GroupingSimple):
81 _collect_atom(el.element)
82 else:
83 raise AssertionError('Unknown GroupingElement')
84
85 for el in els:
86 _collect_el(el)
87
88 return atoms
89
90
91 def desugar_group(
92 node: qlast.GroupQuery,
93 aliases: AliasGenerator,
94 ) -> qlast.InternalGroupQuery:
95 assert not isinstance(node, qlast.InternalGroupQuery)
96 alias_map: Dict[str, Tuple[str, qlast.Expr]] = {}
97
98 def rewrite_atom(el: qlast.GroupingAtom) -> qlast.GroupingAtom:
99 if isinstance(el, qlast.ObjectRef):
100 return el
101 elif isinstance(el, qlast.Path):
102 assert isinstance(el.steps[0], qlast.Ptr)
103 ptrname = el.steps[0].name
104 if ptrname not in alias_map:
105 alias = aliases.get(ptrname)
106 alias_map[ptrname] = (alias, el)
107 alias = alias_map[ptrname][0]
108 return qlast.ObjectRef(name=alias)
109 else:
110 return qlast.GroupingIdentList(
111 span=el.span,
112 elements=tuple(rewrite_atom(at) for at in el.elements),
113 )
114
115 def rewrite(el: qlast.GroupingElement) -> qlast.GroupingElement:
116 if isinstance(el, qlast.GroupingSimple):
117 return qlast.GroupingSimple(
118 span=el.span, element=rewrite_atom(el.element))
119 elif isinstance(el, qlast.GroupingSets):
120 return qlast.GroupingSets(
121 span=el.span, sets=[rewrite(s) for s in el.sets])
122 elif isinstance(el, qlast.GroupingOperation):
123 return qlast.GroupingOperation(
124 span=el.span,
125 oper=el.oper,
126 elements=[rewrite_atom(a) for a in el.elements])
127 raise AssertionError
128
129 for using_clause in (node.using or ()):
130 alias_map[using_clause.alias] = (using_clause.alias, using_clause.expr)
131
132 using = node.using[:] if node.using else []
133 by = [rewrite(by_el) for by_el in node.by]
134 for alias, path in alias_map.values():
135 using.append(qlast.AliasedExpr(alias=alias, expr=path))
136
137 actual_keys = collect_grouping_atoms(by)
138
139 g_alias = aliases.get('g')
140 grouping_alias = aliases.get('grouping')
141 output_dict = {
142 'key': make_free_object({
143 name: name_path(alias)
144 for name, (alias, _) in alias_map.items()
145 if alias in actual_keys
146 }),
147 'grouping': qlast.FunctionCall(
148 func='array_unpack',
149 args=[name_path(grouping_alias)],
150 ),
151 'elements': name_path(g_alias),
152 }
153 output_shape = make_free_object(output_dict)
154
155 return qlast.InternalGroupQuery(
156 span=node.span,
157 aliases=node.aliases,
158 subject_alias=node.subject_alias,
159 subject=node.subject,
160 # rewritten parts!
161 using=using,
162 by=by,
163 group_alias=g_alias,
164 grouping_alias=grouping_alias,
165 result=output_shape,
166 from_desugaring=True,
167 )
168
169
170 def _count_alias_uses(
171 node: qlast.Expr,
172 alias: str,
173 ) -> int:
174 uses = 0
175 for child in ast.find_children(node, qlast.Path):
176 match child:
177 case astutils.alias_view((alias2, _)) if alias == alias2:
178 uses += 1
179 return uses
180
181
182 def try_group_rewrite(
183 node: qlast.Query,
184 aliases: AliasGenerator,
185 ) -> Optional[qlast.Query]:
186 """
187 Try to apply some syntactic rewrites of GROUP expressions so we
188 can generate better code.
189
190 The two key desugarings are:
191
192 * Sink a shape into the internal group result
193
194 SELECT (GROUP ...) <shape>
195 [filter-clause] [order-clause] [other clauses]
196 =>
197 SELECT (
198 FOR GROUP ...
199 UNION <igroup-body> <shape>
200 [filter-clause]
201 [order-clause]
202 ) [other clauses]
203
204 * Convert a FOR over a group into just an internal group (and
205 a trivial FOR)
206
207 FOR g in (GROUP ...) UNION <body>
208 =>
209 FOR GROUP ...
210 UNION (
211 FOR g IN (<group-body>)
212 UNION <body>
213 )
214 """
215
216 # Inline trivial uses of aliases bound to a group and then
217 # immediately used, so that we can apply the other optimizations.
218 match node:
219 case qlast.SelectQuery(
220 aliases=[
221 *_,
222 qlast.AliasedExpr(alias=alias, expr=qlast.GroupQuery() as grp)
223 ] as qaliases,
224 result=qlast.Shape(
225 expr=astutils.alias_view((alias2, [])),
226 elements=elements,
227 ) as result,
228 ) if alias == alias2 and _count_alias_uses(result, alias) == 1:
229 node = node.replace(
230 aliases=qaliases[:-1],
231 result=qlast.Shape(expr=grp, elements=elements),
232 )
233
234 case qlast.ForQuery(
235 aliases=[
236 *_,
237 qlast.AliasedExpr(alias=alias, expr=qlast.GroupQuery() as grp)
238 ] as qaliases,
239 iterator=astutils.alias_view((alias2, [])),
240 result=result,
241 ) if alias == alias2 and _count_alias_uses(result, alias) == 0:
242 node = node.replace(
243 aliases=qaliases[:-1],
244 iterator=grp,
245 )
246
247 # Sink shapes into the GROUP
248 if (
249 isinstance(node, qlast.SelectQuery)
250 and isinstance(node.result, qlast.Shape)
251 and isinstance(node.result.expr, qlast.GroupQuery)
252 ):
253 igroup = desugar_group(node.result.expr, aliases)
254 igroup = igroup.replace(result=qlast.Shape(
255 expr=igroup.result, elements=node.result.elements))
256
257 # FILTER gets sunk into the body of the FOR GROUP
258 if node.where or node.orderby:
259 igroup = igroup.replace(
260 # We need to move the result_alias in case
261 # the FILTER depends on it.
262 result_alias=node.result_alias,
263 where=node.where,
264 orderby=node.orderby,
265 )
266
267 return node.replace(
268 result=igroup, result_alias=None, where=None, orderby=None)
269
270 # Eliminate FORs over GROUPs
271 if (
272 isinstance(node, qlast.ForQuery)
273 and isinstance(node.iterator, qlast.GroupQuery)
274 ):
275 igroup = desugar_group(node.iterator, aliases)
276 new_result = qlast.ForQuery(
277 iterator_alias=node.iterator_alias,
278 iterator=igroup.result,
279 result=node.result,
280 )
281 return igroup.replace(result=new_result, aliases=node.aliases)
282
283 return None
284
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/edb/edgeql/desugar_group.py b/edb/edgeql/desugar_group.py
--- a/edb/edgeql/desugar_group.py
+++ b/edb/edgeql/desugar_group.py
@@ -27,6 +27,8 @@
from typing import Optional, Tuple, AbstractSet, Dict, List
+from edb import errors
+
from edb.common import ast
from edb.common import ordered
from edb.common.compiler import AliasGenerator
@@ -126,11 +128,22 @@
elements=[rewrite_atom(a) for a in el.elements])
raise AssertionError
+ # The rewrite calls on the grouping elements populate alias_map
+ # with any bindings for pointers the by clause refers to directly.
+ by = [rewrite(by_el) for by_el in node.by]
+
for using_clause in (node.using or ()):
+ if using_clause.alias in alias_map:
+ # TODO: This would be a great place to allow multiple spans!
+ raise errors.QueryError(
+ f"USING clause binds a variable '{using_clause.alias}' "
+ f"but a property with that name is used directly in the BY "
+ f"clause",
+ span=alias_map[using_clause.alias][1].span,
+ )
alias_map[using_clause.alias] = (using_clause.alias, using_clause.expr)
- using = node.using[:] if node.using else []
- by = [rewrite(by_el) for by_el in node.by]
+ using = []
for alias, path in alias_map.values():
using.append(qlast.AliasedExpr(alias=alias, expr=path))
| {"golden_diff": "diff --git a/edb/edgeql/desugar_group.py b/edb/edgeql/desugar_group.py\n--- a/edb/edgeql/desugar_group.py\n+++ b/edb/edgeql/desugar_group.py\n@@ -27,6 +27,8 @@\n \n from typing import Optional, Tuple, AbstractSet, Dict, List\n \n+from edb import errors\n+\n from edb.common import ast\n from edb.common import ordered\n from edb.common.compiler import AliasGenerator\n@@ -126,11 +128,22 @@\n elements=[rewrite_atom(a) for a in el.elements])\n raise AssertionError\n \n+ # The rewrite calls on the grouping elements populate alias_map\n+ # with any bindings for pointers the by clause refers to directly.\n+ by = [rewrite(by_el) for by_el in node.by]\n+\n for using_clause in (node.using or ()):\n+ if using_clause.alias in alias_map:\n+ # TODO: This would be a great place to allow multiple spans!\n+ raise errors.QueryError(\n+ f\"USING clause binds a variable '{using_clause.alias}' \"\n+ f\"but a property with that name is used directly in the BY \"\n+ f\"clause\",\n+ span=alias_map[using_clause.alias][1].span,\n+ )\n alias_map[using_clause.alias] = (using_clause.alias, using_clause.expr)\n \n- using = node.using[:] if node.using else []\n- by = [rewrite(by_el) for by_el in node.by]\n+ using = []\n for alias, path in alias_map.values():\n using.append(qlast.AliasedExpr(alias=alias, expr=path))\n", "issue": "Group by - ProtocolError: cannot decode Object: expected 3 elements, got 2110812590\nHi\r\n\r\n```\r\n# this is working\r\ngroup EntryItem\r\nby .account;\r\n\r\n# this is not working\r\ngroup EntryItem\r\nusing accountCode := .account\r\nby accountCode;\r\n-> ProtocolError: cannot decode Object: expected 3 elements, got 2110812590\r\n```\r\n\r\n- EdgeDB Version: 2.9\r\n- EdgeDB CLI Version: 2.2.6+7eabbf9\r\n- OS Version: macOS 12.1\r\n\r\nSchema:\r\n\r\n```\r\nmodule default {\r\n type Account {\r\n required link book -> Book;\r\n required property code -> str {\r\n constraint exclusive;\r\n };\r\n\r\n property displayCode := .code[0:5] ++ ' ' ++ .code[5:];\r\n required property name -> str;\r\n required link currency -> Currency;\r\n\r\n constraint exclusive on ((.book, .code));\r\n }\r\n\r\n type EntryItem {\r\n required link entry -> Entry;\r\n required property lineNumber -> int16;\r\n\r\n required link account -> Account;\r\n ...\r\n\r\n constraint exclusive on ((.entry, .lineNumber))\r\n }\r\n}\r\n```\r\n\n", "before_files": [{"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2008-present MagicStack Inc. 
and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"Desugar GROUP queries into internal FOR GROUP queries.\n\nThis code is called by both the model and the real implementation,\nthough if that starts becoming a problem it should just be abandoned.\n\"\"\"\n\nfrom __future__ import annotations\n\n\nfrom typing import Optional, Tuple, AbstractSet, Dict, List\n\nfrom edb.common import ast\nfrom edb.common import ordered\nfrom edb.common.compiler import AliasGenerator\n\nfrom edb.edgeql import ast as qlast\nfrom edb.edgeql.compiler import astutils\n\n\ndef key_name(s: str) -> str:\n return s.split('~')[0]\n\n\ndef name_path(name: str) -> qlast.Path:\n return qlast.Path(steps=[qlast.ObjectRef(name=name)])\n\n\ndef make_free_object(els: Dict[str, qlast.Expr]) -> qlast.Shape:\n return qlast.Shape(\n expr=None,\n elements=[\n qlast.ShapeElement(\n expr=qlast.Path(steps=[qlast.Ptr(name=name)]),\n compexpr=expr\n )\n for name, expr in els.items()\n ],\n )\n\n\ndef collect_grouping_atoms(\n els: List[qlast.GroupingElement],\n) -> AbstractSet[str]:\n atoms: ordered.OrderedSet[str] = ordered.OrderedSet()\n\n def _collect_atom(el: qlast.GroupingAtom) -> None:\n if isinstance(el, qlast.GroupingIdentList):\n for at in el.elements:\n _collect_atom(at)\n\n else:\n assert isinstance(el, qlast.ObjectRef)\n atoms.add(el.name)\n\n def _collect_el(el: qlast.GroupingElement) -> None:\n if isinstance(el, qlast.GroupingSets):\n for sub in el.sets:\n _collect_el(sub)\n elif isinstance(el, qlast.GroupingOperation):\n for at in el.elements:\n _collect_atom(at)\n elif isinstance(el, qlast.GroupingSimple):\n _collect_atom(el.element)\n else:\n raise AssertionError('Unknown GroupingElement')\n\n for el in els:\n _collect_el(el)\n\n return atoms\n\n\ndef desugar_group(\n node: qlast.GroupQuery,\n aliases: AliasGenerator,\n) -> qlast.InternalGroupQuery:\n assert not isinstance(node, qlast.InternalGroupQuery)\n alias_map: Dict[str, Tuple[str, qlast.Expr]] = {}\n\n def rewrite_atom(el: qlast.GroupingAtom) -> qlast.GroupingAtom:\n if isinstance(el, qlast.ObjectRef):\n return el\n elif isinstance(el, qlast.Path):\n assert isinstance(el.steps[0], qlast.Ptr)\n ptrname = el.steps[0].name\n if ptrname not in alias_map:\n alias = aliases.get(ptrname)\n alias_map[ptrname] = (alias, el)\n alias = alias_map[ptrname][0]\n return qlast.ObjectRef(name=alias)\n else:\n return qlast.GroupingIdentList(\n span=el.span,\n elements=tuple(rewrite_atom(at) for at in el.elements),\n )\n\n def rewrite(el: qlast.GroupingElement) -> qlast.GroupingElement:\n if isinstance(el, qlast.GroupingSimple):\n return qlast.GroupingSimple(\n span=el.span, element=rewrite_atom(el.element))\n elif isinstance(el, qlast.GroupingSets):\n return qlast.GroupingSets(\n span=el.span, sets=[rewrite(s) for s in el.sets])\n elif isinstance(el, qlast.GroupingOperation):\n return qlast.GroupingOperation(\n span=el.span,\n oper=el.oper,\n elements=[rewrite_atom(a) for a in el.elements])\n raise AssertionError\n\n for using_clause in (node.using 
or ()):\n alias_map[using_clause.alias] = (using_clause.alias, using_clause.expr)\n\n using = node.using[:] if node.using else []\n by = [rewrite(by_el) for by_el in node.by]\n for alias, path in alias_map.values():\n using.append(qlast.AliasedExpr(alias=alias, expr=path))\n\n actual_keys = collect_grouping_atoms(by)\n\n g_alias = aliases.get('g')\n grouping_alias = aliases.get('grouping')\n output_dict = {\n 'key': make_free_object({\n name: name_path(alias)\n for name, (alias, _) in alias_map.items()\n if alias in actual_keys\n }),\n 'grouping': qlast.FunctionCall(\n func='array_unpack',\n args=[name_path(grouping_alias)],\n ),\n 'elements': name_path(g_alias),\n }\n output_shape = make_free_object(output_dict)\n\n return qlast.InternalGroupQuery(\n span=node.span,\n aliases=node.aliases,\n subject_alias=node.subject_alias,\n subject=node.subject,\n # rewritten parts!\n using=using,\n by=by,\n group_alias=g_alias,\n grouping_alias=grouping_alias,\n result=output_shape,\n from_desugaring=True,\n )\n\n\ndef _count_alias_uses(\n node: qlast.Expr,\n alias: str,\n) -> int:\n uses = 0\n for child in ast.find_children(node, qlast.Path):\n match child:\n case astutils.alias_view((alias2, _)) if alias == alias2:\n uses += 1\n return uses\n\n\ndef try_group_rewrite(\n node: qlast.Query,\n aliases: AliasGenerator,\n) -> Optional[qlast.Query]:\n \"\"\"\n Try to apply some syntactic rewrites of GROUP expressions so we\n can generate better code.\n\n The two key desugarings are:\n\n * Sink a shape into the internal group result\n\n SELECT (GROUP ...) <shape>\n [filter-clause] [order-clause] [other clauses]\n =>\n SELECT (\n FOR GROUP ...\n UNION <igroup-body> <shape>\n [filter-clause]\n [order-clause]\n ) [other clauses]\n\n * Convert a FOR over a group into just an internal group (and\n a trivial FOR)\n\n FOR g in (GROUP ...) 
UNION <body>\n =>\n FOR GROUP ...\n UNION (\n FOR g IN (<group-body>)\n UNION <body>\n )\n \"\"\"\n\n # Inline trivial uses of aliases bound to a group and then\n # immediately used, so that we can apply the other optimizations.\n match node:\n case qlast.SelectQuery(\n aliases=[\n *_,\n qlast.AliasedExpr(alias=alias, expr=qlast.GroupQuery() as grp)\n ] as qaliases,\n result=qlast.Shape(\n expr=astutils.alias_view((alias2, [])),\n elements=elements,\n ) as result,\n ) if alias == alias2 and _count_alias_uses(result, alias) == 1:\n node = node.replace(\n aliases=qaliases[:-1],\n result=qlast.Shape(expr=grp, elements=elements),\n )\n\n case qlast.ForQuery(\n aliases=[\n *_,\n qlast.AliasedExpr(alias=alias, expr=qlast.GroupQuery() as grp)\n ] as qaliases,\n iterator=astutils.alias_view((alias2, [])),\n result=result,\n ) if alias == alias2 and _count_alias_uses(result, alias) == 0:\n node = node.replace(\n aliases=qaliases[:-1],\n iterator=grp,\n )\n\n # Sink shapes into the GROUP\n if (\n isinstance(node, qlast.SelectQuery)\n and isinstance(node.result, qlast.Shape)\n and isinstance(node.result.expr, qlast.GroupQuery)\n ):\n igroup = desugar_group(node.result.expr, aliases)\n igroup = igroup.replace(result=qlast.Shape(\n expr=igroup.result, elements=node.result.elements))\n\n # FILTER gets sunk into the body of the FOR GROUP\n if node.where or node.orderby:\n igroup = igroup.replace(\n # We need to move the result_alias in case\n # the FILTER depends on it.\n result_alias=node.result_alias,\n where=node.where,\n orderby=node.orderby,\n )\n\n return node.replace(\n result=igroup, result_alias=None, where=None, orderby=None)\n\n # Eliminate FORs over GROUPs\n if (\n isinstance(node, qlast.ForQuery)\n and isinstance(node.iterator, qlast.GroupQuery)\n ):\n igroup = desugar_group(node.iterator, aliases)\n new_result = qlast.ForQuery(\n iterator_alias=node.iterator_alias,\n iterator=igroup.result,\n result=node.result,\n )\n return igroup.replace(result=new_result, aliases=node.aliases)\n\n return None\n", "path": "edb/edgeql/desugar_group.py"}], "after_files": [{"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2008-present MagicStack Inc. 
and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"Desugar GROUP queries into internal FOR GROUP queries.\n\nThis code is called by both the model and the real implementation,\nthough if that starts becoming a problem it should just be abandoned.\n\"\"\"\n\nfrom __future__ import annotations\n\n\nfrom typing import Optional, Tuple, AbstractSet, Dict, List\n\nfrom edb import errors\n\nfrom edb.common import ast\nfrom edb.common import ordered\nfrom edb.common.compiler import AliasGenerator\n\nfrom edb.edgeql import ast as qlast\nfrom edb.edgeql.compiler import astutils\n\n\ndef key_name(s: str) -> str:\n return s.split('~')[0]\n\n\ndef name_path(name: str) -> qlast.Path:\n return qlast.Path(steps=[qlast.ObjectRef(name=name)])\n\n\ndef make_free_object(els: Dict[str, qlast.Expr]) -> qlast.Shape:\n return qlast.Shape(\n expr=None,\n elements=[\n qlast.ShapeElement(\n expr=qlast.Path(steps=[qlast.Ptr(name=name)]),\n compexpr=expr\n )\n for name, expr in els.items()\n ],\n )\n\n\ndef collect_grouping_atoms(\n els: List[qlast.GroupingElement],\n) -> AbstractSet[str]:\n atoms: ordered.OrderedSet[str] = ordered.OrderedSet()\n\n def _collect_atom(el: qlast.GroupingAtom) -> None:\n if isinstance(el, qlast.GroupingIdentList):\n for at in el.elements:\n _collect_atom(at)\n\n else:\n assert isinstance(el, qlast.ObjectRef)\n atoms.add(el.name)\n\n def _collect_el(el: qlast.GroupingElement) -> None:\n if isinstance(el, qlast.GroupingSets):\n for sub in el.sets:\n _collect_el(sub)\n elif isinstance(el, qlast.GroupingOperation):\n for at in el.elements:\n _collect_atom(at)\n elif isinstance(el, qlast.GroupingSimple):\n _collect_atom(el.element)\n else:\n raise AssertionError('Unknown GroupingElement')\n\n for el in els:\n _collect_el(el)\n\n return atoms\n\n\ndef desugar_group(\n node: qlast.GroupQuery,\n aliases: AliasGenerator,\n) -> qlast.InternalGroupQuery:\n assert not isinstance(node, qlast.InternalGroupQuery)\n alias_map: Dict[str, Tuple[str, qlast.Expr]] = {}\n\n def rewrite_atom(el: qlast.GroupingAtom) -> qlast.GroupingAtom:\n if isinstance(el, qlast.ObjectRef):\n return el\n elif isinstance(el, qlast.Path):\n assert isinstance(el.steps[0], qlast.Ptr)\n ptrname = el.steps[0].name\n if ptrname not in alias_map:\n alias = aliases.get(ptrname)\n alias_map[ptrname] = (alias, el)\n alias = alias_map[ptrname][0]\n return qlast.ObjectRef(name=alias)\n else:\n return qlast.GroupingIdentList(\n span=el.span,\n elements=tuple(rewrite_atom(at) for at in el.elements),\n )\n\n def rewrite(el: qlast.GroupingElement) -> qlast.GroupingElement:\n if isinstance(el, qlast.GroupingSimple):\n return qlast.GroupingSimple(\n span=el.span, element=rewrite_atom(el.element))\n elif isinstance(el, qlast.GroupingSets):\n return qlast.GroupingSets(\n span=el.span, sets=[rewrite(s) for s in el.sets])\n elif isinstance(el, qlast.GroupingOperation):\n return qlast.GroupingOperation(\n span=el.span,\n oper=el.oper,\n elements=[rewrite_atom(a) for a in el.elements])\n raise AssertionError\n\n # The 
rewrite calls on the grouping elements populate alias_map\n # with any bindings for pointers the by clause refers to directly.\n by = [rewrite(by_el) for by_el in node.by]\n\n for using_clause in (node.using or ()):\n if using_clause.alias in alias_map:\n # TODO: This would be a great place to allow multiple spans!\n raise errors.QueryError(\n f\"USING clause binds a variable '{using_clause.alias}' \"\n f\"but a property with that name is used directly in the BY \"\n f\"clause\",\n span=alias_map[using_clause.alias][1].span,\n )\n alias_map[using_clause.alias] = (using_clause.alias, using_clause.expr)\n\n using = []\n for alias, path in alias_map.values():\n using.append(qlast.AliasedExpr(alias=alias, expr=path))\n\n actual_keys = collect_grouping_atoms(by)\n\n g_alias = aliases.get('g')\n grouping_alias = aliases.get('grouping')\n output_dict = {\n 'key': make_free_object({\n name: name_path(alias)\n for name, (alias, _) in alias_map.items()\n if alias in actual_keys\n }),\n 'grouping': qlast.FunctionCall(\n func='array_unpack',\n args=[name_path(grouping_alias)],\n ),\n 'elements': name_path(g_alias),\n }\n output_shape = make_free_object(output_dict)\n\n return qlast.InternalGroupQuery(\n span=node.span,\n aliases=node.aliases,\n subject_alias=node.subject_alias,\n subject=node.subject,\n # rewritten parts!\n using=using,\n by=by,\n group_alias=g_alias,\n grouping_alias=grouping_alias,\n result=output_shape,\n from_desugaring=True,\n )\n\n\ndef _count_alias_uses(\n node: qlast.Expr,\n alias: str,\n) -> int:\n uses = 0\n for child in ast.find_children(node, qlast.Path):\n match child:\n case astutils.alias_view((alias2, _)) if alias == alias2:\n uses += 1\n return uses\n\n\ndef try_group_rewrite(\n node: qlast.Query,\n aliases: AliasGenerator,\n) -> Optional[qlast.Query]:\n \"\"\"\n Try to apply some syntactic rewrites of GROUP expressions so we\n can generate better code.\n\n The two key desugarings are:\n\n * Sink a shape into the internal group result\n\n SELECT (GROUP ...) <shape>\n [filter-clause] [order-clause] [other clauses]\n =>\n SELECT (\n FOR GROUP ...\n UNION <igroup-body> <shape>\n [filter-clause]\n [order-clause]\n ) [other clauses]\n\n * Convert a FOR over a group into just an internal group (and\n a trivial FOR)\n\n FOR g in (GROUP ...) 
UNION <body>\n =>\n FOR GROUP ...\n UNION (\n FOR g IN (<group-body>)\n UNION <body>\n )\n \"\"\"\n\n # Inline trivial uses of aliases bound to a group and then\n # immediately used, so that we can apply the other optimizations.\n match node:\n case qlast.SelectQuery(\n aliases=[\n *_,\n qlast.AliasedExpr(alias=alias, expr=qlast.GroupQuery() as grp)\n ] as qaliases,\n result=qlast.Shape(\n expr=astutils.alias_view((alias2, [])),\n elements=elements,\n ) as result,\n ) if alias == alias2 and _count_alias_uses(result, alias) == 1:\n node = node.replace(\n aliases=qaliases[:-1],\n result=qlast.Shape(expr=grp, elements=elements),\n )\n\n case qlast.ForQuery(\n aliases=[\n *_,\n qlast.AliasedExpr(alias=alias, expr=qlast.GroupQuery() as grp)\n ] as qaliases,\n iterator=astutils.alias_view((alias2, [])),\n result=result,\n ) if alias == alias2 and _count_alias_uses(result, alias) == 0:\n node = node.replace(\n aliases=qaliases[:-1],\n iterator=grp,\n )\n\n # Sink shapes into the GROUP\n if (\n isinstance(node, qlast.SelectQuery)\n and isinstance(node.result, qlast.Shape)\n and isinstance(node.result.expr, qlast.GroupQuery)\n ):\n igroup = desugar_group(node.result.expr, aliases)\n igroup = igroup.replace(result=qlast.Shape(\n expr=igroup.result, elements=node.result.elements))\n\n # FILTER gets sunk into the body of the FOR GROUP\n if node.where or node.orderby:\n igroup = igroup.replace(\n # We need to move the result_alias in case\n # the FILTER depends on it.\n result_alias=node.result_alias,\n where=node.where,\n orderby=node.orderby,\n )\n\n return node.replace(\n result=igroup, result_alias=None, where=None, orderby=None)\n\n # Eliminate FORs over GROUPs\n if (\n isinstance(node, qlast.ForQuery)\n and isinstance(node.iterator, qlast.GroupQuery)\n ):\n igroup = desugar_group(node.iterator, aliases)\n new_result = qlast.ForQuery(\n iterator_alias=node.iterator_alias,\n iterator=igroup.result,\n result=node.result,\n )\n return igroup.replace(result=new_result, aliases=node.aliases)\n\n return None\n", "path": "edb/edgeql/desugar_group.py"}]} | 3,294 | 372 |
gh_patches_debug_30873 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-5355 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `colossalai/tensor/param_op_hook.py`
Content:
```
1 from abc import ABC, abstractmethod
2 from contextlib import contextmanager
3 from typing import Any, List, Tuple
4
5 import torch
6 from torch.utils._pytree import TreeSpec, tree_flatten, tree_unflatten
7
8
9 class ColoParamOpHook(ABC):
10 """
11 Hook which is triggered by each operation when operands contain ColoParameter.
12 To customize it, you must inherit this abstract class, and implement ``pre_forward``,
13 ``post_forward``, ``pre_backward`` and ``post_backward``.
14 These four methods apply a list of ColoParameter as input args.
15 """
16
17 @abstractmethod
18 def pre_forward(self, params: List[torch.Tensor]) -> None:
19 pass
20
21 @abstractmethod
22 def post_forward(self, params: List[torch.Tensor]) -> None:
23 pass
24
25 @abstractmethod
26 def pre_backward(self, params: List[torch.Tensor]) -> None:
27 pass
28
29 @abstractmethod
30 def post_backward(self, params: List[torch.Tensor]) -> None:
31 pass
32
33
34 class ColoParamOpHookManager:
35 """
36 Manage your param op hooks. It only has static methods.
37 The only static method you should call is ``use_hooks(*hooks)``.
38 """
39
40 hooks: Tuple[ColoParamOpHook, ...] = tuple()
41
42 @staticmethod
43 @contextmanager
44 def use_hooks(*hooks: ColoParamOpHook):
45 """Change the param op hooks you use. Nested calling is allowed.
46
47 Example:
48 >>> with ColoParamOpHookManager.use_hooks(*hooks):
49 >>> do_something()
50 >>> with ColoParamOpHookManager.use_hooks():
51 >>> // clear hooks
52 >>> do_something()
53 """
54 try:
55 old_param_op_hooks = ColoParamOpHookManager.hooks
56 ColoParamOpHookManager.hooks = hooks
57 yield
58 finally:
59 ColoParamOpHookManager.hooks = old_param_op_hooks
60
61 @staticmethod
62 def _trigger_pre_forward(params: List[torch.Tensor]) -> None:
63 for hook in ColoParamOpHookManager.hooks:
64 hook.pre_forward(params)
65
66 @staticmethod
67 def _trigger_post_forward(params: List[torch.Tensor]) -> None:
68 for hook in ColoParamOpHookManager.hooks:
69 hook.post_forward(params)
70
71 @staticmethod
72 def _trigger_pre_backward(params: List[torch.Tensor]) -> None:
73 for hook in ColoParamOpHookManager.hooks:
74 hook.pre_backward(params)
75
76 @staticmethod
77 def _trigger_post_backward(params: List[torch.Tensor]) -> None:
78 for hook in ColoParamOpHookManager.hooks:
79 hook.post_backward(params)
80
81 @staticmethod
82 def pre_op(params: List[torch.Tensor], *args: Any) -> list:
83 ColoParamOpHookManager._trigger_pre_forward(params)
84 # auto grad function can only recognize torch.Tensor, thus we have to flatten the input
85 # if one of the input requires grad, all the output will be treated as requires grad
86 # and will have grad fn even the corresponding input does not require grad
87 # we have to extract tensors requiring grad into flat list and then merge them back
88 grad_args, other_args, grad_flags, spec = _flatten_grad_args(args)
89 new_grad_args = PreFwdPostBwd.apply(params, *grad_args)
90 return _merge_args(new_grad_args, other_args, grad_flags, spec)
91
92 @staticmethod
93 def post_op(params: List[torch.Tensor], arg: Any) -> Any:
94 ColoParamOpHookManager._trigger_post_forward(params)
95 return PostFwdPreBwd.apply(params, arg)
96
97 @staticmethod
98 def has_hook() -> bool:
99 return len(ColoParamOpHookManager.hooks) > 0
100
101
102 class PreFwdPostBwd(torch.autograd.Function):
103 @staticmethod
104 def forward(ctx, params, *args):
105 ctx.params = params
106 return args
107
108 @staticmethod
109 def backward(ctx, *grads):
110 ColoParamOpHookManager._trigger_post_backward(ctx.params)
111 return (None,) + grads
112
113
114 class PostFwdPreBwd(torch.autograd.Function):
115 @staticmethod
116 def forward(ctx, params, args):
117 ctx.params = params
118 return args
119
120 @staticmethod
121 def backward(ctx, *grads):
122 ColoParamOpHookManager._trigger_pre_backward(ctx.params)
123 return (None,) + grads
124
125
126 def _is_grad_tensor(obj) -> bool:
127 if torch.is_tensor(obj):
128 if obj.grad_fn is not None or obj.requires_grad:
129 return True
130 return False
131
132
133 def _flatten_grad_args(args) -> Tuple[list, list, List[bool], TreeSpec]:
134 flat_args, spec = tree_flatten(args)
135 grad_args = []
136 other_args = []
137 grad_flags = []
138 for arg in flat_args:
139 flag = _is_grad_tensor(arg)
140 grad_flags.append(flag)
141 if flag:
142 grad_args.append(arg)
143 else:
144 other_args.append(arg)
145 assert len(grad_args) > 0
146 return grad_args, other_args, grad_flags, spec
147
148
149 def _merge_args(grad_args, other_args, grad_flags, spec):
150 grad_iter = iter(grad_args)
151 other_iter = iter(other_args)
152 flat_args = [next(grad_iter) if flag else next(other_iter) for flag in grad_flags]
153 return tree_unflatten(flat_args, spec)
154
```
Path: `colossalai/tensor/colo_parameter.py`
Content:
```
1 from typing import Optional
2
3 import torch
4
5 from colossalai.tensor.colo_tensor import ColoTensor
6 from colossalai.tensor.param_op_hook import ColoParamOpHookManager
7
8 from .colo_tensor import _convert_output
9
10 WHITE_LIST_FUNCS = {torch.Tensor.__getitem__, torch.Tensor.is_floating_point}
11
12
13 def is_no_hook_op(func) -> bool:
14 return func.__name__.startswith("__") and func not in WHITE_LIST_FUNCS
15
16
17 def filter_colo_parameters(*args, **kwargs):
18 param_list = []
19
20 def get_colo_parameters(element) -> None:
21 if isinstance(element, list) or isinstance(element, tuple):
22 for e in element:
23 get_colo_parameters(e)
24 elif isinstance(element, dict):
25 raise RuntimeError("Found Dict: ColoParameter can't deal with complicated arguments.")
26 elif isinstance(element, ColoParameter):
27 param_list.append(element)
28 return
29
30 for a in args:
31 get_colo_parameters(a)
32 for v in kwargs.values():
33 get_colo_parameters(v)
34
35 return param_list
36
37
38 def replace_args(args, kwargs, new_args):
39 args = new_args[: len(args)]
40 for k, v in zip(kwargs.keys(), new_args[len(args) :]):
41 kwargs[k] = v
42 return tuple(args), kwargs
43
44
45 class ColoParameter(ColoTensor, torch.nn.Parameter):
46 r"""A kind of ColoTensor to be considered as a module parameter."""
47
48 def __new__(cls, data: Optional[torch.Tensor] = None, requires_grad: bool = True) -> "ColoParameter":
49 if data is None:
50 data = torch.empty(0)
51 return torch.Tensor._make_subclass(cls, data, requires_grad)
52
53 @classmethod
54 def __torch_function__(cls, func, types, args=..., kwargs=None):
55 if kwargs is None:
56 kwargs = {}
57 if ColoParamOpHookManager.has_hook() and not is_no_hook_op(func):
58 params = filter_colo_parameters(*args, **kwargs)
59 if len(params) > 0:
60 with torch._C.DisableTorchFunction():
61 new_args = ColoParamOpHookManager.pre_op(params, *args, *kwargs.values())
62 args, kwargs = replace_args(args, kwargs, new_args)
63 ret = super().__torch_function__(func, types, args, kwargs)
64 with torch._C.DisableTorchFunction():
65 ret = ColoParamOpHookManager.post_op(params, ret)
66 return _convert_output(ret, func)
67 return super().__torch_function__(func, types, args, kwargs)
68
69 def __deepcopy__(self, memo):
70 if id(self) in memo:
71 return memo[id(self)]
72 else:
73 with torch._C.DisableTorchFunction():
74 data = self.data.clone()
75 tensor = ColoParameter(data, self.requires_grad)
76 memo[id(self)] = tensor
77 return tensor
78
79 def __reduce_ex__(self, proto):
80 # Adapted from torch._utils._rebuild_parameter
81 # def _rebuild_colo_parameter(data, requires_grad, backward_hooks):
82 # colo_param = ColoParameter(data, requires_grad)
83 # colo_param._backward_hooks = backward_hooks
84 # return colo_param
85
86 # return (
87 # _rebuild_colo_parameter,
88 # (self.data, self.requires_grad, OrderedDict())
89 # )
90
91 # TODO(jzy) we don't support object reflection now.
92 # distspec cannot be pickled or rebuilt because it's tightly connected to runtime attribute `process_group`.
93 raise NotImplementedError
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/colossalai/tensor/colo_parameter.py b/colossalai/tensor/colo_parameter.py
--- a/colossalai/tensor/colo_parameter.py
+++ b/colossalai/tensor/colo_parameter.py
@@ -7,11 +7,12 @@
from .colo_tensor import _convert_output
-WHITE_LIST_FUNCS = {torch.Tensor.__getitem__, torch.Tensor.is_floating_point}
+WHITE_LIST_FUNCS = {torch.Tensor.__getitem__}
+NO_HOOK_FUNCS = {torch.Tensor.is_floating_point}
def is_no_hook_op(func) -> bool:
- return func.__name__.startswith("__") and func not in WHITE_LIST_FUNCS
+ return (func.__name__.startswith("__") and func not in WHITE_LIST_FUNCS) or func in NO_HOOK_FUNCS
def filter_colo_parameters(*args, **kwargs):
diff --git a/colossalai/tensor/param_op_hook.py b/colossalai/tensor/param_op_hook.py
--- a/colossalai/tensor/param_op_hook.py
+++ b/colossalai/tensor/param_op_hook.py
@@ -92,7 +92,10 @@
@staticmethod
def post_op(params: List[torch.Tensor], arg: Any) -> Any:
ColoParamOpHookManager._trigger_post_forward(params)
- return PostFwdPreBwd.apply(params, arg)
+ # incase the output is a tuple, we have to flatten it
+ grad_args, other_args, grad_flags, spec = _flatten_grad_args(arg)
+ new_grad_args = PostFwdPreBwd.apply(params, *grad_args)
+ return _merge_args(new_grad_args, other_args, grad_flags, spec)
@staticmethod
def has_hook() -> bool:
@@ -113,7 +116,7 @@
class PostFwdPreBwd(torch.autograd.Function):
@staticmethod
- def forward(ctx, params, args):
+ def forward(ctx, params, *args):
ctx.params = params
return args
@@ -142,7 +145,6 @@
grad_args.append(arg)
else:
other_args.append(arg)
- assert len(grad_args) > 0
return grad_args, other_args, grad_flags, spec
| {"golden_diff": "diff --git a/colossalai/tensor/colo_parameter.py b/colossalai/tensor/colo_parameter.py\n--- a/colossalai/tensor/colo_parameter.py\n+++ b/colossalai/tensor/colo_parameter.py\n@@ -7,11 +7,12 @@\n \n from .colo_tensor import _convert_output\n \n-WHITE_LIST_FUNCS = {torch.Tensor.__getitem__, torch.Tensor.is_floating_point}\n+WHITE_LIST_FUNCS = {torch.Tensor.__getitem__}\n+NO_HOOK_FUNCS = {torch.Tensor.is_floating_point}\n \n \n def is_no_hook_op(func) -> bool:\n- return func.__name__.startswith(\"__\") and func not in WHITE_LIST_FUNCS\n+ return (func.__name__.startswith(\"__\") and func not in WHITE_LIST_FUNCS) or func in NO_HOOK_FUNCS\n \n \n def filter_colo_parameters(*args, **kwargs):\ndiff --git a/colossalai/tensor/param_op_hook.py b/colossalai/tensor/param_op_hook.py\n--- a/colossalai/tensor/param_op_hook.py\n+++ b/colossalai/tensor/param_op_hook.py\n@@ -92,7 +92,10 @@\n @staticmethod\n def post_op(params: List[torch.Tensor], arg: Any) -> Any:\n ColoParamOpHookManager._trigger_post_forward(params)\n- return PostFwdPreBwd.apply(params, arg)\n+ # incase the output is a tuple, we have to flatten it\n+ grad_args, other_args, grad_flags, spec = _flatten_grad_args(arg)\n+ new_grad_args = PostFwdPreBwd.apply(params, *grad_args)\n+ return _merge_args(new_grad_args, other_args, grad_flags, spec)\n \n @staticmethod\n def has_hook() -> bool:\n@@ -113,7 +116,7 @@\n \n class PostFwdPreBwd(torch.autograd.Function):\n @staticmethod\n- def forward(ctx, params, args):\n+ def forward(ctx, params, *args):\n ctx.params = params\n return args\n \n@@ -142,7 +145,6 @@\n grad_args.append(arg)\n else:\n other_args.append(arg)\n- assert len(grad_args) > 0\n return grad_args, other_args, grad_flags, spec\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from abc import ABC, abstractmethod\nfrom contextlib import contextmanager\nfrom typing import Any, List, Tuple\n\nimport torch\nfrom torch.utils._pytree import TreeSpec, tree_flatten, tree_unflatten\n\n\nclass ColoParamOpHook(ABC):\n \"\"\"\n Hook which is triggered by each operation when operands contain ColoParameter.\n To customize it, you must inherit this abstract class, and implement ``pre_forward``,\n ``post_forward``, ``pre_backward`` and ``post_backward``.\n These four methods apply a list of ColoParameter as input args.\n \"\"\"\n\n @abstractmethod\n def pre_forward(self, params: List[torch.Tensor]) -> None:\n pass\n\n @abstractmethod\n def post_forward(self, params: List[torch.Tensor]) -> None:\n pass\n\n @abstractmethod\n def pre_backward(self, params: List[torch.Tensor]) -> None:\n pass\n\n @abstractmethod\n def post_backward(self, params: List[torch.Tensor]) -> None:\n pass\n\n\nclass ColoParamOpHookManager:\n \"\"\"\n Manage your param op hooks. It only has static methods.\n The only static method you should call is ``use_hooks(*hooks)``.\n \"\"\"\n\n hooks: Tuple[ColoParamOpHook, ...] = tuple()\n\n @staticmethod\n @contextmanager\n def use_hooks(*hooks: ColoParamOpHook):\n \"\"\"Change the param op hooks you use. 
Nested calling is allowed.\n\n Example:\n >>> with ColoParamOpHookManager.use_hooks(*hooks):\n >>> do_something()\n >>> with ColoParamOpHookManager.use_hooks():\n >>> // clear hooks\n >>> do_something()\n \"\"\"\n try:\n old_param_op_hooks = ColoParamOpHookManager.hooks\n ColoParamOpHookManager.hooks = hooks\n yield\n finally:\n ColoParamOpHookManager.hooks = old_param_op_hooks\n\n @staticmethod\n def _trigger_pre_forward(params: List[torch.Tensor]) -> None:\n for hook in ColoParamOpHookManager.hooks:\n hook.pre_forward(params)\n\n @staticmethod\n def _trigger_post_forward(params: List[torch.Tensor]) -> None:\n for hook in ColoParamOpHookManager.hooks:\n hook.post_forward(params)\n\n @staticmethod\n def _trigger_pre_backward(params: List[torch.Tensor]) -> None:\n for hook in ColoParamOpHookManager.hooks:\n hook.pre_backward(params)\n\n @staticmethod\n def _trigger_post_backward(params: List[torch.Tensor]) -> None:\n for hook in ColoParamOpHookManager.hooks:\n hook.post_backward(params)\n\n @staticmethod\n def pre_op(params: List[torch.Tensor], *args: Any) -> list:\n ColoParamOpHookManager._trigger_pre_forward(params)\n # auto grad function can only recognize torch.Tensor, thus we have to flatten the input\n # if one of the input requires grad, all the output will be treated as requires grad\n # and will have grad fn even the corresponding input does not require grad\n # we have to extract tensors requiring grad into flat list and then merge them back\n grad_args, other_args, grad_flags, spec = _flatten_grad_args(args)\n new_grad_args = PreFwdPostBwd.apply(params, *grad_args)\n return _merge_args(new_grad_args, other_args, grad_flags, spec)\n\n @staticmethod\n def post_op(params: List[torch.Tensor], arg: Any) -> Any:\n ColoParamOpHookManager._trigger_post_forward(params)\n return PostFwdPreBwd.apply(params, arg)\n\n @staticmethod\n def has_hook() -> bool:\n return len(ColoParamOpHookManager.hooks) > 0\n\n\nclass PreFwdPostBwd(torch.autograd.Function):\n @staticmethod\n def forward(ctx, params, *args):\n ctx.params = params\n return args\n\n @staticmethod\n def backward(ctx, *grads):\n ColoParamOpHookManager._trigger_post_backward(ctx.params)\n return (None,) + grads\n\n\nclass PostFwdPreBwd(torch.autograd.Function):\n @staticmethod\n def forward(ctx, params, args):\n ctx.params = params\n return args\n\n @staticmethod\n def backward(ctx, *grads):\n ColoParamOpHookManager._trigger_pre_backward(ctx.params)\n return (None,) + grads\n\n\ndef _is_grad_tensor(obj) -> bool:\n if torch.is_tensor(obj):\n if obj.grad_fn is not None or obj.requires_grad:\n return True\n return False\n\n\ndef _flatten_grad_args(args) -> Tuple[list, list, List[bool], TreeSpec]:\n flat_args, spec = tree_flatten(args)\n grad_args = []\n other_args = []\n grad_flags = []\n for arg in flat_args:\n flag = _is_grad_tensor(arg)\n grad_flags.append(flag)\n if flag:\n grad_args.append(arg)\n else:\n other_args.append(arg)\n assert len(grad_args) > 0\n return grad_args, other_args, grad_flags, spec\n\n\ndef _merge_args(grad_args, other_args, grad_flags, spec):\n grad_iter = iter(grad_args)\n other_iter = iter(other_args)\n flat_args = [next(grad_iter) if flag else next(other_iter) for flag in grad_flags]\n return tree_unflatten(flat_args, spec)\n", "path": "colossalai/tensor/param_op_hook.py"}, {"content": "from typing import Optional\n\nimport torch\n\nfrom colossalai.tensor.colo_tensor import ColoTensor\nfrom colossalai.tensor.param_op_hook import ColoParamOpHookManager\n\nfrom .colo_tensor import 
_convert_output\n\nWHITE_LIST_FUNCS = {torch.Tensor.__getitem__, torch.Tensor.is_floating_point}\n\n\ndef is_no_hook_op(func) -> bool:\n return func.__name__.startswith(\"__\") and func not in WHITE_LIST_FUNCS\n\n\ndef filter_colo_parameters(*args, **kwargs):\n param_list = []\n\n def get_colo_parameters(element) -> None:\n if isinstance(element, list) or isinstance(element, tuple):\n for e in element:\n get_colo_parameters(e)\n elif isinstance(element, dict):\n raise RuntimeError(\"Found Dict: ColoParameter can't deal with complicated arguments.\")\n elif isinstance(element, ColoParameter):\n param_list.append(element)\n return\n\n for a in args:\n get_colo_parameters(a)\n for v in kwargs.values():\n get_colo_parameters(v)\n\n return param_list\n\n\ndef replace_args(args, kwargs, new_args):\n args = new_args[: len(args)]\n for k, v in zip(kwargs.keys(), new_args[len(args) :]):\n kwargs[k] = v\n return tuple(args), kwargs\n\n\nclass ColoParameter(ColoTensor, torch.nn.Parameter):\n r\"\"\"A kind of ColoTensor to be considered as a module parameter.\"\"\"\n\n def __new__(cls, data: Optional[torch.Tensor] = None, requires_grad: bool = True) -> \"ColoParameter\":\n if data is None:\n data = torch.empty(0)\n return torch.Tensor._make_subclass(cls, data, requires_grad)\n\n @classmethod\n def __torch_function__(cls, func, types, args=..., kwargs=None):\n if kwargs is None:\n kwargs = {}\n if ColoParamOpHookManager.has_hook() and not is_no_hook_op(func):\n params = filter_colo_parameters(*args, **kwargs)\n if len(params) > 0:\n with torch._C.DisableTorchFunction():\n new_args = ColoParamOpHookManager.pre_op(params, *args, *kwargs.values())\n args, kwargs = replace_args(args, kwargs, new_args)\n ret = super().__torch_function__(func, types, args, kwargs)\n with torch._C.DisableTorchFunction():\n ret = ColoParamOpHookManager.post_op(params, ret)\n return _convert_output(ret, func)\n return super().__torch_function__(func, types, args, kwargs)\n\n def __deepcopy__(self, memo):\n if id(self) in memo:\n return memo[id(self)]\n else:\n with torch._C.DisableTorchFunction():\n data = self.data.clone()\n tensor = ColoParameter(data, self.requires_grad)\n memo[id(self)] = tensor\n return tensor\n\n def __reduce_ex__(self, proto):\n # Adapted from torch._utils._rebuild_parameter\n # def _rebuild_colo_parameter(data, requires_grad, backward_hooks):\n # colo_param = ColoParameter(data, requires_grad)\n # colo_param._backward_hooks = backward_hooks\n # return colo_param\n\n # return (\n # _rebuild_colo_parameter,\n # (self.data, self.requires_grad, OrderedDict())\n # )\n\n # TODO(jzy) we don't support object reflection now.\n # distspec cannot be pickled or rebuilt because it's tightly connected to runtime attribute `process_group`.\n raise NotImplementedError\n", "path": "colossalai/tensor/colo_parameter.py"}], "after_files": [{"content": "from abc import ABC, abstractmethod\nfrom contextlib import contextmanager\nfrom typing import Any, List, Tuple\n\nimport torch\nfrom torch.utils._pytree import TreeSpec, tree_flatten, tree_unflatten\n\n\nclass ColoParamOpHook(ABC):\n \"\"\"\n Hook which is triggered by each operation when operands contain ColoParameter.\n To customize it, you must inherit this abstract class, and implement ``pre_forward``,\n ``post_forward``, ``pre_backward`` and ``post_backward``.\n These four methods apply a list of ColoParameter as input args.\n \"\"\"\n\n @abstractmethod\n def pre_forward(self, params: List[torch.Tensor]) -> None:\n pass\n\n @abstractmethod\n def post_forward(self, 
params: List[torch.Tensor]) -> None:\n pass\n\n @abstractmethod\n def pre_backward(self, params: List[torch.Tensor]) -> None:\n pass\n\n @abstractmethod\n def post_backward(self, params: List[torch.Tensor]) -> None:\n pass\n\n\nclass ColoParamOpHookManager:\n \"\"\"\n Manage your param op hooks. It only has static methods.\n The only static method you should call is ``use_hooks(*hooks)``.\n \"\"\"\n\n hooks: Tuple[ColoParamOpHook, ...] = tuple()\n\n @staticmethod\n @contextmanager\n def use_hooks(*hooks: ColoParamOpHook):\n \"\"\"Change the param op hooks you use. Nested calling is allowed.\n\n Example:\n >>> with ColoParamOpHookManager.use_hooks(*hooks):\n >>> do_something()\n >>> with ColoParamOpHookManager.use_hooks():\n >>> // clear hooks\n >>> do_something()\n \"\"\"\n try:\n old_param_op_hooks = ColoParamOpHookManager.hooks\n ColoParamOpHookManager.hooks = hooks\n yield\n finally:\n ColoParamOpHookManager.hooks = old_param_op_hooks\n\n @staticmethod\n def _trigger_pre_forward(params: List[torch.Tensor]) -> None:\n for hook in ColoParamOpHookManager.hooks:\n hook.pre_forward(params)\n\n @staticmethod\n def _trigger_post_forward(params: List[torch.Tensor]) -> None:\n for hook in ColoParamOpHookManager.hooks:\n hook.post_forward(params)\n\n @staticmethod\n def _trigger_pre_backward(params: List[torch.Tensor]) -> None:\n for hook in ColoParamOpHookManager.hooks:\n hook.pre_backward(params)\n\n @staticmethod\n def _trigger_post_backward(params: List[torch.Tensor]) -> None:\n for hook in ColoParamOpHookManager.hooks:\n hook.post_backward(params)\n\n @staticmethod\n def pre_op(params: List[torch.Tensor], *args: Any) -> list:\n ColoParamOpHookManager._trigger_pre_forward(params)\n # auto grad function can only recognize torch.Tensor, thus we have to flatten the input\n # if one of the input requires grad, all the output will be treated as requires grad\n # and will have grad fn even the corresponding input does not require grad\n # we have to extract tensors requiring grad into flat list and then merge them back\n grad_args, other_args, grad_flags, spec = _flatten_grad_args(args)\n new_grad_args = PreFwdPostBwd.apply(params, *grad_args)\n return _merge_args(new_grad_args, other_args, grad_flags, spec)\n\n @staticmethod\n def post_op(params: List[torch.Tensor], arg: Any) -> Any:\n ColoParamOpHookManager._trigger_post_forward(params)\n # incase the output is a tuple, we have to flatten it\n grad_args, other_args, grad_flags, spec = _flatten_grad_args(arg)\n new_grad_args = PostFwdPreBwd.apply(params, *grad_args)\n return _merge_args(new_grad_args, other_args, grad_flags, spec)\n\n @staticmethod\n def has_hook() -> bool:\n return len(ColoParamOpHookManager.hooks) > 0\n\n\nclass PreFwdPostBwd(torch.autograd.Function):\n @staticmethod\n def forward(ctx, params, *args):\n ctx.params = params\n return args\n\n @staticmethod\n def backward(ctx, *grads):\n ColoParamOpHookManager._trigger_post_backward(ctx.params)\n return (None,) + grads\n\n\nclass PostFwdPreBwd(torch.autograd.Function):\n @staticmethod\n def forward(ctx, params, *args):\n ctx.params = params\n return args\n\n @staticmethod\n def backward(ctx, *grads):\n ColoParamOpHookManager._trigger_pre_backward(ctx.params)\n return (None,) + grads\n\n\ndef _is_grad_tensor(obj) -> bool:\n if torch.is_tensor(obj):\n if obj.grad_fn is not None or obj.requires_grad:\n return True\n return False\n\n\ndef _flatten_grad_args(args) -> Tuple[list, list, List[bool], TreeSpec]:\n flat_args, spec = tree_flatten(args)\n grad_args = []\n other_args = []\n 
grad_flags = []\n for arg in flat_args:\n flag = _is_grad_tensor(arg)\n grad_flags.append(flag)\n if flag:\n grad_args.append(arg)\n else:\n other_args.append(arg)\n return grad_args, other_args, grad_flags, spec\n\n\ndef _merge_args(grad_args, other_args, grad_flags, spec):\n grad_iter = iter(grad_args)\n other_iter = iter(other_args)\n flat_args = [next(grad_iter) if flag else next(other_iter) for flag in grad_flags]\n return tree_unflatten(flat_args, spec)\n", "path": "colossalai/tensor/param_op_hook.py"}, {"content": "from typing import Optional\n\nimport torch\n\nfrom colossalai.tensor.colo_tensor import ColoTensor\nfrom colossalai.tensor.param_op_hook import ColoParamOpHookManager\n\nfrom .colo_tensor import _convert_output\n\nWHITE_LIST_FUNCS = {torch.Tensor.__getitem__}\nNO_HOOK_FUNCS = {torch.Tensor.is_floating_point}\n\n\ndef is_no_hook_op(func) -> bool:\n return (func.__name__.startswith(\"__\") and func not in WHITE_LIST_FUNCS) or func in NO_HOOK_FUNCS\n\n\ndef filter_colo_parameters(*args, **kwargs):\n param_list = []\n\n def get_colo_parameters(element) -> None:\n if isinstance(element, list) or isinstance(element, tuple):\n for e in element:\n get_colo_parameters(e)\n elif isinstance(element, dict):\n raise RuntimeError(\"Found Dict: ColoParameter can't deal with complicated arguments.\")\n elif isinstance(element, ColoParameter):\n param_list.append(element)\n return\n\n for a in args:\n get_colo_parameters(a)\n for v in kwargs.values():\n get_colo_parameters(v)\n\n return param_list\n\n\ndef replace_args(args, kwargs, new_args):\n args = new_args[: len(args)]\n for k, v in zip(kwargs.keys(), new_args[len(args) :]):\n kwargs[k] = v\n return tuple(args), kwargs\n\n\nclass ColoParameter(ColoTensor, torch.nn.Parameter):\n r\"\"\"A kind of ColoTensor to be considered as a module parameter.\"\"\"\n\n def __new__(cls, data: Optional[torch.Tensor] = None, requires_grad: bool = True) -> \"ColoParameter\":\n if data is None:\n data = torch.empty(0)\n return torch.Tensor._make_subclass(cls, data, requires_grad)\n\n @classmethod\n def __torch_function__(cls, func, types, args=..., kwargs=None):\n if kwargs is None:\n kwargs = {}\n if ColoParamOpHookManager.has_hook() and not is_no_hook_op(func):\n params = filter_colo_parameters(*args, **kwargs)\n if len(params) > 0:\n with torch._C.DisableTorchFunction():\n new_args = ColoParamOpHookManager.pre_op(params, *args, *kwargs.values())\n args, kwargs = replace_args(args, kwargs, new_args)\n ret = super().__torch_function__(func, types, args, kwargs)\n with torch._C.DisableTorchFunction():\n ret = ColoParamOpHookManager.post_op(params, ret)\n return _convert_output(ret, func)\n return super().__torch_function__(func, types, args, kwargs)\n\n def __deepcopy__(self, memo):\n if id(self) in memo:\n return memo[id(self)]\n else:\n with torch._C.DisableTorchFunction():\n data = self.data.clone()\n tensor = ColoParameter(data, self.requires_grad)\n memo[id(self)] = tensor\n return tensor\n\n def __reduce_ex__(self, proto):\n # Adapted from torch._utils._rebuild_parameter\n # def _rebuild_colo_parameter(data, requires_grad, backward_hooks):\n # colo_param = ColoParameter(data, requires_grad)\n # colo_param._backward_hooks = backward_hooks\n # return colo_param\n\n # return (\n # _rebuild_colo_parameter,\n # (self.data, self.requires_grad, OrderedDict())\n # )\n\n # TODO(jzy) we don't support object reflection now.\n # distspec cannot be pickled or rebuilt because it's tightly connected to runtime attribute `process_group`.\n raise 
NotImplementedError\n", "path": "colossalai/tensor/colo_parameter.py"}]} | 2,779 | 508 |
gh_patches_debug_25250 | rasdani/github-patches | git_diff | pre-commit__pre-commit-193 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
^C^C during installation may leave pre-commit in a bad state
There's code which handles the first ^C, however I think the second one (during execution of the finally block) may not be handled well. I probably need to make the cleanup atomic somehow...
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/repository.py`
Content:
```
1 from __future__ import unicode_literals
2
3 from cached_property import cached_property
4
5 from pre_commit.languages.all import languages
6 from pre_commit.manifest import Manifest
7 from pre_commit.prefixed_command_runner import PrefixedCommandRunner
8
9
10 class Repository(object):
11 def __init__(self, repo_config, repo_path_getter):
12 self.repo_config = repo_config
13 self.repo_path_getter = repo_path_getter
14 self.__installed = False
15
16 @classmethod
17 def create(cls, config, store):
18 repo_path_getter = store.get_repo_path_getter(
19 config['repo'], config['sha']
20 )
21 return cls(config, repo_path_getter)
22
23 @cached_property
24 def repo_url(self):
25 return self.repo_config['repo']
26
27 @cached_property
28 def sha(self):
29 return self.repo_config['sha']
30
31 @cached_property
32 def languages(self):
33 return set(
34 (hook['language'], hook['language_version'])
35 for _, hook in self.hooks
36 )
37
38 @cached_property
39 def hooks(self):
40 # TODO: merging in manifest dicts is a smell imo
41 return tuple(
42 (hook['id'], dict(self.manifest.hooks[hook['id']], **hook))
43 for hook in self.repo_config['hooks']
44 )
45
46 @cached_property
47 def manifest(self):
48 return Manifest(self.repo_path_getter)
49
50 @cached_property
51 def cmd_runner(self):
52 return PrefixedCommandRunner(self.repo_path_getter.repo_path)
53
54 def require_installed(self):
55 if self.__installed:
56 return
57
58 self.install()
59 self.__installed = True
60
61 def install(self):
62 """Install the hook repository."""
63 for language_name, language_version in self.languages:
64 language = languages[language_name]
65 if (
66 language.ENVIRONMENT_DIR is None or
67 self.cmd_runner.exists(language.ENVIRONMENT_DIR)
68 ):
69 # The language is already installed
70 continue
71 language.install_environment(self.cmd_runner, language_version)
72
73 def run_hook(self, hook, file_args):
74 """Run a hook.
75
76 Args:
77 hook - Hook dictionary
78 file_args - List of files to run
79 """
80 self.require_installed()
81 return languages[hook['language']].run_hook(
82 self.cmd_runner, hook, file_args,
83 )
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -1,5 +1,7 @@
from __future__ import unicode_literals
+import shutil
+
from cached_property import cached_property
from pre_commit.languages.all import languages
@@ -64,11 +66,21 @@
language = languages[language_name]
if (
language.ENVIRONMENT_DIR is None or
- self.cmd_runner.exists(language.ENVIRONMENT_DIR)
+ self.cmd_runner.exists(language.ENVIRONMENT_DIR, '.installed')
):
# The language is already installed
continue
+ # There's potentially incomplete cleanup from previous runs
+ # Clean it up!
+ if self.cmd_runner.exists(language.ENVIRONMENT_DIR):
+ shutil.rmtree(self.cmd_runner.path(language.ENVIRONMENT_DIR))
+
language.install_environment(self.cmd_runner, language_version)
+ # Touch the .installed file (atomic) to indicate we've installed
+ open(
+ self.cmd_runner.path(language.ENVIRONMENT_DIR, '.installed'),
+ 'w',
+ ).close()
def run_hook(self, hook, file_args):
"""Run a hook.
| {"golden_diff": "diff --git a/pre_commit/repository.py b/pre_commit/repository.py\n--- a/pre_commit/repository.py\n+++ b/pre_commit/repository.py\n@@ -1,5 +1,7 @@\n from __future__ import unicode_literals\n \n+import shutil\n+\n from cached_property import cached_property\n \n from pre_commit.languages.all import languages\n@@ -64,11 +66,21 @@\n language = languages[language_name]\n if (\n language.ENVIRONMENT_DIR is None or\n- self.cmd_runner.exists(language.ENVIRONMENT_DIR)\n+ self.cmd_runner.exists(language.ENVIRONMENT_DIR, '.installed')\n ):\n # The language is already installed\n continue\n+ # There's potentially incomplete cleanup from previous runs\n+ # Clean it up!\n+ if self.cmd_runner.exists(language.ENVIRONMENT_DIR):\n+ shutil.rmtree(self.cmd_runner.path(language.ENVIRONMENT_DIR))\n+\n language.install_environment(self.cmd_runner, language_version)\n+ # Touch the .installed file (atomic) to indicate we've installed\n+ open(\n+ self.cmd_runner.path(language.ENVIRONMENT_DIR, '.installed'),\n+ 'w',\n+ ).close()\n \n def run_hook(self, hook, file_args):\n \"\"\"Run a hook.\n", "issue": "^C^C during installation may leave pre-commit in a bad state\nThere's code which handles the first ^C, however I think the second one (during execution of the finally block) may not be handled well. I probably need to make the cleanup atomic somehow...\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom cached_property import cached_property\n\nfrom pre_commit.languages.all import languages\nfrom pre_commit.manifest import Manifest\nfrom pre_commit.prefixed_command_runner import PrefixedCommandRunner\n\n\nclass Repository(object):\n def __init__(self, repo_config, repo_path_getter):\n self.repo_config = repo_config\n self.repo_path_getter = repo_path_getter\n self.__installed = False\n\n @classmethod\n def create(cls, config, store):\n repo_path_getter = store.get_repo_path_getter(\n config['repo'], config['sha']\n )\n return cls(config, repo_path_getter)\n\n @cached_property\n def repo_url(self):\n return self.repo_config['repo']\n\n @cached_property\n def sha(self):\n return self.repo_config['sha']\n\n @cached_property\n def languages(self):\n return set(\n (hook['language'], hook['language_version'])\n for _, hook in self.hooks\n )\n\n @cached_property\n def hooks(self):\n # TODO: merging in manifest dicts is a smell imo\n return tuple(\n (hook['id'], dict(self.manifest.hooks[hook['id']], **hook))\n for hook in self.repo_config['hooks']\n )\n\n @cached_property\n def manifest(self):\n return Manifest(self.repo_path_getter)\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(self.repo_path_getter.repo_path)\n\n def require_installed(self):\n if self.__installed:\n return\n\n self.install()\n self.__installed = True\n\n def install(self):\n \"\"\"Install the hook repository.\"\"\"\n for language_name, language_version in self.languages:\n language = languages[language_name]\n if (\n language.ENVIRONMENT_DIR is None or\n self.cmd_runner.exists(language.ENVIRONMENT_DIR)\n ):\n # The language is already installed\n continue\n language.install_environment(self.cmd_runner, language_version)\n\n def run_hook(self, hook, file_args):\n \"\"\"Run a hook.\n\n Args:\n hook - Hook dictionary\n file_args - List of files to run\n \"\"\"\n self.require_installed()\n return languages[hook['language']].run_hook(\n self.cmd_runner, hook, file_args,\n )\n", "path": "pre_commit/repository.py"}], "after_files": [{"content": "from __future__ import 
unicode_literals\n\nimport shutil\n\nfrom cached_property import cached_property\n\nfrom pre_commit.languages.all import languages\nfrom pre_commit.manifest import Manifest\nfrom pre_commit.prefixed_command_runner import PrefixedCommandRunner\n\n\nclass Repository(object):\n def __init__(self, repo_config, repo_path_getter):\n self.repo_config = repo_config\n self.repo_path_getter = repo_path_getter\n self.__installed = False\n\n @classmethod\n def create(cls, config, store):\n repo_path_getter = store.get_repo_path_getter(\n config['repo'], config['sha']\n )\n return cls(config, repo_path_getter)\n\n @cached_property\n def repo_url(self):\n return self.repo_config['repo']\n\n @cached_property\n def sha(self):\n return self.repo_config['sha']\n\n @cached_property\n def languages(self):\n return set(\n (hook['language'], hook['language_version'])\n for _, hook in self.hooks\n )\n\n @cached_property\n def hooks(self):\n # TODO: merging in manifest dicts is a smell imo\n return tuple(\n (hook['id'], dict(self.manifest.hooks[hook['id']], **hook))\n for hook in self.repo_config['hooks']\n )\n\n @cached_property\n def manifest(self):\n return Manifest(self.repo_path_getter)\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(self.repo_path_getter.repo_path)\n\n def require_installed(self):\n if self.__installed:\n return\n\n self.install()\n self.__installed = True\n\n def install(self):\n \"\"\"Install the hook repository.\"\"\"\n for language_name, language_version in self.languages:\n language = languages[language_name]\n if (\n language.ENVIRONMENT_DIR is None or\n self.cmd_runner.exists(language.ENVIRONMENT_DIR, '.installed')\n ):\n # The language is already installed\n continue\n # There's potentially incomplete cleanup from previous runs\n # Clean it up!\n if self.cmd_runner.exists(language.ENVIRONMENT_DIR):\n shutil.rmtree(self.cmd_runner.path(language.ENVIRONMENT_DIR))\n\n language.install_environment(self.cmd_runner, language_version)\n # Touch the .installed file (atomic) to indicate we've installed\n open(\n self.cmd_runner.path(language.ENVIRONMENT_DIR, '.installed'),\n 'w',\n ).close()\n\n def run_hook(self, hook, file_args):\n \"\"\"Run a hook.\n\n Args:\n hook - Hook dictionary\n file_args - List of files to run\n \"\"\"\n self.require_installed()\n return languages[hook['language']].run_hook(\n self.cmd_runner, hook, file_args,\n )\n", "path": "pre_commit/repository.py"}]} | 974 | 263 |
gh_patches_debug_13090 | rasdani/github-patches | git_diff | nltk__nltk-782 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BLEU score returns 1 (perfect match) instead of zero
Hi, taken from bleu implementation:
``` python
@staticmethod
def compute(candidate, references, weights):
candidate = [c.lower() for c in candidate]
references = [[r.lower() for r in reference] for reference in references]
p_ns = (BLEU.modified_precision(candidate, references, i) for i, _ in enumerate(weights, start=1))
s = math.fsum(w * math.log(p_n) for w, p_n in zip(weights, p_ns) if p_n)
bp = BLEU.brevity_penalty(candidate, references)
return bp * math.exp(s)
```
This function incorrectly returns BLEU score 1 when the candidate has no alignment to any of the references. In this case, `p_ns` will be all zeros because there is no overlap for any n-grams, which will make `s` zero, which will return `1` at the end.
There should be a special case check for the case when there is zero alignment and return zero correctly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nltk/align/bleu.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Natural Language Toolkit: BLEU
3 #
4 # Copyright (C) 2001-2013 NLTK Project
5 # Authors: Chin Yee Lee, Hengfeng Li, Ruxin Hou, Calvin Tanujaya Lim
6 # URL: <http://nltk.org/>
7 # For license information, see LICENSE.TXT
8
9 from __future__ import division
10
11 import math
12
13 from nltk import word_tokenize
14 from nltk.compat import Counter
15 from nltk.util import ngrams
16
17
18 class BLEU(object):
19 """
20 This class implements the BLEU method, which is used to evaluate
21 the quality of machine translation. [1]
22
23 Consider an example:
24
25 >>> weights = [0.25, 0.25, 0.25, 0.25]
26 >>> candidate1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'which',
27 ... 'ensures', 'that', 'the', 'military', 'always',
28 ... 'obeys', 'the', 'commands', 'of', 'the', 'party']
29
30 >>> candidate2 = ['It', 'is', 'to', 'insure', 'the', 'troops',
31 ... 'forever', 'hearing', 'the', 'activity', 'guidebook',
32 ... 'that', 'party', 'direct']
33
34 >>> reference1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'that',
35 ... 'ensures', 'that', 'the', 'military', 'will', 'forever',
36 ... 'heed', 'Party', 'commands']
37
38 >>> reference2 = ['It', 'is', 'the', 'guiding', 'principle', 'which',
39 ... 'guarantees', 'the', 'military', 'forces', 'always',
40 ... 'being', 'under', 'the', 'command', 'of', 'the',
41 ... 'Party']
42
43 >>> reference3 = ['It', 'is', 'the', 'practical', 'guide', 'for', 'the',
44 ... 'army', 'always', 'to', 'heed', 'the', 'directions',
45 ... 'of', 'the', 'party']
46
47 The BLEU method mainly consists of two parts:
48
49 Part 1 - modified n-gram precision
50
51 The normal precision method may lead to some wrong translations with
52 high-precision, e.g., the translation, in which a word of reference
53 repeats several times, has very high precision. So in the modified
54 n-gram precision, a reference word will be considered exhausted after
55 a matching candidate word is identified.
56
57 Unigrams:
58
59 >>> BLEU.modified_precision(
60 ... candidate1,
61 ... [reference1, reference2, reference3],
62 ... n=1,
63 ... )
64 0.94...
65
66 >>> BLEU.modified_precision(
67 ... candidate2,
68 ... [reference1, reference2, reference3],
69 ... n=1,
70 ... )
71 0.57...
72
73 Bigrmas:
74
75 >>> BLEU.modified_precision(
76 ... candidate1,
77 ... [reference1, reference2, reference3],
78 ... n=2,
79 ... )
80 0.58...
81
82 >>> BLEU.modified_precision(
83 ... candidate2,
84 ... [reference1, reference2, reference3],
85 ... n=2,
86 ... )
87 0.07...
88
89
90 Part 2 - brevity penalty
91
92 As the modified n-gram precision still has the problem from the short
93 length sentence, brevity penalty is used to modify the overall BLEU
94 score according to length.
95
96 >>> BLEU.compute(candidate1, [reference1, reference2, reference3], weights)
97 0.504...
98
99 >>> BLEU.compute(candidate2, [reference1, reference2, reference3], weights)
100 0.457...
101
102 2. Test with two corpus that one is a reference and another is
103 an output from translation system:
104
105 >>> weights = [0.25, 0.25, 0.25, 0.25]
106 >>> ref_file = open('newstest2012-ref.en') # doctest: +SKIP
107 >>> candidate_file = open('newstest2012.fr-en.cmu-avenue') # doctest: +SKIP
108
109 >>> total = 0.0
110 >>> count = 0
111
112 >>> for candi_raw in candidate_file: # doctest: +SKIP
113 ... ref_raw = ref_file.readline()
114 ... ref_tokens = word_tokenize(ref_raw)
115 ... candi_tokens = word_tokenize(candi_raw)
116 ... total = BLEU.compute(candi_tokens, [ref_tokens], weights)
117 ... count += 1
118
119 >>> total / count # doctest: +SKIP
120 2.787504437460048e-05
121
122 [1] Papineni, Kishore, et al. "BLEU: a method for automatic evaluation of
123 machine translation." Proceedings of the 40th annual meeting on
124 association for computational linguistics. Association for Computational
125 Linguistics, 2002.
126
127 """
128
129 @staticmethod
130 def compute(candidate, references, weights):
131 candidate = [c.lower() for c in candidate]
132 references = [[r.lower() for r in reference] for reference in references]
133
134 p_ns = (BLEU.modified_precision(candidate, references, i) for i, _ in enumerate(weights, start=1))
135 s = math.fsum(w * math.log(p_n) for w, p_n in zip(weights, p_ns) if p_n)
136
137 bp = BLEU.brevity_penalty(candidate, references)
138 return bp * math.exp(s)
139
140 @staticmethod
141 def modified_precision(candidate, references, n):
142 """ Calculate modified ngram precision.
143
144 >>> BLEU.modified_precision(
145 ... 'the the the the the the the'.split(),
146 ... ['the cat is on the mat'.split(), 'there is a cat on the mat'.split()],
147 ... n=1,
148 ... )
149 0.28...
150
151 >>> BLEU.modified_precision(
152 ... 'the the the the the the the'.split(),
153 ... ['the cat is on the mat'.split(), 'there is a cat on the mat'.split()],
154 ... n=2,
155 ... )
156 0.0
157
158 >>> BLEU.modified_precision(
159 ... 'of the'.split(),
160 ... [
161 ... 'It is a guide to action that ensures that the military will forever heed Party commands.'.split(),
162 ... 'It is the guiding principle which guarantees the military forces always being under the command of the Party.'.split(),
163 ... 'It is the practical guide for the army always to heed the directions of the party'.split(),
164 ... ],
165 ... n=1,
166 ... )
167 1.0
168
169 >>> BLEU.modified_precision(
170 ... 'of the'.split(),
171 ... [
172 ... 'It is a guide to action that ensures that the military will forever heed Party commands.'.split(),
173 ... 'It is the guiding principle which guarantees the military forces always being under the command of the Party.'.split(),
174 ... 'It is the practical guide for the army always to heed the directions of the party'.split(),
175 ... ],
176 ... n=2,
177 ... )
178 1.0
179
180 """
181 counts = Counter(ngrams(candidate, n))
182
183 if not counts:
184 return 0
185
186 max_counts = {}
187 for reference in references:
188 reference_counts = Counter(ngrams(reference, n))
189 for ngram in counts:
190 max_counts[ngram] = max(max_counts.get(ngram, 0), reference_counts[ngram])
191
192 clipped_counts = dict((ngram, min(count, max_counts[ngram])) for ngram, count in counts.items())
193
194 return sum(clipped_counts.values()) / sum(counts.values())
195
196 @staticmethod
197 def brevity_penalty(candidate, references):
198 c = len(candidate)
199 r = min(abs(len(r) - c) for r in references)
200
201 if c > r:
202 return 1
203 else:
204 return math.exp(1 - r / c)
205
206 # run doctests
207 if __name__ == "__main__":
208 import doctest
209 doctest.testmod()
210
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nltk/align/bleu.py b/nltk/align/bleu.py
--- a/nltk/align/bleu.py
+++ b/nltk/align/bleu.py
@@ -132,7 +132,13 @@
references = [[r.lower() for r in reference] for reference in references]
p_ns = (BLEU.modified_precision(candidate, references, i) for i, _ in enumerate(weights, start=1))
- s = math.fsum(w * math.log(p_n) for w, p_n in zip(weights, p_ns) if p_n)
+ p_ns_nonzero = list(filter(None, p_ns))
+
+ if not p_ns_nonzero:
+ # There is zero aliment, so the score is 0
+ return 0
+
+ s = math.fsum(w * math.log(p_n) for w, p_n in zip(weights, p_ns_nonzero))
bp = BLEU.brevity_penalty(candidate, references)
return bp * math.exp(s)
| {"golden_diff": "diff --git a/nltk/align/bleu.py b/nltk/align/bleu.py\n--- a/nltk/align/bleu.py\n+++ b/nltk/align/bleu.py\n@@ -132,7 +132,13 @@\n references = [[r.lower() for r in reference] for reference in references]\n \n p_ns = (BLEU.modified_precision(candidate, references, i) for i, _ in enumerate(weights, start=1))\n- s = math.fsum(w * math.log(p_n) for w, p_n in zip(weights, p_ns) if p_n)\n+ p_ns_nonzero = list(filter(None, p_ns))\n+\n+ if not p_ns_nonzero:\n+ # There is zero aliment, so the score is 0\n+ return 0\n+\n+ s = math.fsum(w * math.log(p_n) for w, p_n in zip(weights, p_ns_nonzero))\n \n bp = BLEU.brevity_penalty(candidate, references)\n return bp * math.exp(s)\n", "issue": "BLEU score returns 1 (perfect match) instead of zero\nHi, taken from bleu implementation:\n\n``` python\n @staticmethod\n def compute(candidate, references, weights):\n candidate = [c.lower() for c in candidate]\n references = [[r.lower() for r in reference] for reference in references]\n\n p_ns = (BLEU.modified_precision(candidate, references, i) for i, _ in enumerate(weights, start=1))\n s = math.fsum(w * math.log(p_n) for w, p_n in zip(weights, p_ns) if p_n)\n\n bp = BLEU.brevity_penalty(candidate, references)\n return bp * math.exp(s)\n```\n\nThis function incorrectly returns BLEU score 1 when the candidate has no alignment to any of the references. In this case, `p_ns` will be all zeros because there is no overlap for any n-grams, which will make `s` zero, which will return `1` at the end.\n\nThere should be a special case check for the case when there is zero alignment and return zero correctly. \n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Natural Language Toolkit: BLEU\n#\n# Copyright (C) 2001-2013 NLTK Project\n# Authors: Chin Yee Lee, Hengfeng Li, Ruxin Hou, Calvin Tanujaya Lim\n# URL: <http://nltk.org/>\n# For license information, see LICENSE.TXT\n\nfrom __future__ import division\n\nimport math\n\nfrom nltk import word_tokenize\nfrom nltk.compat import Counter\nfrom nltk.util import ngrams\n\n\nclass BLEU(object):\n \"\"\"\n This class implements the BLEU method, which is used to evaluate\n the quality of machine translation. [1]\n\n Consider an example:\n\n >>> weights = [0.25, 0.25, 0.25, 0.25]\n >>> candidate1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'which',\n ... 'ensures', 'that', 'the', 'military', 'always',\n ... 'obeys', 'the', 'commands', 'of', 'the', 'party']\n\n >>> candidate2 = ['It', 'is', 'to', 'insure', 'the', 'troops',\n ... 'forever', 'hearing', 'the', 'activity', 'guidebook',\n ... 'that', 'party', 'direct']\n\n >>> reference1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'that',\n ... 'ensures', 'that', 'the', 'military', 'will', 'forever',\n ... 'heed', 'Party', 'commands']\n\n >>> reference2 = ['It', 'is', 'the', 'guiding', 'principle', 'which',\n ... 'guarantees', 'the', 'military', 'forces', 'always',\n ... 'being', 'under', 'the', 'command', 'of', 'the',\n ... 'Party']\n\n >>> reference3 = ['It', 'is', 'the', 'practical', 'guide', 'for', 'the',\n ... 'army', 'always', 'to', 'heed', 'the', 'directions',\n ... 'of', 'the', 'party']\n\n The BLEU method mainly consists of two parts:\n\n Part 1 - modified n-gram precision\n\n The normal precision method may lead to some wrong translations with\n high-precision, e.g., the translation, in which a word of reference\n repeats several times, has very high precision. 
So in the modified\n n-gram precision, a reference word will be considered exhausted after\n a matching candidate word is identified.\n\n Unigrams:\n\n >>> BLEU.modified_precision(\n ... candidate1,\n ... [reference1, reference2, reference3],\n ... n=1,\n ... )\n 0.94...\n\n >>> BLEU.modified_precision(\n ... candidate2,\n ... [reference1, reference2, reference3],\n ... n=1,\n ... )\n 0.57...\n\n Bigrmas:\n\n >>> BLEU.modified_precision(\n ... candidate1,\n ... [reference1, reference2, reference3],\n ... n=2,\n ... )\n 0.58...\n\n >>> BLEU.modified_precision(\n ... candidate2,\n ... [reference1, reference2, reference3],\n ... n=2,\n ... )\n 0.07...\n\n\n Part 2 - brevity penalty\n\n As the modified n-gram precision still has the problem from the short\n length sentence, brevity penalty is used to modify the overall BLEU\n score according to length.\n\n >>> BLEU.compute(candidate1, [reference1, reference2, reference3], weights)\n 0.504...\n\n >>> BLEU.compute(candidate2, [reference1, reference2, reference3], weights)\n 0.457...\n\n 2. Test with two corpus that one is a reference and another is\n an output from translation system:\n\n >>> weights = [0.25, 0.25, 0.25, 0.25]\n >>> ref_file = open('newstest2012-ref.en') # doctest: +SKIP\n >>> candidate_file = open('newstest2012.fr-en.cmu-avenue') # doctest: +SKIP\n\n >>> total = 0.0\n >>> count = 0\n\n >>> for candi_raw in candidate_file: # doctest: +SKIP\n ...\t\tref_raw = ref_file.readline()\n ...\t\tref_tokens = word_tokenize(ref_raw)\n ...\t\tcandi_tokens = word_tokenize(candi_raw)\n ...\t\ttotal = BLEU.compute(candi_tokens, [ref_tokens], weights)\n ...\t\tcount += 1\n\n >>> total / count # doctest: +SKIP\n 2.787504437460048e-05\n\n [1] Papineni, Kishore, et al. \"BLEU: a method for automatic evaluation of\n machine translation.\" Proceedings of the 40th annual meeting on\n association for computational linguistics. Association for Computational\n Linguistics, 2002.\n\n \"\"\"\n\n @staticmethod\n def compute(candidate, references, weights):\n candidate = [c.lower() for c in candidate]\n references = [[r.lower() for r in reference] for reference in references]\n\n p_ns = (BLEU.modified_precision(candidate, references, i) for i, _ in enumerate(weights, start=1))\n s = math.fsum(w * math.log(p_n) for w, p_n in zip(weights, p_ns) if p_n)\n\n bp = BLEU.brevity_penalty(candidate, references)\n return bp * math.exp(s)\n\n @staticmethod\n def modified_precision(candidate, references, n):\n \"\"\" Calculate modified ngram precision.\n\n >>> BLEU.modified_precision(\n ... 'the the the the the the the'.split(),\n ... ['the cat is on the mat'.split(), 'there is a cat on the mat'.split()],\n ... n=1,\n ... )\n 0.28...\n\n >>> BLEU.modified_precision(\n ... 'the the the the the the the'.split(),\n ... ['the cat is on the mat'.split(), 'there is a cat on the mat'.split()],\n ... n=2,\n ... )\n 0.0\n\n >>> BLEU.modified_precision(\n ... 'of the'.split(),\n ... [\n ... 'It is a guide to action that ensures that the military will forever heed Party commands.'.split(),\n ... 'It is the guiding principle which guarantees the military forces always being under the command of the Party.'.split(),\n ... 'It is the practical guide for the army always to heed the directions of the party'.split(),\n ... ],\n ... n=1,\n ... )\n 1.0\n\n >>> BLEU.modified_precision(\n ... 'of the'.split(),\n ... [\n ... 'It is a guide to action that ensures that the military will forever heed Party commands.'.split(),\n ... 
'It is the guiding principle which guarantees the military forces always being under the command of the Party.'.split(),\n ... 'It is the practical guide for the army always to heed the directions of the party'.split(),\n ... ],\n ... n=2,\n ... )\n 1.0\n\n \"\"\"\n counts = Counter(ngrams(candidate, n))\n\n if not counts:\n return 0\n\n max_counts = {}\n for reference in references:\n reference_counts = Counter(ngrams(reference, n))\n for ngram in counts:\n max_counts[ngram] = max(max_counts.get(ngram, 0), reference_counts[ngram])\n\n clipped_counts = dict((ngram, min(count, max_counts[ngram])) for ngram, count in counts.items())\n\n return sum(clipped_counts.values()) / sum(counts.values())\n\n @staticmethod\n def brevity_penalty(candidate, references):\n c = len(candidate)\n r = min(abs(len(r) - c) for r in references)\n\n if c > r:\n return 1\n else:\n return math.exp(1 - r / c)\n\n# run doctests\nif __name__ == \"__main__\":\n import doctest\n doctest.testmod()\n", "path": "nltk/align/bleu.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Natural Language Toolkit: BLEU\n#\n# Copyright (C) 2001-2013 NLTK Project\n# Authors: Chin Yee Lee, Hengfeng Li, Ruxin Hou, Calvin Tanujaya Lim\n# URL: <http://nltk.org/>\n# For license information, see LICENSE.TXT\n\nfrom __future__ import division\n\nimport math\n\nfrom nltk import word_tokenize\nfrom nltk.compat import Counter\nfrom nltk.util import ngrams\n\n\nclass BLEU(object):\n \"\"\"\n This class implements the BLEU method, which is used to evaluate\n the quality of machine translation. [1]\n\n Consider an example:\n\n >>> weights = [0.25, 0.25, 0.25, 0.25]\n >>> candidate1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'which',\n ... 'ensures', 'that', 'the', 'military', 'always',\n ... 'obeys', 'the', 'commands', 'of', 'the', 'party']\n\n >>> candidate2 = ['It', 'is', 'to', 'insure', 'the', 'troops',\n ... 'forever', 'hearing', 'the', 'activity', 'guidebook',\n ... 'that', 'party', 'direct']\n\n >>> reference1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'that',\n ... 'ensures', 'that', 'the', 'military', 'will', 'forever',\n ... 'heed', 'Party', 'commands']\n\n >>> reference2 = ['It', 'is', 'the', 'guiding', 'principle', 'which',\n ... 'guarantees', 'the', 'military', 'forces', 'always',\n ... 'being', 'under', 'the', 'command', 'of', 'the',\n ... 'Party']\n\n >>> reference3 = ['It', 'is', 'the', 'practical', 'guide', 'for', 'the',\n ... 'army', 'always', 'to', 'heed', 'the', 'directions',\n ... 'of', 'the', 'party']\n\n The BLEU method mainly consists of two parts:\n\n Part 1 - modified n-gram precision\n\n The normal precision method may lead to some wrong translations with\n high-precision, e.g., the translation, in which a word of reference\n repeats several times, has very high precision. So in the modified\n n-gram precision, a reference word will be considered exhausted after\n a matching candidate word is identified.\n\n Unigrams:\n\n >>> BLEU.modified_precision(\n ... candidate1,\n ... [reference1, reference2, reference3],\n ... n=1,\n ... )\n 0.94...\n\n >>> BLEU.modified_precision(\n ... candidate2,\n ... [reference1, reference2, reference3],\n ... n=1,\n ... )\n 0.57...\n\n Bigrmas:\n\n >>> BLEU.modified_precision(\n ... candidate1,\n ... [reference1, reference2, reference3],\n ... n=2,\n ... )\n 0.58...\n\n >>> BLEU.modified_precision(\n ... candidate2,\n ... [reference1, reference2, reference3],\n ... n=2,\n ... 
)\n 0.07...\n\n\n Part 2 - brevity penalty\n\n As the modified n-gram precision still has the problem from the short\n length sentence, brevity penalty is used to modify the overall BLEU\n score according to length.\n\n >>> BLEU.compute(candidate1, [reference1, reference2, reference3], weights)\n 0.504...\n\n >>> BLEU.compute(candidate2, [reference1, reference2, reference3], weights)\n 0.457...\n\n 2. Test with two corpus that one is a reference and another is\n an output from translation system:\n\n >>> weights = [0.25, 0.25, 0.25, 0.25]\n >>> ref_file = open('newstest2012-ref.en') # doctest: +SKIP\n >>> candidate_file = open('newstest2012.fr-en.cmu-avenue') # doctest: +SKIP\n\n >>> total = 0.0\n >>> count = 0\n\n >>> for candi_raw in candidate_file: # doctest: +SKIP\n ...\t\tref_raw = ref_file.readline()\n ...\t\tref_tokens = word_tokenize(ref_raw)\n ...\t\tcandi_tokens = word_tokenize(candi_raw)\n ...\t\ttotal = BLEU.compute(candi_tokens, [ref_tokens], weights)\n ...\t\tcount += 1\n\n >>> total / count # doctest: +SKIP\n 2.787504437460048e-05\n\n [1] Papineni, Kishore, et al. \"BLEU: a method for automatic evaluation of\n machine translation.\" Proceedings of the 40th annual meeting on\n association for computational linguistics. Association for Computational\n Linguistics, 2002.\n\n \"\"\"\n\n @staticmethod\n def compute(candidate, references, weights):\n candidate = [c.lower() for c in candidate]\n references = [[r.lower() for r in reference] for reference in references]\n\n p_ns = (BLEU.modified_precision(candidate, references, i) for i, _ in enumerate(weights, start=1))\n p_ns_nonzero = list(filter(None, p_ns))\n\n if not p_ns_nonzero:\n # There is zero aliment, so the score is 0\n return 0\n\n s = math.fsum(w * math.log(p_n) for w, p_n in zip(weights, p_ns_nonzero))\n\n bp = BLEU.brevity_penalty(candidate, references)\n return bp * math.exp(s)\n\n @staticmethod\n def modified_precision(candidate, references, n):\n \"\"\" Calculate modified ngram precision.\n\n >>> BLEU.modified_precision(\n ... 'the the the the the the the'.split(),\n ... ['the cat is on the mat'.split(), 'there is a cat on the mat'.split()],\n ... n=1,\n ... )\n 0.28...\n\n >>> BLEU.modified_precision(\n ... 'the the the the the the the'.split(),\n ... ['the cat is on the mat'.split(), 'there is a cat on the mat'.split()],\n ... n=2,\n ... )\n 0.0\n\n >>> BLEU.modified_precision(\n ... 'of the'.split(),\n ... [\n ... 'It is a guide to action that ensures that the military will forever heed Party commands.'.split(),\n ... 'It is the guiding principle which guarantees the military forces always being under the command of the Party.'.split(),\n ... 'It is the practical guide for the army always to heed the directions of the party'.split(),\n ... ],\n ... n=1,\n ... )\n 1.0\n\n >>> BLEU.modified_precision(\n ... 'of the'.split(),\n ... [\n ... 'It is a guide to action that ensures that the military will forever heed Party commands.'.split(),\n ... 'It is the guiding principle which guarantees the military forces always being under the command of the Party.'.split(),\n ... 'It is the practical guide for the army always to heed the directions of the party'.split(),\n ... ],\n ... n=2,\n ... 
)\n 1.0\n\n \"\"\"\n counts = Counter(ngrams(candidate, n))\n\n if not counts:\n return 0\n\n max_counts = {}\n for reference in references:\n reference_counts = Counter(ngrams(reference, n))\n for ngram in counts:\n max_counts[ngram] = max(max_counts.get(ngram, 0), reference_counts[ngram])\n\n clipped_counts = dict((ngram, min(count, max_counts[ngram])) for ngram, count in counts.items())\n\n return sum(clipped_counts.values()) / sum(counts.values())\n\n @staticmethod\n def brevity_penalty(candidate, references):\n c = len(candidate)\n r = min(abs(len(r) - c) for r in references)\n\n if c > r:\n return 1\n else:\n return math.exp(1 - r / c)\n\n# run doctests\nif __name__ == \"__main__\":\n import doctest\n doctest.testmod()\n", "path": "nltk/align/bleu.py"}]} | 2,923 | 230 |
gh_patches_debug_15990 | rasdani/github-patches | git_diff | open-mmlab__mmdetection-9151 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
run image_demo.py with cascade_rpn model show error
### Prerequisite
- [X] I have searched [the existing and past issues](https://github.com/open-mmlab/mmdetection/issues) but cannot get the expected help.
- [X] I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
- [X] The bug has not been fixed in the [latest version](https://github.com/open-mmlab/mmdetection).
### 🐞 Describe the bug
``` python demo/image_demo.py demo/demo.jpg configs/cascade_rpn/crpn_r50_caffe_fpn_1x_coco.py checkpoints/cascade_rpn_r50_caffe_fpn_1x_coco-7aa93cef.pth```
```
UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2895.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Traceback (most recent call last):
File "demo/image_demo.py", line 68, in <module>
main(args)
File "demo/image_demo.py", line 38, in main
show_result_pyplot(
File "/home/ngi/IdeaProjects/mmlab/mmdetection/mmdet/apis/inference.py", line 241, in show_result_pyplot
model.show_result(
File "/home/ngi/IdeaProjects/mmlab/mmdetection/mmdet/models/detectors/rpn.py", line 159, in show_result
mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)
TypeError: imshow_bboxes() got an unexpected keyword argument 'mask_color'
```
### Environment
```
Python: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]
CUDA available: True
GPU 0: GeForce RTX 2080 SUPER
CUDA_HOME: /usr/local/cuda-10.2
NVCC: Cuda compilation tools, release 10.2, V10.2.8
GCC: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
PyTorch: 1.12.1+cu102
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.13.1+cu102
OpenCV: 4.6.0
MMCV: 1.6.2
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 10.2
MMDetection: 2.25.2+9d3e162
```
### Additional information
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmdet/models/detectors/rpn.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import warnings
3
4 import mmcv
5 import torch
6 from mmcv.image import tensor2imgs
7
8 from mmdet.core import bbox_mapping
9 from ..builder import DETECTORS, build_backbone, build_head, build_neck
10 from .base import BaseDetector
11
12
13 @DETECTORS.register_module()
14 class RPN(BaseDetector):
15 """Implementation of Region Proposal Network."""
16
17 def __init__(self,
18 backbone,
19 neck,
20 rpn_head,
21 train_cfg,
22 test_cfg,
23 pretrained=None,
24 init_cfg=None):
25 super(RPN, self).__init__(init_cfg)
26 if pretrained:
27 warnings.warn('DeprecationWarning: pretrained is deprecated, '
28 'please use "init_cfg" instead')
29 backbone.pretrained = pretrained
30 self.backbone = build_backbone(backbone)
31 self.neck = build_neck(neck) if neck is not None else None
32 rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None
33 rpn_head.update(train_cfg=rpn_train_cfg)
34 rpn_head.update(test_cfg=test_cfg.rpn)
35 self.rpn_head = build_head(rpn_head)
36 self.train_cfg = train_cfg
37 self.test_cfg = test_cfg
38
39 def extract_feat(self, img):
40 """Extract features.
41
42 Args:
43 img (torch.Tensor): Image tensor with shape (n, c, h ,w).
44
45 Returns:
46 list[torch.Tensor]: Multi-level features that may have
47 different resolutions.
48 """
49 x = self.backbone(img)
50 if self.with_neck:
51 x = self.neck(x)
52 return x
53
54 def forward_dummy(self, img):
55 """Dummy forward function."""
56 x = self.extract_feat(img)
57 rpn_outs = self.rpn_head(x)
58 return rpn_outs
59
60 def forward_train(self,
61 img,
62 img_metas,
63 gt_bboxes=None,
64 gt_bboxes_ignore=None):
65 """
66 Args:
67 img (Tensor): Input images of shape (N, C, H, W).
68 Typically these should be mean centered and std scaled.
69 img_metas (list[dict]): A List of image info dict where each dict
70 has: 'img_shape', 'scale_factor', 'flip', and may also contain
71 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
72 For details on the values of these keys see
73 :class:`mmdet.datasets.pipelines.Collect`.
74 gt_bboxes (list[Tensor]): Each item are the truth boxes for each
75 image in [tl_x, tl_y, br_x, br_y] format.
76 gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
77 boxes can be ignored when computing the loss.
78
79 Returns:
80 dict[str, Tensor]: A dictionary of loss components.
81 """
82 if (isinstance(self.train_cfg.rpn, dict)
83 and self.train_cfg.rpn.get('debug', False)):
84 self.rpn_head.debug_imgs = tensor2imgs(img)
85
86 x = self.extract_feat(img)
87 losses = self.rpn_head.forward_train(x, img_metas, gt_bboxes, None,
88 gt_bboxes_ignore)
89 return losses
90
91 def simple_test(self, img, img_metas, rescale=False):
92 """Test function without test time augmentation.
93
94 Args:
95 imgs (list[torch.Tensor]): List of multiple images
96 img_metas (list[dict]): List of image information.
97 rescale (bool, optional): Whether to rescale the results.
98 Defaults to False.
99
100 Returns:
101 list[np.ndarray]: proposals
102 """
103 x = self.extract_feat(img)
104 # get origin input shape to onnx dynamic input shape
105 if torch.onnx.is_in_onnx_export():
106 img_shape = torch._shape_as_tensor(img)[2:]
107 img_metas[0]['img_shape_for_onnx'] = img_shape
108 proposal_list = self.rpn_head.simple_test_rpn(x, img_metas)
109 if rescale:
110 for proposals, meta in zip(proposal_list, img_metas):
111 proposals[:, :4] /= proposals.new_tensor(meta['scale_factor'])
112 if torch.onnx.is_in_onnx_export():
113 return proposal_list
114
115 return [proposal.cpu().numpy() for proposal in proposal_list]
116
117 def aug_test(self, imgs, img_metas, rescale=False):
118 """Test function with test time augmentation.
119
120 Args:
121 imgs (list[torch.Tensor]): List of multiple images
122 img_metas (list[dict]): List of image information.
123 rescale (bool, optional): Whether to rescale the results.
124 Defaults to False.
125
126 Returns:
127 list[np.ndarray]: proposals
128 """
129 proposal_list = self.rpn_head.aug_test_rpn(
130 self.extract_feats(imgs), img_metas)
131 if not rescale:
132 for proposals, img_meta in zip(proposal_list, img_metas[0]):
133 img_shape = img_meta['img_shape']
134 scale_factor = img_meta['scale_factor']
135 flip = img_meta['flip']
136 flip_direction = img_meta['flip_direction']
137 proposals[:, :4] = bbox_mapping(proposals[:, :4], img_shape,
138 scale_factor, flip,
139 flip_direction)
140 return [proposal.cpu().numpy() for proposal in proposal_list]
141
142 def show_result(self, data, result, top_k=20, **kwargs):
143 """Show RPN proposals on the image.
144
145 Args:
146 data (str or np.ndarray): Image filename or loaded image.
147 result (Tensor or tuple): The results to draw over `img`
148 bbox_result or (bbox_result, segm_result).
149 top_k (int): Plot the first k bboxes only
150 if set positive. Default: 20
151
152 Returns:
153 np.ndarray: The image with bboxes drawn on it.
154 """
155 if kwargs is not None:
156 kwargs.pop('score_thr', None)
157 kwargs.pop('text_color', None)
158 kwargs['colors'] = kwargs.pop('bbox_color', 'green')
159 mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mmdet/models/detectors/rpn.py b/mmdet/models/detectors/rpn.py
--- a/mmdet/models/detectors/rpn.py
+++ b/mmdet/models/detectors/rpn.py
@@ -1,5 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved.
import warnings
+from inspect import signature
import mmcv
import torch
@@ -153,7 +154,9 @@
np.ndarray: The image with bboxes drawn on it.
"""
if kwargs is not None:
- kwargs.pop('score_thr', None)
- kwargs.pop('text_color', None)
- kwargs['colors'] = kwargs.pop('bbox_color', 'green')
+ kwargs['colors'] = 'green'
+ sig = signature(mmcv.imshow_bboxes)
+ for k in list(kwargs.keys()):
+ if k not in sig.parameters:
+ kwargs.pop(k)
mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)
| {"golden_diff": "diff --git a/mmdet/models/detectors/rpn.py b/mmdet/models/detectors/rpn.py\n--- a/mmdet/models/detectors/rpn.py\n+++ b/mmdet/models/detectors/rpn.py\n@@ -1,5 +1,6 @@\n # Copyright (c) OpenMMLab. All rights reserved.\n import warnings\n+from inspect import signature\n \n import mmcv\n import torch\n@@ -153,7 +154,9 @@\n np.ndarray: The image with bboxes drawn on it.\n \"\"\"\n if kwargs is not None:\n- kwargs.pop('score_thr', None)\n- kwargs.pop('text_color', None)\n- kwargs['colors'] = kwargs.pop('bbox_color', 'green')\n+ kwargs['colors'] = 'green'\n+ sig = signature(mmcv.imshow_bboxes)\n+ for k in list(kwargs.keys()):\n+ if k not in sig.parameters:\n+ kwargs.pop(k)\n mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)\n", "issue": "run image_demo.py with cascade_rpn model show error\n### Prerequisite\n\n- [X] I have searched [the existing and past issues](https://github.com/open-mmlab/mmdetection/issues) but cannot get the expected help.\n- [X] I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.\n- [X] The bug has not been fixed in the [latest version](https://github.com/open-mmlab/mmdetection).\n\n### \ud83d\udc1e Describe the bug\n\n``` python demo/image_demo.py demo/demo.jpg configs/cascade_rpn/crpn_r50_caffe_fpn_1x_coco.py checkpoints/cascade_rpn_r50_caffe_fpn_1x_coco-7aa93cef.pth```\r\n\r\n```\r\nUserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2895.)\r\n return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]\r\nTraceback (most recent call last):\r\n File \"demo/image_demo.py\", line 68, in <module>\r\n main(args)\r\n File \"demo/image_demo.py\", line 38, in main\r\n show_result_pyplot(\r\n File \"/home/ngi/IdeaProjects/mmlab/mmdetection/mmdet/apis/inference.py\", line 241, in show_result_pyplot\r\n model.show_result(\r\n File \"/home/ngi/IdeaProjects/mmlab/mmdetection/mmdet/models/detectors/rpn.py\", line 159, in show_result\r\n mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)\r\nTypeError: imshow_bboxes() got an unexpected keyword argument 'mask_color'\r\n\r\n```\n\n### Environment\n\n```\r\nPython: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]\r\nCUDA available: True\r\nGPU 0: GeForce RTX 2080 SUPER\r\nCUDA_HOME: /usr/local/cuda-10.2\r\nNVCC: Cuda compilation tools, release 10.2, V10.2.8\r\nGCC: gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609\r\nPyTorch: 1.12.1+cu102\r\nPyTorch compiling details: PyTorch built with:\r\n - GCC 7.3\r\n - C++ Version: 201402\r\n - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications\r\n - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)\r\n - OpenMP 201511 (a.k.a. 
OpenMP 4.5)\r\n - LAPACK is enabled (usually provided by MKL)\r\n - NNPACK is enabled\r\n - CPU capability usage: AVX2\r\n - CUDA Runtime 10.2\r\n - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70\r\n - CuDNN 7.6.5\r\n - Magma 2.5.2\r\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \r\n\r\nTorchVision: 0.13.1+cu102\r\nOpenCV: 4.6.0\r\nMMCV: 1.6.2\r\nMMCV Compiler: GCC 7.3\r\nMMCV CUDA Compiler: 10.2\r\nMMDetection: 2.25.2+9d3e162\r\n```\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. 
All rights reserved.\nimport warnings\n\nimport mmcv\nimport torch\nfrom mmcv.image import tensor2imgs\n\nfrom mmdet.core import bbox_mapping\nfrom ..builder import DETECTORS, build_backbone, build_head, build_neck\nfrom .base import BaseDetector\n\n\[email protected]_module()\nclass RPN(BaseDetector):\n \"\"\"Implementation of Region Proposal Network.\"\"\"\n\n def __init__(self,\n backbone,\n neck,\n rpn_head,\n train_cfg,\n test_cfg,\n pretrained=None,\n init_cfg=None):\n super(RPN, self).__init__(init_cfg)\n if pretrained:\n warnings.warn('DeprecationWarning: pretrained is deprecated, '\n 'please use \"init_cfg\" instead')\n backbone.pretrained = pretrained\n self.backbone = build_backbone(backbone)\n self.neck = build_neck(neck) if neck is not None else None\n rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None\n rpn_head.update(train_cfg=rpn_train_cfg)\n rpn_head.update(test_cfg=test_cfg.rpn)\n self.rpn_head = build_head(rpn_head)\n self.train_cfg = train_cfg\n self.test_cfg = test_cfg\n\n def extract_feat(self, img):\n \"\"\"Extract features.\n\n Args:\n img (torch.Tensor): Image tensor with shape (n, c, h ,w).\n\n Returns:\n list[torch.Tensor]: Multi-level features that may have\n different resolutions.\n \"\"\"\n x = self.backbone(img)\n if self.with_neck:\n x = self.neck(x)\n return x\n\n def forward_dummy(self, img):\n \"\"\"Dummy forward function.\"\"\"\n x = self.extract_feat(img)\n rpn_outs = self.rpn_head(x)\n return rpn_outs\n\n def forward_train(self,\n img,\n img_metas,\n gt_bboxes=None,\n gt_bboxes_ignore=None):\n \"\"\"\n Args:\n img (Tensor): Input images of shape (N, C, H, W).\n Typically these should be mean centered and std scaled.\n img_metas (list[dict]): A List of image info dict where each dict\n has: 'img_shape', 'scale_factor', 'flip', and may also contain\n 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.\n For details on the values of these keys see\n :class:`mmdet.datasets.pipelines.Collect`.\n gt_bboxes (list[Tensor]): Each item are the truth boxes for each\n image in [tl_x, tl_y, br_x, br_y] format.\n gt_bboxes_ignore (None | list[Tensor]): Specify which bounding\n boxes can be ignored when computing the loss.\n\n Returns:\n dict[str, Tensor]: A dictionary of loss components.\n \"\"\"\n if (isinstance(self.train_cfg.rpn, dict)\n and self.train_cfg.rpn.get('debug', False)):\n self.rpn_head.debug_imgs = tensor2imgs(img)\n\n x = self.extract_feat(img)\n losses = self.rpn_head.forward_train(x, img_metas, gt_bboxes, None,\n gt_bboxes_ignore)\n return losses\n\n def simple_test(self, img, img_metas, rescale=False):\n \"\"\"Test function without test time augmentation.\n\n Args:\n imgs (list[torch.Tensor]): List of multiple images\n img_metas (list[dict]): List of image information.\n rescale (bool, optional): Whether to rescale the results.\n Defaults to False.\n\n Returns:\n list[np.ndarray]: proposals\n \"\"\"\n x = self.extract_feat(img)\n # get origin input shape to onnx dynamic input shape\n if torch.onnx.is_in_onnx_export():\n img_shape = torch._shape_as_tensor(img)[2:]\n img_metas[0]['img_shape_for_onnx'] = img_shape\n proposal_list = self.rpn_head.simple_test_rpn(x, img_metas)\n if rescale:\n for proposals, meta in zip(proposal_list, img_metas):\n proposals[:, :4] /= proposals.new_tensor(meta['scale_factor'])\n if torch.onnx.is_in_onnx_export():\n return proposal_list\n\n return [proposal.cpu().numpy() for proposal in proposal_list]\n\n def aug_test(self, imgs, img_metas, rescale=False):\n \"\"\"Test function with test time 
augmentation.\n\n Args:\n imgs (list[torch.Tensor]): List of multiple images\n img_metas (list[dict]): List of image information.\n rescale (bool, optional): Whether to rescale the results.\n Defaults to False.\n\n Returns:\n list[np.ndarray]: proposals\n \"\"\"\n proposal_list = self.rpn_head.aug_test_rpn(\n self.extract_feats(imgs), img_metas)\n if not rescale:\n for proposals, img_meta in zip(proposal_list, img_metas[0]):\n img_shape = img_meta['img_shape']\n scale_factor = img_meta['scale_factor']\n flip = img_meta['flip']\n flip_direction = img_meta['flip_direction']\n proposals[:, :4] = bbox_mapping(proposals[:, :4], img_shape,\n scale_factor, flip,\n flip_direction)\n return [proposal.cpu().numpy() for proposal in proposal_list]\n\n def show_result(self, data, result, top_k=20, **kwargs):\n \"\"\"Show RPN proposals on the image.\n\n Args:\n data (str or np.ndarray): Image filename or loaded image.\n result (Tensor or tuple): The results to draw over `img`\n bbox_result or (bbox_result, segm_result).\n top_k (int): Plot the first k bboxes only\n if set positive. Default: 20\n\n Returns:\n np.ndarray: The image with bboxes drawn on it.\n \"\"\"\n if kwargs is not None:\n kwargs.pop('score_thr', None)\n kwargs.pop('text_color', None)\n kwargs['colors'] = kwargs.pop('bbox_color', 'green')\n mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)\n", "path": "mmdet/models/detectors/rpn.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport warnings\nfrom inspect import signature\n\nimport mmcv\nimport torch\nfrom mmcv.image import tensor2imgs\n\nfrom mmdet.core import bbox_mapping\nfrom ..builder import DETECTORS, build_backbone, build_head, build_neck\nfrom .base import BaseDetector\n\n\[email protected]_module()\nclass RPN(BaseDetector):\n \"\"\"Implementation of Region Proposal Network.\"\"\"\n\n def __init__(self,\n backbone,\n neck,\n rpn_head,\n train_cfg,\n test_cfg,\n pretrained=None,\n init_cfg=None):\n super(RPN, self).__init__(init_cfg)\n if pretrained:\n warnings.warn('DeprecationWarning: pretrained is deprecated, '\n 'please use \"init_cfg\" instead')\n backbone.pretrained = pretrained\n self.backbone = build_backbone(backbone)\n self.neck = build_neck(neck) if neck is not None else None\n rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None\n rpn_head.update(train_cfg=rpn_train_cfg)\n rpn_head.update(test_cfg=test_cfg.rpn)\n self.rpn_head = build_head(rpn_head)\n self.train_cfg = train_cfg\n self.test_cfg = test_cfg\n\n def extract_feat(self, img):\n \"\"\"Extract features.\n\n Args:\n img (torch.Tensor): Image tensor with shape (n, c, h ,w).\n\n Returns:\n list[torch.Tensor]: Multi-level features that may have\n different resolutions.\n \"\"\"\n x = self.backbone(img)\n if self.with_neck:\n x = self.neck(x)\n return x\n\n def forward_dummy(self, img):\n \"\"\"Dummy forward function.\"\"\"\n x = self.extract_feat(img)\n rpn_outs = self.rpn_head(x)\n return rpn_outs\n\n def forward_train(self,\n img,\n img_metas,\n gt_bboxes=None,\n gt_bboxes_ignore=None):\n \"\"\"\n Args:\n img (Tensor): Input images of shape (N, C, H, W).\n Typically these should be mean centered and std scaled.\n img_metas (list[dict]): A List of image info dict where each dict\n has: 'img_shape', 'scale_factor', 'flip', and may also contain\n 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.\n For details on the values of these keys see\n :class:`mmdet.datasets.pipelines.Collect`.\n gt_bboxes (list[Tensor]): Each item are the truth boxes for each\n 
image in [tl_x, tl_y, br_x, br_y] format.\n gt_bboxes_ignore (None | list[Tensor]): Specify which bounding\n boxes can be ignored when computing the loss.\n\n Returns:\n dict[str, Tensor]: A dictionary of loss components.\n \"\"\"\n if (isinstance(self.train_cfg.rpn, dict)\n and self.train_cfg.rpn.get('debug', False)):\n self.rpn_head.debug_imgs = tensor2imgs(img)\n\n x = self.extract_feat(img)\n losses = self.rpn_head.forward_train(x, img_metas, gt_bboxes, None,\n gt_bboxes_ignore)\n return losses\n\n def simple_test(self, img, img_metas, rescale=False):\n \"\"\"Test function without test time augmentation.\n\n Args:\n imgs (list[torch.Tensor]): List of multiple images\n img_metas (list[dict]): List of image information.\n rescale (bool, optional): Whether to rescale the results.\n Defaults to False.\n\n Returns:\n list[np.ndarray]: proposals\n \"\"\"\n x = self.extract_feat(img)\n # get origin input shape to onnx dynamic input shape\n if torch.onnx.is_in_onnx_export():\n img_shape = torch._shape_as_tensor(img)[2:]\n img_metas[0]['img_shape_for_onnx'] = img_shape\n proposal_list = self.rpn_head.simple_test_rpn(x, img_metas)\n if rescale:\n for proposals, meta in zip(proposal_list, img_metas):\n proposals[:, :4] /= proposals.new_tensor(meta['scale_factor'])\n if torch.onnx.is_in_onnx_export():\n return proposal_list\n\n return [proposal.cpu().numpy() for proposal in proposal_list]\n\n def aug_test(self, imgs, img_metas, rescale=False):\n \"\"\"Test function with test time augmentation.\n\n Args:\n imgs (list[torch.Tensor]): List of multiple images\n img_metas (list[dict]): List of image information.\n rescale (bool, optional): Whether to rescale the results.\n Defaults to False.\n\n Returns:\n list[np.ndarray]: proposals\n \"\"\"\n proposal_list = self.rpn_head.aug_test_rpn(\n self.extract_feats(imgs), img_metas)\n if not rescale:\n for proposals, img_meta in zip(proposal_list, img_metas[0]):\n img_shape = img_meta['img_shape']\n scale_factor = img_meta['scale_factor']\n flip = img_meta['flip']\n flip_direction = img_meta['flip_direction']\n proposals[:, :4] = bbox_mapping(proposals[:, :4], img_shape,\n scale_factor, flip,\n flip_direction)\n return [proposal.cpu().numpy() for proposal in proposal_list]\n\n def show_result(self, data, result, top_k=20, **kwargs):\n \"\"\"Show RPN proposals on the image.\n\n Args:\n data (str or np.ndarray): Image filename or loaded image.\n result (Tensor or tuple): The results to draw over `img`\n bbox_result or (bbox_result, segm_result).\n top_k (int): Plot the first k bboxes only\n if set positive. Default: 20\n\n Returns:\n np.ndarray: The image with bboxes drawn on it.\n \"\"\"\n if kwargs is not None:\n kwargs['colors'] = 'green'\n sig = signature(mmcv.imshow_bboxes)\n for k in list(kwargs.keys()):\n if k not in sig.parameters:\n kwargs.pop(k)\n mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs)\n", "path": "mmdet/models/detectors/rpn.py"}]} | 3,360 | 228 |
gh_patches_debug_43438 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2780 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
E0002: Unknown exception while processing rule E1024: 'list' object has no attribute 'path'
### CloudFormation Lint Version
cfn-lint 0.77.10
### What operating system are you using?
Mac
### Describe the bug
When validation checks run on a the Cidr function `!Cidr [ ipBlock, count, cidrBits ]`, the count and cidrBits doesn't validate more complex functions like the `!If` in the example.
### Expected behavior
Doesn't throw unknown exception
### Reproduction template
```yaml
PublicSubnet1:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
AvailabilityZone: !Select [0, !GetAZs ""]
CidrBlock: !If
- defineYourOwnSubnetCIDR
- !Ref FirstTierSubnet1CIDR
- !Select
- 0
- !Cidr
- 10.0.0.0/24
- !If [3AZ, 16, 8]
- 4
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/functions/Cidr.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import regex as re
6
7 from cfnlint.helpers import REGEX_CIDR
8 from cfnlint.rules import CloudFormationLintRule, RuleMatch
9
10
11 class Cidr(CloudFormationLintRule):
12 """Check if Cidr values are correct"""
13
14 id = "E1024"
15 shortdesc = "Cidr validation of parameters"
16 description = "Making sure the function CIDR is a list with valid values"
17 source_url = "https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-cidr.html"
18 tags = ["functions", "cidr"]
19
20 supported_functions = [
21 "Fn::FindInMap",
22 "Fn::Select",
23 "Ref",
24 "Fn::GetAtt",
25 "Fn::Sub",
26 "Fn::ImportValue",
27 ]
28
29 def check_ip_block(self, value, path):
30 matches = []
31 if isinstance(value, dict):
32 if len(value) == 1:
33 for index_key, _ in value.items():
34 if index_key not in self.supported_functions:
35 if index_key == "Fn::If":
36 if len(value.get("Fn::If")) == 3 and isinstance(
37 value.get("Fn::If"), list
38 ):
39 matches.extend(
40 self.check_ip_block(
41 value.get("Fn::If")[1],
42 path=path[:] + [index_key, 1],
43 )
44 )
45 matches.extend(
46 self.check_ip_block(
47 value.get("Fn::If")[2],
48 path=path[:] + [index_key, 2],
49 )
50 )
51 else:
52 message = "Cidr ipBlock should be Cidr Range, Ref, GetAtt, Sub or Select for {0}"
53 matches.append(
54 RuleMatch(
55 path, message.format("/".join(map(str, value)))
56 )
57 )
58 elif isinstance(value, (str)):
59 if not re.match(REGEX_CIDR, value):
60 message = "Cidr ipBlock should be a Cidr Range based string for {0}"
61 matches.append(
62 RuleMatch(path, message.format("/".join(map(str, path))))
63 )
64 else:
65 message = "Cidr ipBlock should be a string for {0}"
66 matches.append(RuleMatch(path, message.format("/".join(map(str, path)))))
67
68 return matches
69
70 def check_count(self, value, path):
71 matches = []
72 count_parameters = []
73 if isinstance(value, dict):
74 if len(value) == 1:
75 for index_key, index_value in value.items():
76 if index_key not in self.supported_functions:
77 if index_key == "Fn::If":
78 if len(value.get("Fn::If")) == 3 and isinstance(
79 value.get("Fn::If"), list
80 ):
81 matches.extend(
82 self.check_count(
83 value.get("Fn::If")[1],
84 path=path[:] + [index_key, 1],
85 )
86 )
87 matches.extend(
88 self.check_count(
89 value.get("Fn::If")[2],
90 path=path[:] + [index_key, 2],
91 )
92 )
93 else:
94 message = "Cidr count should be Int, Ref, or Select for {0}"
95 matches.append(
96 RuleMatch(
97 path, message.format("/".join(map(str, path)))
98 )
99 )
100 if index_key == "Ref":
101 count_parameters.append(index_value)
102 elif not isinstance(value, int):
103 message = "Cidr count should be a int for {0}"
104 extra_args = {
105 "actual_type": type(value).__name__,
106 "expected_type": int.__name__,
107 }
108 matches.append(
109 RuleMatch(path, message.format("/".join(map(str, path))), **extra_args)
110 )
111
112 return count_parameters, matches
113
114 def check_size_mask(self, value, path):
115 matches = []
116 size_mask_parameters = []
117 if isinstance(value, dict):
118 if len(value) == 1:
119 for index_key, index_value in value.items():
120 if index_key not in self.supported_functions:
121 if index_key == "Fn::If":
122 if len(value.get("Fn::If")) == 3 and isinstance(
123 value.get("Fn::If"), list
124 ):
125 matches.extend(
126 self.check_size_mask(
127 value.get("Fn::If")[1],
128 path=path[:] + [index_key, 1],
129 )
130 )
131 matches.extend(
132 self.check_size_mask(
133 value.get("Fn::If")[2],
134 path=path[:] + [index_key, 2],
135 )
136 )
137 else:
138 message = (
139 "Cidr sizeMask should be Int, Ref, or Select for {0}"
140 )
141 matches.append(
142 RuleMatch(
143 path, message.format("/".join(map(str, path)))
144 )
145 )
146 if index_key == "Ref":
147 size_mask_parameters.append(index_value)
148 elif not isinstance(value, int):
149 message = "Cidr sizeMask should be a int for {0}"
150 extra_args = {
151 "actual_type": type(value).__name__,
152 "expected_type": int.__name__,
153 }
154 matches.append(
155 RuleMatch(path, message.format("/".join(map(str, path))), **extra_args)
156 )
157
158 return size_mask_parameters, matches
159
160 def check_parameter_count(self, cfn, parameter_name):
161 """Check Count Parameter if used"""
162 matches = []
163 parameter_obj = cfn.get_parameters().get(parameter_name, {})
164 if parameter_obj:
165 tree = ["Parameters", parameter_name]
166 parameter_type = parameter_obj.get("Type")
167 if parameter_type == "Number":
168 max_value = parameter_obj.get("MaxValue")
169 min_value = parameter_obj.get("MinValue")
170 if (not min_value) or min_value < 1 or min_value > 256:
171 message = "Parameter for Cidr count have MinValue between 1 and 256 at {0}"
172 matches.append(
173 RuleMatch(
174 tree + ["MinValue"],
175 message.format("/".join(map(str, tree + ["MinValue"]))),
176 )
177 )
178 if (not max_value) or max_value < 1 or max_value > 256:
179 message = "Parameter for Cidr count have MaxValue between 1 and 256 at {0}"
180 matches.append(
181 RuleMatch(
182 tree + ["MaxValue"],
183 message.format("/".join(map(str, tree + ["MaxValue"]))),
184 )
185 )
186 else:
187 message = "Parameter for Cidr count have be of Type Number at {0}"
188 matches.append(
189 RuleMatch(tree, message.format("/".join(map(str, tree))))
190 )
191
192 return matches
193
194 def check_parameter_size_mask(self, cfn, parameter_name):
195 """Check SizeMask Parameter if used"""
196 matches = []
197 parameter_obj = cfn.get_parameters().get(parameter_name, {})
198 if parameter_obj:
199 tree = ["Parameters", parameter_name]
200 parameter_type = parameter_obj.get("Type")
201 if parameter_type == "Number":
202 max_value = parameter_obj.get("MaxValue")
203 min_value = parameter_obj.get("MinValue")
204 if (not min_value) or min_value < 1 or min_value > 256:
205 message = (
206 "Parameter for Cidr sizeMask have MinValue between 1 and "
207 "128 (for ipv6) and 32 (for ipv4) at {0}"
208 )
209 matches.append(
210 RuleMatch(
211 tree + ["MinValue"],
212 message.format("/".join(map(str, tree + ["MinValue"]))),
213 )
214 )
215 if (not max_value) or max_value < 1 or max_value > 256:
216 message = (
217 "Parameter for Cidr count have MaxValue between 1 and "
218 "128 (for ipv6) and 32 (for ipv4) at {0}"
219 )
220 matches.append(
221 RuleMatch(
222 tree + ["MaxValue"],
223 message.format("/".join(map(str, tree + ["MaxValue"]))),
224 )
225 )
226 else:
227 message = "Parameter for Cidr count have be of Type Number at {0}"
228 matches.append(
229 RuleMatch(tree, message.format("/".join(map(str, tree))))
230 )
231
232 return matches
233
234 def match(self, cfn):
235 matches = []
236
237 cidr_objs = cfn.search_deep_keys("Fn::Cidr")
238
239 count_parameters = []
240 size_mask_parameters = []
241
242 for cidr_obj in cidr_objs:
243 cidr_value_obj = cidr_obj[-1]
244 tree = cidr_obj[:-1]
245 if isinstance(cidr_value_obj, list):
246 if len(cidr_value_obj) in [2, 3]:
247 ip_block_obj = cidr_value_obj[0]
248 count_obj = cidr_value_obj[1]
249 if len(cidr_value_obj) == 3:
250 size_mask_obj = cidr_value_obj[2]
251 else:
252 size_mask_obj = None
253
254 matches.extend(self.check_ip_block(ip_block_obj, tree[:] + [0]))
255
256 new_count_parameters, new_matches = self.check_count(
257 count_obj, tree[:] + [1]
258 )
259 count_parameters.extend(new_count_parameters)
260 matches.extend(new_matches)
261
262 new_size_mask_parameters, new_matches = self.check_size_mask(
263 size_mask_obj, tree[:] + [2]
264 )
265 size_mask_parameters.extend(new_size_mask_parameters)
266 matches.extend(new_matches)
267
268 else:
269 message = "Cidr should be a list of 2 or 3 elements for {0}"
270 matches.append(
271 RuleMatch(tree, message.format("/".join(map(str, tree))))
272 )
273 else:
274 message = "Cidr should be a list of 2 or 3 elements for {0}"
275 matches.append(
276 RuleMatch(tree, message.format("/".join(map(str, tree))))
277 )
278
279 for count_parameter in set(count_parameters):
280 matches.extend(self.check_parameter_count(cfn, count_parameter))
281 for size_mask_parameter in set(size_mask_parameters):
282 matches.extend(self.check_parameter_size_mask(cfn, size_mask_parameter))
283
284 return matches
285
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/functions/Cidr.py b/src/cfnlint/rules/functions/Cidr.py
--- a/src/cfnlint/rules/functions/Cidr.py
+++ b/src/cfnlint/rules/functions/Cidr.py
@@ -49,7 +49,10 @@
)
)
else:
- message = "Cidr ipBlock should be Cidr Range, Ref, GetAtt, Sub or Select for {0}"
+ message = (
+ "Cidr ipBlock should be Cidr Range, Ref, GetAtt, Sub or"
+ " Select for {0}"
+ )
matches.append(
RuleMatch(
path, message.format("/".join(map(str, value)))
@@ -78,18 +81,16 @@
if len(value.get("Fn::If")) == 3 and isinstance(
value.get("Fn::If"), list
):
- matches.extend(
- self.check_count(
- value.get("Fn::If")[1],
- path=path[:] + [index_key, 1],
- )
- )
- matches.extend(
- self.check_count(
- value.get("Fn::If")[2],
- path=path[:] + [index_key, 2],
+ for i in [1, 2]:
+ (
+ new_count_parameters,
+ new_matches,
+ ) = self.check_count(
+ value.get("Fn::If")[i],
+ path=path[:] + [index_key, i],
)
- )
+ count_parameters.extend(new_count_parameters)
+ matches.extend(new_matches)
else:
message = "Cidr count should be Int, Ref, or Select for {0}"
matches.append(
@@ -168,7 +169,10 @@
max_value = parameter_obj.get("MaxValue")
min_value = parameter_obj.get("MinValue")
if (not min_value) or min_value < 1 or min_value > 256:
- message = "Parameter for Cidr count have MinValue between 1 and 256 at {0}"
+ message = (
+ "Parameter for Cidr count have MinValue between 1 and 256"
+ " at {0}"
+ )
matches.append(
RuleMatch(
tree + ["MinValue"],
@@ -176,7 +180,10 @@
)
)
if (not max_value) or max_value < 1 or max_value > 256:
- message = "Parameter for Cidr count have MaxValue between 1 and 256 at {0}"
+ message = (
+ "Parameter for Cidr count have MaxValue between 1 and 256"
+ " at {0}"
+ )
matches.append(
RuleMatch(
tree + ["MaxValue"],
@@ -258,7 +265,6 @@
)
count_parameters.extend(new_count_parameters)
matches.extend(new_matches)
-
new_size_mask_parameters, new_matches = self.check_size_mask(
size_mask_obj, tree[:] + [2]
)
| {"golden_diff": "diff --git a/src/cfnlint/rules/functions/Cidr.py b/src/cfnlint/rules/functions/Cidr.py\n--- a/src/cfnlint/rules/functions/Cidr.py\n+++ b/src/cfnlint/rules/functions/Cidr.py\n@@ -49,7 +49,10 @@\n )\n )\n else:\n- message = \"Cidr ipBlock should be Cidr Range, Ref, GetAtt, Sub or Select for {0}\"\n+ message = (\n+ \"Cidr ipBlock should be Cidr Range, Ref, GetAtt, Sub or\"\n+ \" Select for {0}\"\n+ )\n matches.append(\n RuleMatch(\n path, message.format(\"/\".join(map(str, value)))\n@@ -78,18 +81,16 @@\n if len(value.get(\"Fn::If\")) == 3 and isinstance(\n value.get(\"Fn::If\"), list\n ):\n- matches.extend(\n- self.check_count(\n- value.get(\"Fn::If\")[1],\n- path=path[:] + [index_key, 1],\n- )\n- )\n- matches.extend(\n- self.check_count(\n- value.get(\"Fn::If\")[2],\n- path=path[:] + [index_key, 2],\n+ for i in [1, 2]:\n+ (\n+ new_count_parameters,\n+ new_matches,\n+ ) = self.check_count(\n+ value.get(\"Fn::If\")[i],\n+ path=path[:] + [index_key, i],\n )\n- )\n+ count_parameters.extend(new_count_parameters)\n+ matches.extend(new_matches)\n else:\n message = \"Cidr count should be Int, Ref, or Select for {0}\"\n matches.append(\n@@ -168,7 +169,10 @@\n max_value = parameter_obj.get(\"MaxValue\")\n min_value = parameter_obj.get(\"MinValue\")\n if (not min_value) or min_value < 1 or min_value > 256:\n- message = \"Parameter for Cidr count have MinValue between 1 and 256 at {0}\"\n+ message = (\n+ \"Parameter for Cidr count have MinValue between 1 and 256\"\n+ \" at {0}\"\n+ )\n matches.append(\n RuleMatch(\n tree + [\"MinValue\"],\n@@ -176,7 +180,10 @@\n )\n )\n if (not max_value) or max_value < 1 or max_value > 256:\n- message = \"Parameter for Cidr count have MaxValue between 1 and 256 at {0}\"\n+ message = (\n+ \"Parameter for Cidr count have MaxValue between 1 and 256\"\n+ \" at {0}\"\n+ )\n matches.append(\n RuleMatch(\n tree + [\"MaxValue\"],\n@@ -258,7 +265,6 @@\n )\n count_parameters.extend(new_count_parameters)\n matches.extend(new_matches)\n-\n new_size_mask_parameters, new_matches = self.check_size_mask(\n size_mask_obj, tree[:] + [2]\n )\n", "issue": "E0002: Unknown exception while processing rule E1024: 'list' object has no attribute 'path'\n### CloudFormation Lint Version\n\ncfn-lint 0.77.10\n\n### What operating system are you using?\n\nMac\n\n### Describe the bug\n\nWhen validation checks run on a the Cidr function `!Cidr [ ipBlock, count, cidrBits ]`, the count and cidrBits doesn't validate more complex functions like the `!If` in the example.\n\n### Expected behavior\n\nDoesn't throw unknown exception\n\n### Reproduction template\n\n```yaml\r\nPublicSubnet1:\r\n Type: AWS::EC2::Subnet\r\n Properties:\r\n VpcId: !Ref VPC\r\n AvailabilityZone: !Select [0, !GetAZs \"\"]\r\n CidrBlock: !If\r\n - defineYourOwnSubnetCIDR\r\n - !Ref FirstTierSubnet1CIDR\r\n - !Select\r\n - 0\r\n - !Cidr\r\n - 10.0.0.0/24\r\n - !If [3AZ, 16, 8]\r\n - 4\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport regex as re\n\nfrom cfnlint.helpers import REGEX_CIDR\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass Cidr(CloudFormationLintRule):\n \"\"\"Check if Cidr values are correct\"\"\"\n\n id = \"E1024\"\n shortdesc = \"Cidr validation of parameters\"\n description = \"Making sure the function CIDR is a list with valid values\"\n source_url = \"https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-cidr.html\"\n tags = [\"functions\", \"cidr\"]\n\n supported_functions = [\n \"Fn::FindInMap\",\n \"Fn::Select\",\n \"Ref\",\n \"Fn::GetAtt\",\n \"Fn::Sub\",\n \"Fn::ImportValue\",\n ]\n\n def check_ip_block(self, value, path):\n matches = []\n if isinstance(value, dict):\n if len(value) == 1:\n for index_key, _ in value.items():\n if index_key not in self.supported_functions:\n if index_key == \"Fn::If\":\n if len(value.get(\"Fn::If\")) == 3 and isinstance(\n value.get(\"Fn::If\"), list\n ):\n matches.extend(\n self.check_ip_block(\n value.get(\"Fn::If\")[1],\n path=path[:] + [index_key, 1],\n )\n )\n matches.extend(\n self.check_ip_block(\n value.get(\"Fn::If\")[2],\n path=path[:] + [index_key, 2],\n )\n )\n else:\n message = \"Cidr ipBlock should be Cidr Range, Ref, GetAtt, Sub or Select for {0}\"\n matches.append(\n RuleMatch(\n path, message.format(\"/\".join(map(str, value)))\n )\n )\n elif isinstance(value, (str)):\n if not re.match(REGEX_CIDR, value):\n message = \"Cidr ipBlock should be a Cidr Range based string for {0}\"\n matches.append(\n RuleMatch(path, message.format(\"/\".join(map(str, path))))\n )\n else:\n message = \"Cidr ipBlock should be a string for {0}\"\n matches.append(RuleMatch(path, message.format(\"/\".join(map(str, path)))))\n\n return matches\n\n def check_count(self, value, path):\n matches = []\n count_parameters = []\n if isinstance(value, dict):\n if len(value) == 1:\n for index_key, index_value in value.items():\n if index_key not in self.supported_functions:\n if index_key == \"Fn::If\":\n if len(value.get(\"Fn::If\")) == 3 and isinstance(\n value.get(\"Fn::If\"), list\n ):\n matches.extend(\n self.check_count(\n value.get(\"Fn::If\")[1],\n path=path[:] + [index_key, 1],\n )\n )\n matches.extend(\n self.check_count(\n value.get(\"Fn::If\")[2],\n path=path[:] + [index_key, 2],\n )\n )\n else:\n message = \"Cidr count should be Int, Ref, or Select for {0}\"\n matches.append(\n RuleMatch(\n path, message.format(\"/\".join(map(str, path)))\n )\n )\n if index_key == \"Ref\":\n count_parameters.append(index_value)\n elif not isinstance(value, int):\n message = \"Cidr count should be a int for {0}\"\n extra_args = {\n \"actual_type\": type(value).__name__,\n \"expected_type\": int.__name__,\n }\n matches.append(\n RuleMatch(path, message.format(\"/\".join(map(str, path))), **extra_args)\n )\n\n return count_parameters, matches\n\n def check_size_mask(self, value, path):\n matches = []\n size_mask_parameters = []\n if isinstance(value, dict):\n if len(value) == 1:\n for index_key, index_value in value.items():\n if index_key not in self.supported_functions:\n if index_key == \"Fn::If\":\n if len(value.get(\"Fn::If\")) == 3 and isinstance(\n value.get(\"Fn::If\"), list\n ):\n matches.extend(\n self.check_size_mask(\n value.get(\"Fn::If\")[1],\n path=path[:] + [index_key, 1],\n )\n )\n matches.extend(\n self.check_size_mask(\n value.get(\"Fn::If\")[2],\n path=path[:] + [index_key, 2],\n )\n )\n else:\n message = (\n \"Cidr sizeMask should be Int, 
Ref, or Select for {0}\"\n )\n matches.append(\n RuleMatch(\n path, message.format(\"/\".join(map(str, path)))\n )\n )\n if index_key == \"Ref\":\n size_mask_parameters.append(index_value)\n elif not isinstance(value, int):\n message = \"Cidr sizeMask should be a int for {0}\"\n extra_args = {\n \"actual_type\": type(value).__name__,\n \"expected_type\": int.__name__,\n }\n matches.append(\n RuleMatch(path, message.format(\"/\".join(map(str, path))), **extra_args)\n )\n\n return size_mask_parameters, matches\n\n def check_parameter_count(self, cfn, parameter_name):\n \"\"\"Check Count Parameter if used\"\"\"\n matches = []\n parameter_obj = cfn.get_parameters().get(parameter_name, {})\n if parameter_obj:\n tree = [\"Parameters\", parameter_name]\n parameter_type = parameter_obj.get(\"Type\")\n if parameter_type == \"Number\":\n max_value = parameter_obj.get(\"MaxValue\")\n min_value = parameter_obj.get(\"MinValue\")\n if (not min_value) or min_value < 1 or min_value > 256:\n message = \"Parameter for Cidr count have MinValue between 1 and 256 at {0}\"\n matches.append(\n RuleMatch(\n tree + [\"MinValue\"],\n message.format(\"/\".join(map(str, tree + [\"MinValue\"]))),\n )\n )\n if (not max_value) or max_value < 1 or max_value > 256:\n message = \"Parameter for Cidr count have MaxValue between 1 and 256 at {0}\"\n matches.append(\n RuleMatch(\n tree + [\"MaxValue\"],\n message.format(\"/\".join(map(str, tree + [\"MaxValue\"]))),\n )\n )\n else:\n message = \"Parameter for Cidr count have be of Type Number at {0}\"\n matches.append(\n RuleMatch(tree, message.format(\"/\".join(map(str, tree))))\n )\n\n return matches\n\n def check_parameter_size_mask(self, cfn, parameter_name):\n \"\"\"Check SizeMask Parameter if used\"\"\"\n matches = []\n parameter_obj = cfn.get_parameters().get(parameter_name, {})\n if parameter_obj:\n tree = [\"Parameters\", parameter_name]\n parameter_type = parameter_obj.get(\"Type\")\n if parameter_type == \"Number\":\n max_value = parameter_obj.get(\"MaxValue\")\n min_value = parameter_obj.get(\"MinValue\")\n if (not min_value) or min_value < 1 or min_value > 256:\n message = (\n \"Parameter for Cidr sizeMask have MinValue between 1 and \"\n \"128 (for ipv6) and 32 (for ipv4) at {0}\"\n )\n matches.append(\n RuleMatch(\n tree + [\"MinValue\"],\n message.format(\"/\".join(map(str, tree + [\"MinValue\"]))),\n )\n )\n if (not max_value) or max_value < 1 or max_value > 256:\n message = (\n \"Parameter for Cidr count have MaxValue between 1 and \"\n \"128 (for ipv6) and 32 (for ipv4) at {0}\"\n )\n matches.append(\n RuleMatch(\n tree + [\"MaxValue\"],\n message.format(\"/\".join(map(str, tree + [\"MaxValue\"]))),\n )\n )\n else:\n message = \"Parameter for Cidr count have be of Type Number at {0}\"\n matches.append(\n RuleMatch(tree, message.format(\"/\".join(map(str, tree))))\n )\n\n return matches\n\n def match(self, cfn):\n matches = []\n\n cidr_objs = cfn.search_deep_keys(\"Fn::Cidr\")\n\n count_parameters = []\n size_mask_parameters = []\n\n for cidr_obj in cidr_objs:\n cidr_value_obj = cidr_obj[-1]\n tree = cidr_obj[:-1]\n if isinstance(cidr_value_obj, list):\n if len(cidr_value_obj) in [2, 3]:\n ip_block_obj = cidr_value_obj[0]\n count_obj = cidr_value_obj[1]\n if len(cidr_value_obj) == 3:\n size_mask_obj = cidr_value_obj[2]\n else:\n size_mask_obj = None\n\n matches.extend(self.check_ip_block(ip_block_obj, tree[:] + [0]))\n\n new_count_parameters, new_matches = self.check_count(\n count_obj, tree[:] + [1]\n )\n count_parameters.extend(new_count_parameters)\n 
matches.extend(new_matches)\n\n new_size_mask_parameters, new_matches = self.check_size_mask(\n size_mask_obj, tree[:] + [2]\n )\n size_mask_parameters.extend(new_size_mask_parameters)\n matches.extend(new_matches)\n\n else:\n message = \"Cidr should be a list of 2 or 3 elements for {0}\"\n matches.append(\n RuleMatch(tree, message.format(\"/\".join(map(str, tree))))\n )\n else:\n message = \"Cidr should be a list of 2 or 3 elements for {0}\"\n matches.append(\n RuleMatch(tree, message.format(\"/\".join(map(str, tree))))\n )\n\n for count_parameter in set(count_parameters):\n matches.extend(self.check_parameter_count(cfn, count_parameter))\n for size_mask_parameter in set(size_mask_parameters):\n matches.extend(self.check_parameter_size_mask(cfn, size_mask_parameter))\n\n return matches\n", "path": "src/cfnlint/rules/functions/Cidr.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport regex as re\n\nfrom cfnlint.helpers import REGEX_CIDR\nfrom cfnlint.rules import CloudFormationLintRule, RuleMatch\n\n\nclass Cidr(CloudFormationLintRule):\n \"\"\"Check if Cidr values are correct\"\"\"\n\n id = \"E1024\"\n shortdesc = \"Cidr validation of parameters\"\n description = \"Making sure the function CIDR is a list with valid values\"\n source_url = \"https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-cidr.html\"\n tags = [\"functions\", \"cidr\"]\n\n supported_functions = [\n \"Fn::FindInMap\",\n \"Fn::Select\",\n \"Ref\",\n \"Fn::GetAtt\",\n \"Fn::Sub\",\n \"Fn::ImportValue\",\n ]\n\n def check_ip_block(self, value, path):\n matches = []\n if isinstance(value, dict):\n if len(value) == 1:\n for index_key, _ in value.items():\n if index_key not in self.supported_functions:\n if index_key == \"Fn::If\":\n if len(value.get(\"Fn::If\")) == 3 and isinstance(\n value.get(\"Fn::If\"), list\n ):\n matches.extend(\n self.check_ip_block(\n value.get(\"Fn::If\")[1],\n path=path[:] + [index_key, 1],\n )\n )\n matches.extend(\n self.check_ip_block(\n value.get(\"Fn::If\")[2],\n path=path[:] + [index_key, 2],\n )\n )\n else:\n message = (\n \"Cidr ipBlock should be Cidr Range, Ref, GetAtt, Sub or\"\n \" Select for {0}\"\n )\n matches.append(\n RuleMatch(\n path, message.format(\"/\".join(map(str, value)))\n )\n )\n elif isinstance(value, (str)):\n if not re.match(REGEX_CIDR, value):\n message = \"Cidr ipBlock should be a Cidr Range based string for {0}\"\n matches.append(\n RuleMatch(path, message.format(\"/\".join(map(str, path))))\n )\n else:\n message = \"Cidr ipBlock should be a string for {0}\"\n matches.append(RuleMatch(path, message.format(\"/\".join(map(str, path)))))\n\n return matches\n\n def check_count(self, value, path):\n matches = []\n count_parameters = []\n if isinstance(value, dict):\n if len(value) == 1:\n for index_key, index_value in value.items():\n if index_key not in self.supported_functions:\n if index_key == \"Fn::If\":\n if len(value.get(\"Fn::If\")) == 3 and isinstance(\n value.get(\"Fn::If\"), list\n ):\n for i in [1, 2]:\n (\n new_count_parameters,\n new_matches,\n ) = self.check_count(\n value.get(\"Fn::If\")[i],\n path=path[:] + [index_key, i],\n )\n count_parameters.extend(new_count_parameters)\n matches.extend(new_matches)\n else:\n message = \"Cidr count should be Int, Ref, or Select for {0}\"\n matches.append(\n RuleMatch(\n path, message.format(\"/\".join(map(str, path)))\n )\n )\n if index_key == \"Ref\":\n 
count_parameters.append(index_value)\n elif not isinstance(value, int):\n message = \"Cidr count should be a int for {0}\"\n extra_args = {\n \"actual_type\": type(value).__name__,\n \"expected_type\": int.__name__,\n }\n matches.append(\n RuleMatch(path, message.format(\"/\".join(map(str, path))), **extra_args)\n )\n\n return count_parameters, matches\n\n def check_size_mask(self, value, path):\n matches = []\n size_mask_parameters = []\n if isinstance(value, dict):\n if len(value) == 1:\n for index_key, index_value in value.items():\n if index_key not in self.supported_functions:\n if index_key == \"Fn::If\":\n if len(value.get(\"Fn::If\")) == 3 and isinstance(\n value.get(\"Fn::If\"), list\n ):\n matches.extend(\n self.check_size_mask(\n value.get(\"Fn::If\")[1],\n path=path[:] + [index_key, 1],\n )\n )\n matches.extend(\n self.check_size_mask(\n value.get(\"Fn::If\")[2],\n path=path[:] + [index_key, 2],\n )\n )\n else:\n message = (\n \"Cidr sizeMask should be Int, Ref, or Select for {0}\"\n )\n matches.append(\n RuleMatch(\n path, message.format(\"/\".join(map(str, path)))\n )\n )\n if index_key == \"Ref\":\n size_mask_parameters.append(index_value)\n elif not isinstance(value, int):\n message = \"Cidr sizeMask should be a int for {0}\"\n extra_args = {\n \"actual_type\": type(value).__name__,\n \"expected_type\": int.__name__,\n }\n matches.append(\n RuleMatch(path, message.format(\"/\".join(map(str, path))), **extra_args)\n )\n\n return size_mask_parameters, matches\n\n def check_parameter_count(self, cfn, parameter_name):\n \"\"\"Check Count Parameter if used\"\"\"\n matches = []\n parameter_obj = cfn.get_parameters().get(parameter_name, {})\n if parameter_obj:\n tree = [\"Parameters\", parameter_name]\n parameter_type = parameter_obj.get(\"Type\")\n if parameter_type == \"Number\":\n max_value = parameter_obj.get(\"MaxValue\")\n min_value = parameter_obj.get(\"MinValue\")\n if (not min_value) or min_value < 1 or min_value > 256:\n message = (\n \"Parameter for Cidr count have MinValue between 1 and 256\"\n \" at {0}\"\n )\n matches.append(\n RuleMatch(\n tree + [\"MinValue\"],\n message.format(\"/\".join(map(str, tree + [\"MinValue\"]))),\n )\n )\n if (not max_value) or max_value < 1 or max_value > 256:\n message = (\n \"Parameter for Cidr count have MaxValue between 1 and 256\"\n \" at {0}\"\n )\n matches.append(\n RuleMatch(\n tree + [\"MaxValue\"],\n message.format(\"/\".join(map(str, tree + [\"MaxValue\"]))),\n )\n )\n else:\n message = \"Parameter for Cidr count have be of Type Number at {0}\"\n matches.append(\n RuleMatch(tree, message.format(\"/\".join(map(str, tree))))\n )\n\n return matches\n\n def check_parameter_size_mask(self, cfn, parameter_name):\n \"\"\"Check SizeMask Parameter if used\"\"\"\n matches = []\n parameter_obj = cfn.get_parameters().get(parameter_name, {})\n if parameter_obj:\n tree = [\"Parameters\", parameter_name]\n parameter_type = parameter_obj.get(\"Type\")\n if parameter_type == \"Number\":\n max_value = parameter_obj.get(\"MaxValue\")\n min_value = parameter_obj.get(\"MinValue\")\n if (not min_value) or min_value < 1 or min_value > 256:\n message = (\n \"Parameter for Cidr sizeMask have MinValue between 1 and \"\n \"128 (for ipv6) and 32 (for ipv4) at {0}\"\n )\n matches.append(\n RuleMatch(\n tree + [\"MinValue\"],\n message.format(\"/\".join(map(str, tree + [\"MinValue\"]))),\n )\n )\n if (not max_value) or max_value < 1 or max_value > 256:\n message = (\n \"Parameter for Cidr count have MaxValue between 1 and \"\n \"128 (for ipv6) and 32 (for 
ipv4) at {0}\"\n )\n matches.append(\n RuleMatch(\n tree + [\"MaxValue\"],\n message.format(\"/\".join(map(str, tree + [\"MaxValue\"]))),\n )\n )\n else:\n message = \"Parameter for Cidr count have be of Type Number at {0}\"\n matches.append(\n RuleMatch(tree, message.format(\"/\".join(map(str, tree))))\n )\n\n return matches\n\n def match(self, cfn):\n matches = []\n\n cidr_objs = cfn.search_deep_keys(\"Fn::Cidr\")\n\n count_parameters = []\n size_mask_parameters = []\n\n for cidr_obj in cidr_objs:\n cidr_value_obj = cidr_obj[-1]\n tree = cidr_obj[:-1]\n if isinstance(cidr_value_obj, list):\n if len(cidr_value_obj) in [2, 3]:\n ip_block_obj = cidr_value_obj[0]\n count_obj = cidr_value_obj[1]\n if len(cidr_value_obj) == 3:\n size_mask_obj = cidr_value_obj[2]\n else:\n size_mask_obj = None\n\n matches.extend(self.check_ip_block(ip_block_obj, tree[:] + [0]))\n\n new_count_parameters, new_matches = self.check_count(\n count_obj, tree[:] + [1]\n )\n count_parameters.extend(new_count_parameters)\n matches.extend(new_matches)\n new_size_mask_parameters, new_matches = self.check_size_mask(\n size_mask_obj, tree[:] + [2]\n )\n size_mask_parameters.extend(new_size_mask_parameters)\n matches.extend(new_matches)\n\n else:\n message = \"Cidr should be a list of 2 or 3 elements for {0}\"\n matches.append(\n RuleMatch(tree, message.format(\"/\".join(map(str, tree))))\n )\n else:\n message = \"Cidr should be a list of 2 or 3 elements for {0}\"\n matches.append(\n RuleMatch(tree, message.format(\"/\".join(map(str, tree))))\n )\n\n for count_parameter in set(count_parameters):\n matches.extend(self.check_parameter_count(cfn, count_parameter))\n for size_mask_parameter in set(size_mask_parameters):\n matches.extend(self.check_parameter_size_mask(cfn, size_mask_parameter))\n\n return matches\n", "path": "src/cfnlint/rules/functions/Cidr.py"}]} | 3,492 | 695 |
gh_patches_debug_29073 | rasdani/github-patches | git_diff | python-discord__bot-680 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Write unit tests for `bot/utils/time.py`
Write unit tests for [`bot/utils/time.py`](../blob/master/bot/utils/time.py). This file already has some unit tests, but they are written for `pytest`. The tests are currently located in [`tests/utils/test_time.py`](../blob/master/tests/utils/test_time.py), but should be moved to the appropriate location in the folder hierarchy, `tests/bot/utils/test_time.py` after they have been migrated to the `unittest` framework.
## Implementation details
Please make sure to read the general information in the [meta issue](553) and the [testing README](../blob/master/tests/README.md). We are aiming for a 100% [branch coverage](https://coverage.readthedocs.io/en/stable/branch.html) for this file, but if you think that is not possible, please discuss that in this issue.
## Additional information
If you want to work on this issue, **please make sure that you get assigned to it** by one of the core devs before starting to work on it. We would like to prevent the situation that multiple people are working on the same issue. To get assigned, leave a comment showing your interesting in tackling this issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/utils/time.py`
Content:
```
1 import asyncio
2 import datetime
3 from typing import Optional
4
5 import dateutil.parser
6 from dateutil.relativedelta import relativedelta
7
8 RFC1123_FORMAT = "%a, %d %b %Y %H:%M:%S GMT"
9 INFRACTION_FORMAT = "%Y-%m-%d %H:%M"
10
11
12 def _stringify_time_unit(value: int, unit: str) -> str:
13 """
14 Returns a string to represent a value and time unit, ensuring that it uses the right plural form of the unit.
15
16 >>> _stringify_time_unit(1, "seconds")
17 "1 second"
18 >>> _stringify_time_unit(24, "hours")
19 "24 hours"
20 >>> _stringify_time_unit(0, "minutes")
21 "less than a minute"
22 """
23 if value == 1:
24 return f"{value} {unit[:-1]}"
25 elif value == 0:
26 return f"less than a {unit[:-1]}"
27 else:
28 return f"{value} {unit}"
29
30
31 def humanize_delta(delta: relativedelta, precision: str = "seconds", max_units: int = 6) -> str:
32 """
33 Returns a human-readable version of the relativedelta.
34
35 precision specifies the smallest unit of time to include (e.g. "seconds", "minutes").
36 max_units specifies the maximum number of units of time to include (e.g. 1 may include days but not hours).
37 """
38 if max_units <= 0:
39 raise ValueError("max_units must be positive")
40
41 units = (
42 ("years", delta.years),
43 ("months", delta.months),
44 ("days", delta.days),
45 ("hours", delta.hours),
46 ("minutes", delta.minutes),
47 ("seconds", delta.seconds),
48 )
49
50 # Add the time units that are >0, but stop at accuracy or max_units.
51 time_strings = []
52 unit_count = 0
53 for unit, value in units:
54 if value:
55 time_strings.append(_stringify_time_unit(value, unit))
56 unit_count += 1
57
58 if unit == precision or unit_count >= max_units:
59 break
60
61 # Add the 'and' between the last two units, if necessary
62 if len(time_strings) > 1:
63 time_strings[-1] = f"{time_strings[-2]} and {time_strings[-1]}"
64 del time_strings[-2]
65
66 # If nothing has been found, just make the value 0 precision, e.g. `0 days`.
67 if not time_strings:
68 humanized = _stringify_time_unit(0, precision)
69 else:
70 humanized = ", ".join(time_strings)
71
72 return humanized
73
74
75 def time_since(past_datetime: datetime.datetime, precision: str = "seconds", max_units: int = 6) -> str:
76 """
77 Takes a datetime and returns a human-readable string that describes how long ago that datetime was.
78
79 precision specifies the smallest unit of time to include (e.g. "seconds", "minutes").
80 max_units specifies the maximum number of units of time to include (e.g. 1 may include days but not hours).
81 """
82 now = datetime.datetime.utcnow()
83 delta = abs(relativedelta(now, past_datetime))
84
85 humanized = humanize_delta(delta, precision, max_units)
86
87 return f"{humanized} ago"
88
89
90 def parse_rfc1123(stamp: str) -> datetime.datetime:
91 """Parse RFC1123 time string into datetime."""
92 return datetime.datetime.strptime(stamp, RFC1123_FORMAT).replace(tzinfo=datetime.timezone.utc)
93
94
95 # Hey, this could actually be used in the off_topic_names and reddit cogs :)
96 async def wait_until(time: datetime.datetime, start: Optional[datetime.datetime] = None) -> None:
97 """
98 Wait until a given time.
99
100 :param time: A datetime.datetime object to wait until.
101 :param start: The start from which to calculate the waiting duration. Defaults to UTC time.
102 """
103 delay = time - (start or datetime.datetime.utcnow())
104 delay_seconds = delay.total_seconds()
105
106 # Incorporate a small delay so we don't rapid-fire the event due to time precision errors
107 if delay_seconds > 1.0:
108 await asyncio.sleep(delay_seconds)
109
110
111 def format_infraction(timestamp: str) -> str:
112 """Format an infraction timestamp to a more readable ISO 8601 format."""
113 return dateutil.parser.isoparse(timestamp).strftime(INFRACTION_FORMAT)
114
115
116 def format_infraction_with_duration(
117 expiry: Optional[str],
118 date_from: datetime.datetime = None,
119 max_units: int = 2
120 ) -> Optional[str]:
121 """
122 Format an infraction timestamp to a more readable ISO 8601 format WITH the duration.
123
124 Returns a human-readable version of the duration between datetime.utcnow() and an expiry.
125 Unlike `humanize_delta`, this function will force the `precision` to be `seconds` by not passing it.
126 `max_units` specifies the maximum number of units of time to include (e.g. 1 may include days but not hours).
127 By default, max_units is 2.
128 """
129 if not expiry:
130 return None
131
132 date_from = date_from or datetime.datetime.utcnow()
133 date_to = dateutil.parser.isoparse(expiry).replace(tzinfo=None, microsecond=0)
134
135 expiry_formatted = format_infraction(expiry)
136
137 duration = humanize_delta(relativedelta(date_to, date_from), max_units=max_units)
138 duration_formatted = f" ({duration})" if duration else ''
139
140 return f"{expiry_formatted}{duration_formatted}"
141
142
143 def until_expiration(expiry: Optional[str], max_units: int = 2) -> Optional[str]:
144 """
145 Get the remaining time until infraction's expiration, in a human-readable version of the relativedelta.
146
147 Unlike `humanize_delta`, this function will force the `precision` to be `seconds` by not passing it.
148 `max_units` specifies the maximum number of units of time to include (e.g. 1 may include days but not hours).
149 By default, max_units is 2.
150 """
151 if not expiry:
152 return None
153
154 now = datetime.datetime.utcnow()
155 since = dateutil.parser.isoparse(expiry).replace(tzinfo=None, microsecond=0)
156
157 if since < now:
158 return None
159
160 return humanize_delta(relativedelta(since, now), max_units=max_units)
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bot/utils/time.py b/bot/utils/time.py
--- a/bot/utils/time.py
+++ b/bot/utils/time.py
@@ -115,7 +115,7 @@
def format_infraction_with_duration(
expiry: Optional[str],
- date_from: datetime.datetime = None,
+ date_from: Optional[datetime.datetime] = None,
max_units: int = 2
) -> Optional[str]:
"""
@@ -140,10 +140,15 @@
return f"{expiry_formatted}{duration_formatted}"
-def until_expiration(expiry: Optional[str], max_units: int = 2) -> Optional[str]:
+def until_expiration(
+ expiry: Optional[str],
+ now: Optional[datetime.datetime] = None,
+ max_units: int = 2
+) -> Optional[str]:
"""
Get the remaining time until infraction's expiration, in a human-readable version of the relativedelta.
+ Returns a human-readable version of the remaining duration between datetime.utcnow() and an expiry.
Unlike `humanize_delta`, this function will force the `precision` to be `seconds` by not passing it.
`max_units` specifies the maximum number of units of time to include (e.g. 1 may include days but not hours).
By default, max_units is 2.
@@ -151,7 +156,7 @@
if not expiry:
return None
- now = datetime.datetime.utcnow()
+ now = now or datetime.datetime.utcnow()
since = dateutil.parser.isoparse(expiry).replace(tzinfo=None, microsecond=0)
if since < now:
| {"golden_diff": "diff --git a/bot/utils/time.py b/bot/utils/time.py\n--- a/bot/utils/time.py\n+++ b/bot/utils/time.py\n@@ -115,7 +115,7 @@\n \n def format_infraction_with_duration(\n expiry: Optional[str],\n- date_from: datetime.datetime = None,\n+ date_from: Optional[datetime.datetime] = None,\n max_units: int = 2\n ) -> Optional[str]:\n \"\"\"\n@@ -140,10 +140,15 @@\n return f\"{expiry_formatted}{duration_formatted}\"\n \n \n-def until_expiration(expiry: Optional[str], max_units: int = 2) -> Optional[str]:\n+def until_expiration(\n+ expiry: Optional[str],\n+ now: Optional[datetime.datetime] = None,\n+ max_units: int = 2\n+) -> Optional[str]:\n \"\"\"\n Get the remaining time until infraction's expiration, in a human-readable version of the relativedelta.\n \n+ Returns a human-readable version of the remaining duration between datetime.utcnow() and an expiry.\n Unlike `humanize_delta`, this function will force the `precision` to be `seconds` by not passing it.\n `max_units` specifies the maximum number of units of time to include (e.g. 1 may include days but not hours).\n By default, max_units is 2.\n@@ -151,7 +156,7 @@\n if not expiry:\n return None\n \n- now = datetime.datetime.utcnow()\n+ now = now or datetime.datetime.utcnow()\n since = dateutil.parser.isoparse(expiry).replace(tzinfo=None, microsecond=0)\n \n if since < now:\n", "issue": "Write unit tests for `bot/utils/time.py`\nWrite unit tests for [`bot/utils/time.py`](../blob/master/bot/utils/time.py). This file already has some unit tests, but they are written for `pytest`. The tests are currently located in [`tests/utils/test_time.py`](../blob/master/tests/utils/test_time.py), but should be moved to the appropriate location in the folder hierarchy, `tests/bot/utils/test_time.py` after they have been migrated to the `unittest` framework.\r\n\r\n## Implementation details\r\nPlease make sure to read the general information in the [meta issue](553) and the [testing README](../blob/master/tests/README.md). We are aiming for a 100% [branch coverage](https://coverage.readthedocs.io/en/stable/branch.html) for this file, but if you think that is not possible, please discuss that in this issue.\r\n\r\n## Additional information\r\nIf you want to work on this issue, **please make sure that you get assigned to it** by one of the core devs before starting to work on it. We would like to prevent the situation that multiple people are working on the same issue. 
To get assigned, leave a comment showing your interesting in tackling this issue.\r\n\n", "before_files": [{"content": "import asyncio\nimport datetime\nfrom typing import Optional\n\nimport dateutil.parser\nfrom dateutil.relativedelta import relativedelta\n\nRFC1123_FORMAT = \"%a, %d %b %Y %H:%M:%S GMT\"\nINFRACTION_FORMAT = \"%Y-%m-%d %H:%M\"\n\n\ndef _stringify_time_unit(value: int, unit: str) -> str:\n \"\"\"\n Returns a string to represent a value and time unit, ensuring that it uses the right plural form of the unit.\n\n >>> _stringify_time_unit(1, \"seconds\")\n \"1 second\"\n >>> _stringify_time_unit(24, \"hours\")\n \"24 hours\"\n >>> _stringify_time_unit(0, \"minutes\")\n \"less than a minute\"\n \"\"\"\n if value == 1:\n return f\"{value} {unit[:-1]}\"\n elif value == 0:\n return f\"less than a {unit[:-1]}\"\n else:\n return f\"{value} {unit}\"\n\n\ndef humanize_delta(delta: relativedelta, precision: str = \"seconds\", max_units: int = 6) -> str:\n \"\"\"\n Returns a human-readable version of the relativedelta.\n\n precision specifies the smallest unit of time to include (e.g. \"seconds\", \"minutes\").\n max_units specifies the maximum number of units of time to include (e.g. 1 may include days but not hours).\n \"\"\"\n if max_units <= 0:\n raise ValueError(\"max_units must be positive\")\n\n units = (\n (\"years\", delta.years),\n (\"months\", delta.months),\n (\"days\", delta.days),\n (\"hours\", delta.hours),\n (\"minutes\", delta.minutes),\n (\"seconds\", delta.seconds),\n )\n\n # Add the time units that are >0, but stop at accuracy or max_units.\n time_strings = []\n unit_count = 0\n for unit, value in units:\n if value:\n time_strings.append(_stringify_time_unit(value, unit))\n unit_count += 1\n\n if unit == precision or unit_count >= max_units:\n break\n\n # Add the 'and' between the last two units, if necessary\n if len(time_strings) > 1:\n time_strings[-1] = f\"{time_strings[-2]} and {time_strings[-1]}\"\n del time_strings[-2]\n\n # If nothing has been found, just make the value 0 precision, e.g. `0 days`.\n if not time_strings:\n humanized = _stringify_time_unit(0, precision)\n else:\n humanized = \", \".join(time_strings)\n\n return humanized\n\n\ndef time_since(past_datetime: datetime.datetime, precision: str = \"seconds\", max_units: int = 6) -> str:\n \"\"\"\n Takes a datetime and returns a human-readable string that describes how long ago that datetime was.\n\n precision specifies the smallest unit of time to include (e.g. \"seconds\", \"minutes\").\n max_units specifies the maximum number of units of time to include (e.g. 1 may include days but not hours).\n \"\"\"\n now = datetime.datetime.utcnow()\n delta = abs(relativedelta(now, past_datetime))\n\n humanized = humanize_delta(delta, precision, max_units)\n\n return f\"{humanized} ago\"\n\n\ndef parse_rfc1123(stamp: str) -> datetime.datetime:\n \"\"\"Parse RFC1123 time string into datetime.\"\"\"\n return datetime.datetime.strptime(stamp, RFC1123_FORMAT).replace(tzinfo=datetime.timezone.utc)\n\n\n# Hey, this could actually be used in the off_topic_names and reddit cogs :)\nasync def wait_until(time: datetime.datetime, start: Optional[datetime.datetime] = None) -> None:\n \"\"\"\n Wait until a given time.\n\n :param time: A datetime.datetime object to wait until.\n :param start: The start from which to calculate the waiting duration. 
Defaults to UTC time.\n \"\"\"\n delay = time - (start or datetime.datetime.utcnow())\n delay_seconds = delay.total_seconds()\n\n # Incorporate a small delay so we don't rapid-fire the event due to time precision errors\n if delay_seconds > 1.0:\n await asyncio.sleep(delay_seconds)\n\n\ndef format_infraction(timestamp: str) -> str:\n \"\"\"Format an infraction timestamp to a more readable ISO 8601 format.\"\"\"\n return dateutil.parser.isoparse(timestamp).strftime(INFRACTION_FORMAT)\n\n\ndef format_infraction_with_duration(\n expiry: Optional[str],\n date_from: datetime.datetime = None,\n max_units: int = 2\n) -> Optional[str]:\n \"\"\"\n Format an infraction timestamp to a more readable ISO 8601 format WITH the duration.\n\n Returns a human-readable version of the duration between datetime.utcnow() and an expiry.\n Unlike `humanize_delta`, this function will force the `precision` to be `seconds` by not passing it.\n `max_units` specifies the maximum number of units of time to include (e.g. 1 may include days but not hours).\n By default, max_units is 2.\n \"\"\"\n if not expiry:\n return None\n\n date_from = date_from or datetime.datetime.utcnow()\n date_to = dateutil.parser.isoparse(expiry).replace(tzinfo=None, microsecond=0)\n\n expiry_formatted = format_infraction(expiry)\n\n duration = humanize_delta(relativedelta(date_to, date_from), max_units=max_units)\n duration_formatted = f\" ({duration})\" if duration else ''\n\n return f\"{expiry_formatted}{duration_formatted}\"\n\n\ndef until_expiration(expiry: Optional[str], max_units: int = 2) -> Optional[str]:\n \"\"\"\n Get the remaining time until infraction's expiration, in a human-readable version of the relativedelta.\n\n Unlike `humanize_delta`, this function will force the `precision` to be `seconds` by not passing it.\n `max_units` specifies the maximum number of units of time to include (e.g. 1 may include days but not hours).\n By default, max_units is 2.\n \"\"\"\n if not expiry:\n return None\n\n now = datetime.datetime.utcnow()\n since = dateutil.parser.isoparse(expiry).replace(tzinfo=None, microsecond=0)\n\n if since < now:\n return None\n\n return humanize_delta(relativedelta(since, now), max_units=max_units)\n", "path": "bot/utils/time.py"}], "after_files": [{"content": "import asyncio\nimport datetime\nfrom typing import Optional\n\nimport dateutil.parser\nfrom dateutil.relativedelta import relativedelta\n\nRFC1123_FORMAT = \"%a, %d %b %Y %H:%M:%S GMT\"\nINFRACTION_FORMAT = \"%Y-%m-%d %H:%M\"\n\n\ndef _stringify_time_unit(value: int, unit: str) -> str:\n \"\"\"\n Returns a string to represent a value and time unit, ensuring that it uses the right plural form of the unit.\n\n >>> _stringify_time_unit(1, \"seconds\")\n \"1 second\"\n >>> _stringify_time_unit(24, \"hours\")\n \"24 hours\"\n >>> _stringify_time_unit(0, \"minutes\")\n \"less than a minute\"\n \"\"\"\n if value == 1:\n return f\"{value} {unit[:-1]}\"\n elif value == 0:\n return f\"less than a {unit[:-1]}\"\n else:\n return f\"{value} {unit}\"\n\n\ndef humanize_delta(delta: relativedelta, precision: str = \"seconds\", max_units: int = 6) -> str:\n \"\"\"\n Returns a human-readable version of the relativedelta.\n\n precision specifies the smallest unit of time to include (e.g. \"seconds\", \"minutes\").\n max_units specifies the maximum number of units of time to include (e.g. 
1 may include days but not hours).\n \"\"\"\n if max_units <= 0:\n raise ValueError(\"max_units must be positive\")\n\n units = (\n (\"years\", delta.years),\n (\"months\", delta.months),\n (\"days\", delta.days),\n (\"hours\", delta.hours),\n (\"minutes\", delta.minutes),\n (\"seconds\", delta.seconds),\n )\n\n # Add the time units that are >0, but stop at accuracy or max_units.\n time_strings = []\n unit_count = 0\n for unit, value in units:\n if value:\n time_strings.append(_stringify_time_unit(value, unit))\n unit_count += 1\n\n if unit == precision or unit_count >= max_units:\n break\n\n # Add the 'and' between the last two units, if necessary\n if len(time_strings) > 1:\n time_strings[-1] = f\"{time_strings[-2]} and {time_strings[-1]}\"\n del time_strings[-2]\n\n # If nothing has been found, just make the value 0 precision, e.g. `0 days`.\n if not time_strings:\n humanized = _stringify_time_unit(0, precision)\n else:\n humanized = \", \".join(time_strings)\n\n return humanized\n\n\ndef time_since(past_datetime: datetime.datetime, precision: str = \"seconds\", max_units: int = 6) -> str:\n \"\"\"\n Takes a datetime and returns a human-readable string that describes how long ago that datetime was.\n\n precision specifies the smallest unit of time to include (e.g. \"seconds\", \"minutes\").\n max_units specifies the maximum number of units of time to include (e.g. 1 may include days but not hours).\n \"\"\"\n now = datetime.datetime.utcnow()\n delta = abs(relativedelta(now, past_datetime))\n\n humanized = humanize_delta(delta, precision, max_units)\n\n return f\"{humanized} ago\"\n\n\ndef parse_rfc1123(stamp: str) -> datetime.datetime:\n \"\"\"Parse RFC1123 time string into datetime.\"\"\"\n return datetime.datetime.strptime(stamp, RFC1123_FORMAT).replace(tzinfo=datetime.timezone.utc)\n\n\n# Hey, this could actually be used in the off_topic_names and reddit cogs :)\nasync def wait_until(time: datetime.datetime, start: Optional[datetime.datetime] = None) -> None:\n \"\"\"\n Wait until a given time.\n\n :param time: A datetime.datetime object to wait until.\n :param start: The start from which to calculate the waiting duration. Defaults to UTC time.\n \"\"\"\n delay = time - (start or datetime.datetime.utcnow())\n delay_seconds = delay.total_seconds()\n\n # Incorporate a small delay so we don't rapid-fire the event due to time precision errors\n if delay_seconds > 1.0:\n await asyncio.sleep(delay_seconds)\n\n\ndef format_infraction(timestamp: str) -> str:\n \"\"\"Format an infraction timestamp to a more readable ISO 8601 format.\"\"\"\n return dateutil.parser.isoparse(timestamp).strftime(INFRACTION_FORMAT)\n\n\ndef format_infraction_with_duration(\n expiry: Optional[str],\n date_from: Optional[datetime.datetime] = None,\n max_units: int = 2\n) -> Optional[str]:\n \"\"\"\n Format an infraction timestamp to a more readable ISO 8601 format WITH the duration.\n\n Returns a human-readable version of the duration between datetime.utcnow() and an expiry.\n Unlike `humanize_delta`, this function will force the `precision` to be `seconds` by not passing it.\n `max_units` specifies the maximum number of units of time to include (e.g. 
1 may include days but not hours).\n By default, max_units is 2.\n \"\"\"\n if not expiry:\n return None\n\n date_from = date_from or datetime.datetime.utcnow()\n date_to = dateutil.parser.isoparse(expiry).replace(tzinfo=None, microsecond=0)\n\n expiry_formatted = format_infraction(expiry)\n\n duration = humanize_delta(relativedelta(date_to, date_from), max_units=max_units)\n duration_formatted = f\" ({duration})\" if duration else ''\n\n return f\"{expiry_formatted}{duration_formatted}\"\n\n\ndef until_expiration(\n expiry: Optional[str],\n now: Optional[datetime.datetime] = None,\n max_units: int = 2\n) -> Optional[str]:\n \"\"\"\n Get the remaining time until infraction's expiration, in a human-readable version of the relativedelta.\n\n Returns a human-readable version of the remaining duration between datetime.utcnow() and an expiry.\n Unlike `humanize_delta`, this function will force the `precision` to be `seconds` by not passing it.\n `max_units` specifies the maximum number of units of time to include (e.g. 1 may include days but not hours).\n By default, max_units is 2.\n \"\"\"\n if not expiry:\n return None\n\n now = now or datetime.datetime.utcnow()\n since = dateutil.parser.isoparse(expiry).replace(tzinfo=None, microsecond=0)\n\n if since < now:\n return None\n\n return humanize_delta(relativedelta(since, now), max_units=max_units)\n", "path": "bot/utils/time.py"}]} | 2,301 | 366 |
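The golden diff above makes `until_expiration` testable by letting callers inject `now` instead of always reading `datetime.datetime.utcnow()`. Below is a minimal `unittest`-style sketch of the kind of deterministic test this enables; the import path, keyword arguments, and expected strings are assumptions based on the patched module shown above.

```python
import datetime
import unittest

from bot.utils import time as time_utils  # assumed import path for bot/utils/time.py


class UntilExpirationTests(unittest.TestCase):
    def test_returns_humanized_delta_for_future_expiry(self):
        # Pin "now" so the result does not depend on the wall clock.
        now = datetime.datetime(2019, 1, 1, 12, 0)
        self.assertEqual(
            time_utils.until_expiration("2019-01-02T12:00:00", now=now),
            "1 day",
        )

    def test_returns_none_for_past_expiry(self):
        now = datetime.datetime(2019, 1, 3, 12, 0)
        self.assertIsNone(time_utils.until_expiration("2019-01-02T12:00:00", now=now))


if __name__ == "__main__":
    unittest.main()
```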
gh_patches_debug_7613 | rasdani/github-patches | git_diff | OpenNMT__OpenNMT-py-969 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error when using ./tools/embeddings_to_torch.py
**I am getting the following error.
Is it harmful, and does anyone know how to solve it?**
[2018-09-24 21:06:09,964 INFO] From: ./glove_experiment/data.vocab.pt
[2018-09-24 21:06:09,964 INFO] * source vocab: 50002 words
[2018-09-24 21:06:09,964 INFO] * target vocab: 50004 words
[2018-09-24 21:06:42,008 INFO] Got 400000 encryption embeddings from ./glove/original.txt
[2018-09-24 21:08:21,394 INFO] Got 1142358 decryption embeddings from ./glove/wiki.fr.vec
[2018-09-24 21:08:21,699 INFO]
Matching:
[2018-09-24 21:08:21,699 INFO] * enc: 19625 match, 30377 missing, (39.25%)
[2018-09-24 21:08:21,699 INFO] * dec: 1071 match, 48933 missing, (2.14%)
[2018-09-24 21:08:21,699 INFO]
Filtered embeddings:
--- Logging error ---
Traceback (most recent call last):
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 993, in emit
msg = self.format(record)
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 839, in format
return fmt.format(record)
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 576, in format
record.message = record.getMessage()
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 338, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "./tools/embeddings_to_torch.py", line 148, in <module>
main()
File "./tools/embeddings_to_torch.py", line 134, in main
logger.info("\t* enc: ", filtered_enc_embeddings.size())
Message: '\t* enc: '
Arguments: (torch.Size([50002, 300]),)
--- Logging error ---
Traceback (most recent call last):
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 993, in emit
msg = self.format(record)
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 839, in format
return fmt.format(record)
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 576, in format
record.message = record.getMessage()
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 338, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "./tools/embeddings_to_torch.py", line 148, in <module>
main()
File "./tools/embeddings_to_torch.py", line 134, in main
logger.info("\t* enc: ", filtered_enc_embeddings.size())
Message: '\t* enc: '
Arguments: (torch.Size([50002, 300]),)
--- Logging error ---
Traceback (most recent call last):
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 993, in emit
msg = self.format(record)
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 839, in format
return fmt.format(record)
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 576, in format
record.message = record.getMessage()
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 338, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "./tools/embeddings_to_torch.py", line 148, in <module>
main()
File "./tools/embeddings_to_torch.py", line 135, in main
logger.info("\t* dec: ", filtered_dec_embeddings.size())
Message: '\t* dec: '
Arguments: (torch.Size([50004, 300]),)
--- Logging error ---
Traceback (most recent call last):
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 993, in emit
msg = self.format(record)
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 839, in format
return fmt.format(record)
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 576, in format
record.message = record.getMessage()
File "/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py", line 338, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "./tools/embeddings_to_torch.py", line 148, in <module>
main()
File "./tools/embeddings_to_torch.py", line 135, in main
logger.info("\t* dec: ", filtered_dec_embeddings.size())
Message: '\t* dec: '
Arguments: (torch.Size([50004, 300]),)
[2018-09-24 21:08:21,701 INFO]
Saving embedding as:
* enc: ./glove_experiment/embeddings.enc.pt
* dec: ./glove_experiment/embeddings.dec.pt
[2018-09-24 21:08:22,065 INFO]
Done.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/embeddings_to_torch.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import print_function
4 from __future__ import division
5 import six
6 import sys
7 import numpy as np
8 import argparse
9 import torch
10 from onmt.utils.logging import init_logger, logger
11
12
13 def get_vocabs(dict_file):
14 vocabs = torch.load(dict_file)
15
16 enc_vocab, dec_vocab = None, None
17
18 # the vocab object is a list of tuple (name, torchtext.Vocab)
19 # we iterate over this list and associate vocabularies based on the name
20 for vocab in vocabs:
21 if vocab[0] == 'src':
22 enc_vocab = vocab[1]
23 if vocab[0] == 'tgt':
24 dec_vocab = vocab[1]
25 assert enc_vocab is not None and dec_vocab is not None
26
27 logger.info("From: %s" % dict_file)
28 logger.info("\t* source vocab: %d words" % len(enc_vocab))
29 logger.info("\t* target vocab: %d words" % len(dec_vocab))
30
31 return enc_vocab, dec_vocab
32
33
34 def get_embeddings(file_enc, opt, flag):
35 embs = dict()
36 if flag == 'enc':
37 for (i, l) in enumerate(open(file_enc, 'rb')):
38 if i < opt.skip_lines:
39 continue
40 if not l:
41 break
42 if len(l) == 0:
43 continue
44
45 l_split = l.decode('utf8').strip().split(' ')
46 if len(l_split) == 2:
47 continue
48 embs[l_split[0]] = [float(em) for em in l_split[1:]]
49 logger.info("Got {} encryption embeddings from {}".format(len(embs),
50 file_enc))
51 else:
52
53 for (i, l) in enumerate(open(file_enc, 'rb')):
54 if not l:
55 break
56 if len(l) == 0:
57 continue
58
59 l_split = l.decode('utf8').strip().split(' ')
60 if len(l_split) == 2:
61 continue
62 embs[l_split[0]] = [float(em) for em in l_split[1:]]
63 logger.info("Got {} decryption embeddings from {}".format(len(embs),
64 file_enc))
65 return embs
66
67
68 def match_embeddings(vocab, emb, opt):
69 dim = len(six.next(six.itervalues(emb)))
70 filtered_embeddings = np.zeros((len(vocab), dim))
71 count = {"match": 0, "miss": 0}
72 for w, w_id in vocab.stoi.items():
73 if w in emb:
74 filtered_embeddings[w_id] = emb[w]
75 count['match'] += 1
76 else:
77 if opt.verbose:
78 logger.info(u"not found:\t{}".format(w), file=sys.stderr)
79 count['miss'] += 1
80
81 return torch.Tensor(filtered_embeddings), count
82
83
84 TYPES = ["GloVe", "word2vec"]
85
86
87 def main():
88
89 parser = argparse.ArgumentParser(description='embeddings_to_torch.py')
90 parser.add_argument('-emb_file_enc', required=True,
91 help="source Embeddings from this file")
92 parser.add_argument('-emb_file_dec', required=True,
93 help="target Embeddings from this file")
94 parser.add_argument('-output_file', required=True,
95 help="Output file for the prepared data")
96 parser.add_argument('-dict_file', required=True,
97 help="Dictionary file")
98 parser.add_argument('-verbose', action="store_true", default=False)
99 parser.add_argument('-skip_lines', type=int, default=0,
100 help="Skip first lines of the embedding file")
101 parser.add_argument('-type', choices=TYPES, default="GloVe")
102 opt = parser.parse_args()
103
104 enc_vocab, dec_vocab = get_vocabs(opt.dict_file)
105 if opt.type == "word2vec":
106 opt.skip_lines = 1
107
108 embeddings_enc = get_embeddings(opt.emb_file_enc, opt, flag='enc')
109 embeddings_dec = get_embeddings(opt.emb_file_dec, opt, flag='dec')
110
111 filtered_enc_embeddings, enc_count = match_embeddings(enc_vocab,
112 embeddings_enc,
113 opt)
114 filtered_dec_embeddings, dec_count = match_embeddings(dec_vocab,
115 embeddings_dec,
116 opt)
117 logger.info("\nMatching: ")
118 match_percent = [_['match'] / (_['match'] + _['miss']) * 100
119 for _ in [enc_count, dec_count]]
120 logger.info("\t* enc: %d match, %d missing, (%.2f%%)"
121 % (enc_count['match'],
122 enc_count['miss'],
123 match_percent[0]))
124 logger.info("\t* dec: %d match, %d missing, (%.2f%%)"
125 % (dec_count['match'],
126 dec_count['miss'],
127 match_percent[1]))
128
129 logger.info("\nFiltered embeddings:")
130 logger.info("\t* enc: ", filtered_enc_embeddings.size())
131 logger.info("\t* dec: ", filtered_dec_embeddings.size())
132
133 enc_output_file = opt.output_file + ".enc.pt"
134 dec_output_file = opt.output_file + ".dec.pt"
135 logger.info("\nSaving embedding as:\n\t* enc: %s\n\t* dec: %s"
136 % (enc_output_file, dec_output_file))
137 torch.save(filtered_enc_embeddings, enc_output_file)
138 torch.save(filtered_dec_embeddings, dec_output_file)
139 logger.info("\nDone.")
140
141
142 if __name__ == "__main__":
143 init_logger('embeddings_to_torch.log')
144 main()
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/embeddings_to_torch.py b/tools/embeddings_to_torch.py
--- a/tools/embeddings_to_torch.py
+++ b/tools/embeddings_to_torch.py
@@ -127,8 +127,8 @@
match_percent[1]))
logger.info("\nFiltered embeddings:")
- logger.info("\t* enc: ", filtered_enc_embeddings.size())
- logger.info("\t* dec: ", filtered_dec_embeddings.size())
+ logger.info("\t* enc: %s" % str(filtered_enc_embeddings.size()))
+ logger.info("\t* dec: %s" % str(filtered_dec_embeddings.size()))
enc_output_file = opt.output_file + ".enc.pt"
dec_output_file = opt.output_file + ".dec.pt"
| {"golden_diff": "diff --git a/tools/embeddings_to_torch.py b/tools/embeddings_to_torch.py\n--- a/tools/embeddings_to_torch.py\n+++ b/tools/embeddings_to_torch.py\n@@ -127,8 +127,8 @@\n match_percent[1]))\n \n logger.info(\"\\nFiltered embeddings:\")\n- logger.info(\"\\t* enc: \", filtered_enc_embeddings.size())\n- logger.info(\"\\t* dec: \", filtered_dec_embeddings.size())\n+ logger.info(\"\\t* enc: %s\" % str(filtered_enc_embeddings.size()))\n+ logger.info(\"\\t* dec: %s\" % str(filtered_dec_embeddings.size()))\n \n enc_output_file = opt.output_file + \".enc.pt\"\n dec_output_file = opt.output_file + \".dec.pt\"\n", "issue": "Error when using ./tools/embeddings_to_torch.py\n**I am getting the following error.\r\nIs it harmful and anyone know how to solve it?**\r\n\r\n\r\n[2018-09-24 21:06:09,964 INFO] From: ./glove_experiment/data.vocab.pt\r\n[2018-09-24 21:06:09,964 INFO] \t* source vocab: 50002 words\r\n[2018-09-24 21:06:09,964 INFO] \t* target vocab: 50004 words\r\n[2018-09-24 21:06:42,008 INFO] Got 400000 encryption embeddings from ./glove/original.txt\r\n[2018-09-24 21:08:21,394 INFO] Got 1142358 decryption embeddings from ./glove/wiki.fr.vec\r\n[2018-09-24 21:08:21,699 INFO] \r\nMatching: \r\n[2018-09-24 21:08:21,699 INFO] \t* enc: 19625 match, 30377 missing, (39.25%)\r\n[2018-09-24 21:08:21,699 INFO] \t* dec: 1071 match, 48933 missing, (2.14%)\r\n[2018-09-24 21:08:21,699 INFO] \r\nFiltered embeddings:\r\n--- Logging error ---\r\nTraceback (most recent call last):\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 993, in emit\r\n msg = self.format(record)\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 839, in format\r\n return fmt.format(record)\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 576, in format\r\n record.message = record.getMessage()\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 338, in getMessage\r\n msg = msg % self.args\r\nTypeError: not all arguments converted during string formatting\r\nCall stack:\r\n File \"./tools/embeddings_to_torch.py\", line 148, in <module>\r\n main()\r\n File \"./tools/embeddings_to_torch.py\", line 134, in main\r\n logger.info(\"\\t* enc: \", filtered_enc_embeddings.size())\r\nMessage: '\\t* enc: '\r\nArguments: (torch.Size([50002, 300]),)\r\n--- Logging error ---\r\nTraceback (most recent call last):\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 993, in emit\r\n msg = self.format(record)\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 839, in format\r\n return fmt.format(record)\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 576, in format\r\n record.message = record.getMessage()\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 338, in getMessage\r\n msg = msg % self.args\r\nTypeError: not all arguments converted during string formatting\r\nCall stack:\r\n File \"./tools/embeddings_to_torch.py\", line 148, in <module>\r\n main()\r\n File \"./tools/embeddings_to_torch.py\", line 134, in main\r\n logger.info(\"\\t* enc: \", filtered_enc_embeddings.size())\r\nMessage: '\\t* enc: '\r\nArguments: (torch.Size([50002, 300]),)\r\n--- Logging error ---\r\nTraceback (most recent call last):\r\n File 
\"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 993, in emit\r\n msg = self.format(record)\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 839, in format\r\n return fmt.format(record)\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 576, in format\r\n record.message = record.getMessage()\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 338, in getMessage\r\n msg = msg % self.args\r\nTypeError: not all arguments converted during string formatting\r\nCall stack:\r\n File \"./tools/embeddings_to_torch.py\", line 148, in <module>\r\n main()\r\n File \"./tools/embeddings_to_torch.py\", line 135, in main\r\n logger.info(\"\\t* dec: \", filtered_dec_embeddings.size())\r\nMessage: '\\t* dec: '\r\nArguments: (torch.Size([50004, 300]),)\r\n--- Logging error ---\r\nTraceback (most recent call last):\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 993, in emit\r\n msg = self.format(record)\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 839, in format\r\n return fmt.format(record)\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 576, in format\r\n record.message = record.getMessage()\r\n File \"/home/eeb439/anaconda3/envs/pytorch0.4text0.3/lib/python3.6/logging/__init__.py\", line 338, in getMessage\r\n msg = msg % self.args\r\nTypeError: not all arguments converted during string formatting\r\nCall stack:\r\n File \"./tools/embeddings_to_torch.py\", line 148, in <module>\r\n main()\r\n File \"./tools/embeddings_to_torch.py\", line 135, in main\r\n logger.info(\"\\t* dec: \", filtered_dec_embeddings.size())\r\nMessage: '\\t* dec: '\r\nArguments: (torch.Size([50004, 300]),)\r\n[2018-09-24 21:08:21,701 INFO] \r\nSaving embedding as:\r\n\t* enc: ./glove_experiment/embeddings.enc.pt\r\n\t* dec: ./glove_experiment/embeddings.dec.pt\r\n[2018-09-24 21:08:22,065 INFO] \r\nDone.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import print_function\nfrom __future__ import division\nimport six\nimport sys\nimport numpy as np\nimport argparse\nimport torch\nfrom onmt.utils.logging import init_logger, logger\n\n\ndef get_vocabs(dict_file):\n vocabs = torch.load(dict_file)\n\n enc_vocab, dec_vocab = None, None\n\n # the vocab object is a list of tuple (name, torchtext.Vocab)\n # we iterate over this list and associate vocabularies based on the name\n for vocab in vocabs:\n if vocab[0] == 'src':\n enc_vocab = vocab[1]\n if vocab[0] == 'tgt':\n dec_vocab = vocab[1]\n assert enc_vocab is not None and dec_vocab is not None\n\n logger.info(\"From: %s\" % dict_file)\n logger.info(\"\\t* source vocab: %d words\" % len(enc_vocab))\n logger.info(\"\\t* target vocab: %d words\" % len(dec_vocab))\n\n return enc_vocab, dec_vocab\n\n\ndef get_embeddings(file_enc, opt, flag):\n embs = dict()\n if flag == 'enc':\n for (i, l) in enumerate(open(file_enc, 'rb')):\n if i < opt.skip_lines:\n continue\n if not l:\n break\n if len(l) == 0:\n continue\n\n l_split = l.decode('utf8').strip().split(' ')\n if len(l_split) == 2:\n continue\n embs[l_split[0]] = [float(em) for em in l_split[1:]]\n logger.info(\"Got {} encryption embeddings from {}\".format(len(embs),\n file_enc))\n else:\n\n for (i, l) in enumerate(open(file_enc, 'rb')):\n if not l:\n 
break\n if len(l) == 0:\n continue\n\n l_split = l.decode('utf8').strip().split(' ')\n if len(l_split) == 2:\n continue\n embs[l_split[0]] = [float(em) for em in l_split[1:]]\n logger.info(\"Got {} decryption embeddings from {}\".format(len(embs),\n file_enc))\n return embs\n\n\ndef match_embeddings(vocab, emb, opt):\n dim = len(six.next(six.itervalues(emb)))\n filtered_embeddings = np.zeros((len(vocab), dim))\n count = {\"match\": 0, \"miss\": 0}\n for w, w_id in vocab.stoi.items():\n if w in emb:\n filtered_embeddings[w_id] = emb[w]\n count['match'] += 1\n else:\n if opt.verbose:\n logger.info(u\"not found:\\t{}\".format(w), file=sys.stderr)\n count['miss'] += 1\n\n return torch.Tensor(filtered_embeddings), count\n\n\nTYPES = [\"GloVe\", \"word2vec\"]\n\n\ndef main():\n\n parser = argparse.ArgumentParser(description='embeddings_to_torch.py')\n parser.add_argument('-emb_file_enc', required=True,\n help=\"source Embeddings from this file\")\n parser.add_argument('-emb_file_dec', required=True,\n help=\"target Embeddings from this file\")\n parser.add_argument('-output_file', required=True,\n help=\"Output file for the prepared data\")\n parser.add_argument('-dict_file', required=True,\n help=\"Dictionary file\")\n parser.add_argument('-verbose', action=\"store_true\", default=False)\n parser.add_argument('-skip_lines', type=int, default=0,\n help=\"Skip first lines of the embedding file\")\n parser.add_argument('-type', choices=TYPES, default=\"GloVe\")\n opt = parser.parse_args()\n\n enc_vocab, dec_vocab = get_vocabs(opt.dict_file)\n if opt.type == \"word2vec\":\n opt.skip_lines = 1\n\n embeddings_enc = get_embeddings(opt.emb_file_enc, opt, flag='enc')\n embeddings_dec = get_embeddings(opt.emb_file_dec, opt, flag='dec')\n\n filtered_enc_embeddings, enc_count = match_embeddings(enc_vocab,\n embeddings_enc,\n opt)\n filtered_dec_embeddings, dec_count = match_embeddings(dec_vocab,\n embeddings_dec,\n opt)\n logger.info(\"\\nMatching: \")\n match_percent = [_['match'] / (_['match'] + _['miss']) * 100\n for _ in [enc_count, dec_count]]\n logger.info(\"\\t* enc: %d match, %d missing, (%.2f%%)\"\n % (enc_count['match'],\n enc_count['miss'],\n match_percent[0]))\n logger.info(\"\\t* dec: %d match, %d missing, (%.2f%%)\"\n % (dec_count['match'],\n dec_count['miss'],\n match_percent[1]))\n\n logger.info(\"\\nFiltered embeddings:\")\n logger.info(\"\\t* enc: \", filtered_enc_embeddings.size())\n logger.info(\"\\t* dec: \", filtered_dec_embeddings.size())\n\n enc_output_file = opt.output_file + \".enc.pt\"\n dec_output_file = opt.output_file + \".dec.pt\"\n logger.info(\"\\nSaving embedding as:\\n\\t* enc: %s\\n\\t* dec: %s\"\n % (enc_output_file, dec_output_file))\n torch.save(filtered_enc_embeddings, enc_output_file)\n torch.save(filtered_dec_embeddings, dec_output_file)\n logger.info(\"\\nDone.\")\n\n\nif __name__ == \"__main__\":\n init_logger('embeddings_to_torch.log')\n main()\n", "path": "tools/embeddings_to_torch.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import print_function\nfrom __future__ import division\nimport six\nimport sys\nimport numpy as np\nimport argparse\nimport torch\nfrom onmt.utils.logging import init_logger, logger\n\n\ndef get_vocabs(dict_file):\n vocabs = torch.load(dict_file)\n\n enc_vocab, dec_vocab = None, None\n\n # the vocab object is a list of tuple (name, torchtext.Vocab)\n # we iterate over this list and associate vocabularies based on the name\n for vocab in vocabs:\n if vocab[0] == 'src':\n enc_vocab = 
vocab[1]\n if vocab[0] == 'tgt':\n dec_vocab = vocab[1]\n assert enc_vocab is not None and dec_vocab is not None\n\n logger.info(\"From: %s\" % dict_file)\n logger.info(\"\\t* source vocab: %d words\" % len(enc_vocab))\n logger.info(\"\\t* target vocab: %d words\" % len(dec_vocab))\n\n return enc_vocab, dec_vocab\n\n\ndef get_embeddings(file_enc, opt, flag):\n embs = dict()\n if flag == 'enc':\n for (i, l) in enumerate(open(file_enc, 'rb')):\n if i < opt.skip_lines:\n continue\n if not l:\n break\n if len(l) == 0:\n continue\n\n l_split = l.decode('utf8').strip().split(' ')\n if len(l_split) == 2:\n continue\n embs[l_split[0]] = [float(em) for em in l_split[1:]]\n logger.info(\"Got {} encryption embeddings from {}\".format(len(embs),\n file_enc))\n else:\n\n for (i, l) in enumerate(open(file_enc, 'rb')):\n if not l:\n break\n if len(l) == 0:\n continue\n\n l_split = l.decode('utf8').strip().split(' ')\n if len(l_split) == 2:\n continue\n embs[l_split[0]] = [float(em) for em in l_split[1:]]\n logger.info(\"Got {} decryption embeddings from {}\".format(len(embs),\n file_enc))\n return embs\n\n\ndef match_embeddings(vocab, emb, opt):\n dim = len(six.next(six.itervalues(emb)))\n filtered_embeddings = np.zeros((len(vocab), dim))\n count = {\"match\": 0, \"miss\": 0}\n for w, w_id in vocab.stoi.items():\n if w in emb:\n filtered_embeddings[w_id] = emb[w]\n count['match'] += 1\n else:\n if opt.verbose:\n logger.info(u\"not found:\\t{}\".format(w), file=sys.stderr)\n count['miss'] += 1\n\n return torch.Tensor(filtered_embeddings), count\n\n\nTYPES = [\"GloVe\", \"word2vec\"]\n\n\ndef main():\n\n parser = argparse.ArgumentParser(description='embeddings_to_torch.py')\n parser.add_argument('-emb_file_enc', required=True,\n help=\"source Embeddings from this file\")\n parser.add_argument('-emb_file_dec', required=True,\n help=\"target Embeddings from this file\")\n parser.add_argument('-output_file', required=True,\n help=\"Output file for the prepared data\")\n parser.add_argument('-dict_file', required=True,\n help=\"Dictionary file\")\n parser.add_argument('-verbose', action=\"store_true\", default=False)\n parser.add_argument('-skip_lines', type=int, default=0,\n help=\"Skip first lines of the embedding file\")\n parser.add_argument('-type', choices=TYPES, default=\"GloVe\")\n opt = parser.parse_args()\n\n enc_vocab, dec_vocab = get_vocabs(opt.dict_file)\n if opt.type == \"word2vec\":\n opt.skip_lines = 1\n\n embeddings_enc = get_embeddings(opt.emb_file_enc, opt, flag='enc')\n embeddings_dec = get_embeddings(opt.emb_file_dec, opt, flag='dec')\n\n filtered_enc_embeddings, enc_count = match_embeddings(enc_vocab,\n embeddings_enc,\n opt)\n filtered_dec_embeddings, dec_count = match_embeddings(dec_vocab,\n embeddings_dec,\n opt)\n logger.info(\"\\nMatching: \")\n match_percent = [_['match'] / (_['match'] + _['miss']) * 100\n for _ in [enc_count, dec_count]]\n logger.info(\"\\t* enc: %d match, %d missing, (%.2f%%)\"\n % (enc_count['match'],\n enc_count['miss'],\n match_percent[0]))\n logger.info(\"\\t* dec: %d match, %d missing, (%.2f%%)\"\n % (dec_count['match'],\n dec_count['miss'],\n match_percent[1]))\n\n logger.info(\"\\nFiltered embeddings:\")\n logger.info(\"\\t* enc: %s\" % str(filtered_enc_embeddings.size()))\n logger.info(\"\\t* dec: %s\" % str(filtered_dec_embeddings.size()))\n\n enc_output_file = opt.output_file + \".enc.pt\"\n dec_output_file = opt.output_file + \".dec.pt\"\n logger.info(\"\\nSaving embedding as:\\n\\t* enc: %s\\n\\t* dec: %s\"\n % (enc_output_file, dec_output_file))\n 
torch.save(filtered_enc_embeddings, enc_output_file)\n torch.save(filtered_dec_embeddings, dec_output_file)\n logger.info(\"\\nDone.\")\n\n\nif __name__ == \"__main__\":\n init_logger('embeddings_to_torch.log')\n main()\n", "path": "tools/embeddings_to_torch.py"}]} | 3,559 | 164 |
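The root cause in the record above is that `logging` treats extra positional arguments as %-style substitutions for the message string; `"\t* enc: "` contains no `%s` placeholder, so formatting the record raises the `TypeError` that logging catches and reports as `--- Logging error ---`. A minimal sketch of the broken call and two working variants follows; the size value is a stand-in, not taken from the record.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("embeddings_demo")

size = (50002, 300)  # stand-in for filtered_enc_embeddings.size()

# Broken: the extra argument has no matching % placeholder, so the handler's
# formatting step fails and logging prints a "--- Logging error ---" block.
logger.info("\t* enc: ", size)

# Works: pre-format the message (this is what the golden diff does) ...
logger.info("\t* enc: %s" % str(size))

# ... or let logging perform the substitution lazily.
logger.info("\t* enc: %s", size)
```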
gh_patches_debug_398 | rasdani/github-patches | git_diff | optuna__optuna-1882 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove the document for `with_traceback` method of Optuna's exception classes
Currently, Optuna's exception classes carry documentation for the `with_traceback` method, which is inherited from `Exception`. I don't think it is informative for readers, and it can be removed from the reference.

The following `Exception` has the `with_traceback` method.
- [ ] `optuna.exceptions.CLIUsageError`
- [ ] `optuna.exceptions.OptunaError`
- [ ] `optuna.exceptions.TrialPruned`
- [ ] `optuna.exceptions.CLIUsageError`
- [ ] `optuna.exceptions.StorageInternalError`
- [ ] `optuna.exceptions.DuplicatedStudyError`
CC @keisuke-umezawa Please let me know if you have any comments.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/master/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15 # import os
16 # import sys
17 # sys.path.insert(0, os.path.abspath('.'))
18
19 import pkg_resources
20
21 from sphinx_gallery.sorting import FileNameSortKey
22
23 __version__ = pkg_resources.get_distribution('optuna').version
24
25 # -- Project information -----------------------------------------------------
26
27 project = 'Optuna'
28 copyright = '2018, Optuna Contributors.'
29 author = 'Optuna Contributors.'
30
31 # The short X.Y version
32 version = __version__
33 # The full version, including alpha/beta/rc tags
34 release = __version__
35
36 # -- General configuration ---------------------------------------------------
37
38 # If your documentation needs a minimal Sphinx version, state it here.
39 #
40 # needs_sphinx = '1.0'
41
42 # Add any Sphinx extension module names here, as strings. They can be
43 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
44 # ones.
45 extensions = [
46 'sphinx.ext.autodoc',
47 'sphinx.ext.autosummary',
48 'sphinx.ext.doctest',
49 'sphinx.ext.intersphinx',
50 'sphinx.ext.mathjax',
51 'sphinx.ext.napoleon',
52 'sphinx.ext.viewcode',
53 'sphinx.ext.githubpages',
54 'cliff.sphinxext',
55 'sphinx_gallery.gen_gallery',
56 ]
57
58 # Add any paths that contain templates here, relative to this directory.
59 templates_path = ['_templates']
60
61 # The suffix(es) of source filenames.
62 # You can specify multiple suffix as a list of string:
63 #
64 # source_suffix = ['.rst', '.md']
65 source_suffix = '.rst'
66
67 # The master toctree document.
68 master_doc = 'index'
69
70 # The language for content autogenerated by Sphinx. Refer to documentation
71 # for a list of supported languages.
72 #
73 # This is also used if you do content translation via gettext catalogs.
74 # Usually you set "language" from the command line for these cases.
75 language = None
76
77 # List of patterns, relative to source directory, that match files and
78 # directories to ignore when looking for source files.
79 # This pattern also affects html_static_path and html_extra_path .
80 exclude_patterns = []
81
82 # The name of the Pygments (syntax highlighting) style to use.
83 pygments_style = 'sphinx'
84
85 # -- Options for HTML output -------------------------------------------------
86
87 # The theme to use for HTML and HTML Help pages. See the documentation for
88 # a list of builtin themes.
89 #
90 html_theme = 'sphinx_rtd_theme'
91
92 # Theme options are theme-specific and customize the look and feel of a theme
93 # further. For a list of options available for each theme, see the
94 # documentation.
95 #
96 html_theme_options = {
97 'logo_only': True
98 }
99
100 html_favicon = '../image/favicon.ico'
101
102 html_logo = '../image/optuna-logo.png'
103
104 # Add any paths that contain custom static files (such as style sheets) here,
105 # relative to this directory. They are copied after the builtin static files,
106 # so a file named "default.css" will overwrite the builtin "default.css".
107 html_static_path = ['_static', 'plotly_figures']
108 html_css_files = ["css/custom.css"]
109
110 # Custom sidebar templates, must be a dictionary that maps document names
111 # to template names.
112 #
113 # The default sidebars (for documents that don't match any pattern) are
114 # defined by theme itself. Builtin themes are using these templates by
115 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
116 # 'searchbox.html']``.
117 #
118 # html_sidebars = {}
119
120 # -- Options for HTMLHelp output ---------------------------------------------
121
122 # Output file base name for HTML help builder.
123 htmlhelp_basename = 'Optunadoc'
124
125 # -- Options for LaTeX output ------------------------------------------------
126
127 latex_elements = {
128 # The paper size ('letterpaper' or 'a4paper').
129 #
130 # 'papersize': 'letterpaper',
131
132 # The font size ('10pt', '11pt' or '12pt').
133 #
134 # 'pointsize': '10pt',
135
136 # Additional stuff for the LaTeX preamble.
137 #
138 # 'preamble': '',
139
140 # Latex figure (float) alignment
141 #
142 # 'figure_align': 'htbp',
143 }
144
145 # Grouping the document tree into LaTeX files. List of tuples
146 # (source start file, target name, title,
147 # author, documentclass [howto, manual, or own class]).
148 latex_documents = [
149 (master_doc, 'Optuna.tex', 'Optuna Documentation', 'Optuna Contributors.', 'manual'),
150 ]
151
152 # -- Options for manual page output ------------------------------------------
153
154 # One entry per manual page. List of tuples
155 # (source start file, name, description, authors, manual section).
156 man_pages = [(master_doc, 'optuna', 'Optuna Documentation', [author], 1)]
157
158 # -- Options for Texinfo output ----------------------------------------------
159
160 # Grouping the document tree into Texinfo files. List of tuples
161 # (source start file, target name, title, author,
162 # dir menu entry, description, category)
163 texinfo_documents = [
164 (master_doc, 'Optuna', 'Optuna Documentation', author, 'Optuna',
165 'One line description of project.', 'Miscellaneous'),
166 ]
167
168 intersphinx_mapping = {'python': ('https://docs.python.org/3', None)}
169
170 # -- Extension configuration -------------------------------------------------
171 autosummary_generate = True
172 autodoc_default_options = {
173 'members': True,
174 'inherited-members': True,
175 }
176
177 sphinx_gallery_conf = {
178 'examples_dirs': [
179 '../../tutorial',
180 ],
181 'gallery_dirs': [
182 'tutorial',
183 ],
184 'within_subsection_order': FileNameSortKey,
185 'filename_pattern': r'/*\.py',
186 'first_notebook_cell': None,
187 }
188
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -172,6 +172,7 @@
autodoc_default_options = {
'members': True,
'inherited-members': True,
+ 'exclude-members': 'with_traceback',
}
sphinx_gallery_conf = {
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -172,6 +172,7 @@\n autodoc_default_options = {\n 'members': True,\n 'inherited-members': True,\n+ 'exclude-members': 'with_traceback',\n }\n \n sphinx_gallery_conf = {\n", "issue": "Remove the document for `with_traceback` method of Optuna's exception classes\nCurrently, Optuna's exception classes have the documentations of `with_traceback` method, which is inherited from `Exception`. I don't think it is informative for readers and it can be removed from the reference.\r\n\r\n\r\n\r\nThe following `Exception` has the `with_traceback` method.\r\n- [ ] `optuna.exceptions.CLIUsageError`\r\n- [ ] `optuna.exceptions.OptunaError`\r\n- [ ] `optuna.exceptions.TrialPruned`\r\n- [ ] `optuna.exceptions.CLIUsageError`\r\n- [ ] `optuna.exceptions.StorageInternalError`\r\n- [ ] `optuna.exceptions.DuplicatedStudyError`\r\n\r\nCC @keisuke-umezawa Please let me know if you have any comments.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nimport pkg_resources\n\nfrom sphinx_gallery.sorting import FileNameSortKey\n\n__version__ = pkg_resources.get_distribution('optuna').version\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Optuna'\ncopyright = '2018, Optuna Contributors.'\nauthor = 'Optuna Contributors.'\n\n# The short X.Y version\nversion = __version__\n# The full version, including alpha/beta/rc tags\nrelease = __version__\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.githubpages',\n 'cliff.sphinxext',\n 'sphinx_gallery.gen_gallery',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n 'logo_only': True\n}\n\nhtml_favicon = '../image/favicon.ico'\n\nhtml_logo = '../image/optuna-logo.png'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static', 'plotly_figures']\nhtml_css_files = [\"css/custom.css\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Optunadoc'\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'Optuna.tex', 'Optuna Documentation', 'Optuna Contributors.', 'manual'),\n]\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, 'optuna', 'Optuna Documentation', [author], 1)]\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'Optuna', 'Optuna Documentation', author, 'Optuna',\n 'One line description of project.', 'Miscellaneous'),\n]\n\nintersphinx_mapping = {'python': ('https://docs.python.org/3', None)}\n\n# -- Extension configuration -------------------------------------------------\nautosummary_generate = True\nautodoc_default_options = {\n 'members': True,\n 'inherited-members': True,\n}\n\nsphinx_gallery_conf = {\n 'examples_dirs': [\n '../../tutorial',\n ],\n 'gallery_dirs': [\n 'tutorial',\n ],\n 'within_subsection_order': FileNameSortKey,\n 'filename_pattern': r'/*\\.py',\n 'first_notebook_cell': None,\n}\n", "path": "docs/source/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nimport pkg_resources\n\nfrom sphinx_gallery.sorting import FileNameSortKey\n\n__version__ = pkg_resources.get_distribution('optuna').version\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Optuna'\ncopyright = '2018, Optuna Contributors.'\nauthor = 'Optuna Contributors.'\n\n# The short X.Y version\nversion = __version__\n# The full version, including alpha/beta/rc tags\nrelease = __version__\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.githubpages',\n 'cliff.sphinxext',\n 'sphinx_gallery.gen_gallery',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n 'logo_only': True\n}\n\nhtml_favicon = '../image/favicon.ico'\n\nhtml_logo = '../image/optuna-logo.png'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static', 'plotly_figures']\nhtml_css_files = [\"css/custom.css\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Optunadoc'\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'Optuna.tex', 'Optuna Documentation', 'Optuna Contributors.', 'manual'),\n]\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, 'optuna', 'Optuna Documentation', [author], 1)]\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'Optuna', 'Optuna Documentation', author, 'Optuna',\n 'One line description of project.', 'Miscellaneous'),\n]\n\nintersphinx_mapping = {'python': ('https://docs.python.org/3', None)}\n\n# -- Extension configuration -------------------------------------------------\nautosummary_generate = True\nautodoc_default_options = {\n 'members': True,\n 'inherited-members': True,\n 'exclude-members': 'with_traceback',\n}\n\nsphinx_gallery_conf = {\n 'examples_dirs': [\n '../../tutorial',\n ],\n 'gallery_dirs': [\n 'tutorial',\n ],\n 'within_subsection_order': FileNameSortKey,\n 'filename_pattern': r'/*\\.py',\n 'first_notebook_cell': None,\n}\n", "path": "docs/source/conf.py"}]} | 2,293 | 82 |
gh_patches_debug_27672 | rasdani/github-patches | git_diff | bids-standard__pybids-589 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
model: JSON to dict modified key values for transformation
In the `Replace` transformation, you specify the variables to transform as a dict.
e.g.:
```
{'LIKELY': "5"}
```
However, the parser that converts BIDS Stats Models from JSON to dict lowercases keys, so case-sensitive entries like this one are altered and the transformation itself changes.
--- END ISSUE ---
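For illustration, a minimal sketch of the lowercasing behaviour described above, using the `convert_JSON` helper from `bids/utils.py` as listed below; the input dict mirrors the report's example.

```python
# Sketch: convert_JSON snake_cases *and* lowercases keys recursively, so the
# case-sensitive Replace mapping is mangled.
from bids.utils import convert_JSON

print(convert_JSON({"Replace": {"LIKELY": "5"}}))
# -> {'replace': {'likely': '5'}}  (the case-sensitive 'LIKELY' key is lost)
```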
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bids/utils.py`
Content:
```
1 """ Utility functions. """
2
3 import re
4 import os
5
6
7 def listify(obj):
8 ''' Wraps all non-list or tuple objects in a list; provides a simple way
9 to accept flexible arguments. '''
10 return obj if isinstance(obj, (list, tuple, type(None))) else [obj]
11
12
13 def matches_entities(obj, entities, strict=False):
14 ''' Checks whether an object's entities match the input. '''
15 if strict and set(obj.entities.keys()) != set(entities.keys()):
16 return False
17
18 comm_ents = list(set(obj.entities.keys()) & set(entities.keys()))
19 for k in comm_ents:
20 current = obj.entities[k]
21 target = entities[k]
22 if isinstance(target, (list, tuple)):
23 if current not in target:
24 return False
25 elif current != target:
26 return False
27 return True
28
29
30 def natural_sort(l, field=None):
31 '''
32 based on snippet found at http://stackoverflow.com/a/4836734/2445984
33 '''
34 convert = lambda text: int(text) if text.isdigit() else text.lower()
35
36 def alphanum_key(key):
37 if field is not None:
38 key = getattr(key, field)
39 if not isinstance(key, str):
40 key = str(key)
41 return [convert(c) for c in re.split('([0-9]+)', key)]
42 return sorted(l, key=alphanum_key)
43
44
45 def convert_JSON(j):
46 """ Recursively convert CamelCase keys to snake_case.
47 From: https://stackoverflow.com/questions/17156078/converting-identifier-naming-between-camelcase-and-underscores-during-json-seria
48 """
49
50 def camel_to_snake(s):
51 a = re.compile('((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))')
52 return a.sub(r'_\1', s).lower()
53
54 def convertArray(a):
55 newArr = []
56 for i in a:
57 if isinstance(i,list):
58 newArr.append(convertArray(i))
59 elif isinstance(i, dict):
60 newArr.append(convert_JSON(i))
61 else:
62 newArr.append(i)
63 return newArr
64
65 out = {}
66 for k, value in j.items():
67 newK = camel_to_snake(k)
68
69 if isinstance(value, dict):
70 out[newK] = convert_JSON(value)
71 elif isinstance(value, list):
72 out[newK] = convertArray(value)
73 else:
74 out[newK] = value
75
76 return out
77
78
79 def splitext(path):
80 """splitext for paths with directories that may contain dots.
81 From https://stackoverflow.com/questions/5930036/separating-file-extensions-using-python-os-path-module"""
82 li = []
83 path_without_extensions = os.path.join(os.path.dirname(path),
84 os.path.basename(path).split(os.extsep)[0])
85 extensions = os.path.basename(path).split(os.extsep)[1:]
86 li.append(path_without_extensions)
87 # li.append(extensions) if you want extensions in another list inside the list that is returned.
88 li.extend(extensions)
89 return li
90
91
92 def make_bidsfile(filename):
93 """Create a BIDSFile instance of the appropriate class. """
94 from .layout import models
95
96 patt = re.compile("[._]*[a-zA-Z0-9]*?\\.([^/\\\\]+)$")
97 m = re.search(patt, filename)
98
99 ext = None if not m else m.group(1)
100
101 if ext in ['nii', 'nii.gz']:
102 cls = 'BIDSImageFile'
103 elif ext in ['tsv', 'tsv.gz']:
104 cls = 'BIDSDataFile'
105 elif ext == 'json':
106 cls = 'BIDSJSONFile'
107 else:
108 cls = 'BIDSFile'
109
110 Cls = getattr(models, cls)
111 return Cls(filename)
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bids/utils.py b/bids/utils.py
--- a/bids/utils.py
+++ b/bids/utils.py
@@ -44,9 +44,10 @@
def convert_JSON(j):
""" Recursively convert CamelCase keys to snake_case.
- From: https://stackoverflow.com/questions/17156078/converting-identifier-naming-between-camelcase-and-underscores-during-json-seria
+ From: https://stackoverflow.com/questions/17156078/
+ converting-identifier-naming-between-camelcase-and-
+ underscores-during-json-seria
"""
-
def camel_to_snake(s):
a = re.compile('((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))')
return a.sub(r'_\1', s).lower()
@@ -54,7 +55,7 @@
def convertArray(a):
newArr = []
for i in a:
- if isinstance(i,list):
+ if isinstance(i, list):
newArr.append(convertArray(i))
elif isinstance(i, dict):
newArr.append(convert_JSON(i))
@@ -66,7 +67,8 @@
for k, value in j.items():
newK = camel_to_snake(k)
- if isinstance(value, dict):
+ # Replace transformation uses a dict, so skip lower-casing
+ if isinstance(value, dict) and k != 'Replace':
out[newK] = convert_JSON(value)
elif isinstance(value, list):
out[newK] = convertArray(value)
| {"golden_diff": "diff --git a/bids/utils.py b/bids/utils.py\n--- a/bids/utils.py\n+++ b/bids/utils.py\n@@ -44,9 +44,10 @@\n \n def convert_JSON(j):\n \"\"\" Recursively convert CamelCase keys to snake_case.\n- From: https://stackoverflow.com/questions/17156078/converting-identifier-naming-between-camelcase-and-underscores-during-json-seria\n+ From: https://stackoverflow.com/questions/17156078/\n+ converting-identifier-naming-between-camelcase-and-\n+ underscores-during-json-seria\n \"\"\"\n-\n def camel_to_snake(s):\n a = re.compile('((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))')\n return a.sub(r'_\\1', s).lower()\n@@ -54,7 +55,7 @@\n def convertArray(a):\n newArr = []\n for i in a:\n- if isinstance(i,list):\n+ if isinstance(i, list):\n newArr.append(convertArray(i))\n elif isinstance(i, dict):\n newArr.append(convert_JSON(i))\n@@ -66,7 +67,8 @@\n for k, value in j.items():\n newK = camel_to_snake(k)\n \n- if isinstance(value, dict):\n+ # Replace transformation uses a dict, so skip lower-casing\n+ if isinstance(value, dict) and k != 'Replace':\n out[newK] = convert_JSON(value)\n elif isinstance(value, list):\n out[newK] = convertArray(value)\n", "issue": "model: JSON to dict modified key values for transformation\nIn ` Replace` transformation, you specify as a dict which variables to transform.\r\n\r\ne.g.:\r\n\r\n```\r\n{'LIKELY': \"5\"}\r\n```\r\n\r\nHowever, the parser from JSON to dict to convert BIDS Stats Models modifies keys to lower case, which in the case of specific case sensitive values modifies the transformation itself.\n", "before_files": [{"content": "\"\"\" Utility functions. \"\"\"\n\nimport re\nimport os\n\n\ndef listify(obj):\n ''' Wraps all non-list or tuple objects in a list; provides a simple way\n to accept flexible arguments. '''\n return obj if isinstance(obj, (list, tuple, type(None))) else [obj]\n\n\ndef matches_entities(obj, entities, strict=False):\n ''' Checks whether an object's entities match the input. 
'''\n if strict and set(obj.entities.keys()) != set(entities.keys()):\n return False\n\n comm_ents = list(set(obj.entities.keys()) & set(entities.keys()))\n for k in comm_ents:\n current = obj.entities[k]\n target = entities[k]\n if isinstance(target, (list, tuple)):\n if current not in target:\n return False\n elif current != target:\n return False\n return True\n\n\ndef natural_sort(l, field=None):\n '''\n based on snippet found at http://stackoverflow.com/a/4836734/2445984\n '''\n convert = lambda text: int(text) if text.isdigit() else text.lower()\n\n def alphanum_key(key):\n if field is not None:\n key = getattr(key, field)\n if not isinstance(key, str):\n key = str(key)\n return [convert(c) for c in re.split('([0-9]+)', key)]\n return sorted(l, key=alphanum_key)\n\n\ndef convert_JSON(j):\n \"\"\" Recursively convert CamelCase keys to snake_case.\n From: https://stackoverflow.com/questions/17156078/converting-identifier-naming-between-camelcase-and-underscores-during-json-seria\n \"\"\"\n\n def camel_to_snake(s):\n a = re.compile('((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))')\n return a.sub(r'_\\1', s).lower()\n\n def convertArray(a):\n newArr = []\n for i in a:\n if isinstance(i,list):\n newArr.append(convertArray(i))\n elif isinstance(i, dict):\n newArr.append(convert_JSON(i))\n else:\n newArr.append(i)\n return newArr\n\n out = {}\n for k, value in j.items():\n newK = camel_to_snake(k)\n\n if isinstance(value, dict):\n out[newK] = convert_JSON(value)\n elif isinstance(value, list):\n out[newK] = convertArray(value)\n else:\n out[newK] = value\n\n return out\n\n\ndef splitext(path):\n \"\"\"splitext for paths with directories that may contain dots.\n From https://stackoverflow.com/questions/5930036/separating-file-extensions-using-python-os-path-module\"\"\"\n li = []\n path_without_extensions = os.path.join(os.path.dirname(path),\n os.path.basename(path).split(os.extsep)[0])\n extensions = os.path.basename(path).split(os.extsep)[1:]\n li.append(path_without_extensions)\n # li.append(extensions) if you want extensions in another list inside the list that is returned.\n li.extend(extensions)\n return li\n\n\ndef make_bidsfile(filename):\n \"\"\"Create a BIDSFile instance of the appropriate class. \"\"\"\n from .layout import models\n\n patt = re.compile(\"[._]*[a-zA-Z0-9]*?\\\\.([^/\\\\\\\\]+)$\")\n m = re.search(patt, filename)\n\n ext = None if not m else m.group(1)\n\n if ext in ['nii', 'nii.gz']:\n cls = 'BIDSImageFile'\n elif ext in ['tsv', 'tsv.gz']:\n cls = 'BIDSDataFile'\n elif ext == 'json':\n cls = 'BIDSJSONFile'\n else:\n cls = 'BIDSFile'\n\n Cls = getattr(models, cls)\n return Cls(filename)\n", "path": "bids/utils.py"}], "after_files": [{"content": "\"\"\" Utility functions. \"\"\"\n\nimport re\nimport os\n\n\ndef listify(obj):\n ''' Wraps all non-list or tuple objects in a list; provides a simple way\n to accept flexible arguments. '''\n return obj if isinstance(obj, (list, tuple, type(None))) else [obj]\n\n\ndef matches_entities(obj, entities, strict=False):\n ''' Checks whether an object's entities match the input. 
'''\n if strict and set(obj.entities.keys()) != set(entities.keys()):\n return False\n\n comm_ents = list(set(obj.entities.keys()) & set(entities.keys()))\n for k in comm_ents:\n current = obj.entities[k]\n target = entities[k]\n if isinstance(target, (list, tuple)):\n if current not in target:\n return False\n elif current != target:\n return False\n return True\n\n\ndef natural_sort(l, field=None):\n '''\n based on snippet found at http://stackoverflow.com/a/4836734/2445984\n '''\n convert = lambda text: int(text) if text.isdigit() else text.lower()\n\n def alphanum_key(key):\n if field is not None:\n key = getattr(key, field)\n if not isinstance(key, str):\n key = str(key)\n return [convert(c) for c in re.split('([0-9]+)', key)]\n return sorted(l, key=alphanum_key)\n\n\ndef convert_JSON(j):\n \"\"\" Recursively convert CamelCase keys to snake_case.\n From: https://stackoverflow.com/questions/17156078/\n converting-identifier-naming-between-camelcase-and-\n underscores-during-json-seria\n \"\"\"\n def camel_to_snake(s):\n a = re.compile('((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))')\n return a.sub(r'_\\1', s).lower()\n\n def convertArray(a):\n newArr = []\n for i in a:\n if isinstance(i, list):\n newArr.append(convertArray(i))\n elif isinstance(i, dict):\n newArr.append(convert_JSON(i))\n else:\n newArr.append(i)\n return newArr\n\n out = {}\n for k, value in j.items():\n newK = camel_to_snake(k)\n\n # Replace transformation uses a dict, so skip lower-casing\n if isinstance(value, dict) and k != 'Replace':\n out[newK] = convert_JSON(value)\n elif isinstance(value, list):\n out[newK] = convertArray(value)\n else:\n out[newK] = value\n\n return out\n\n\ndef splitext(path):\n \"\"\"splitext for paths with directories that may contain dots.\n From https://stackoverflow.com/questions/5930036/separating-file-extensions-using-python-os-path-module\"\"\"\n li = []\n path_without_extensions = os.path.join(os.path.dirname(path),\n os.path.basename(path).split(os.extsep)[0])\n extensions = os.path.basename(path).split(os.extsep)[1:]\n li.append(path_without_extensions)\n # li.append(extensions) if you want extensions in another list inside the list that is returned.\n li.extend(extensions)\n return li\n\n\ndef make_bidsfile(filename):\n \"\"\"Create a BIDSFile instance of the appropriate class. \"\"\"\n from .layout import models\n\n patt = re.compile(\"[._]*[a-zA-Z0-9]*?\\\\.([^/\\\\\\\\]+)$\")\n m = re.search(patt, filename)\n\n ext = None if not m else m.group(1)\n\n if ext in ['nii', 'nii.gz']:\n cls = 'BIDSImageFile'\n elif ext in ['tsv', 'tsv.gz']:\n cls = 'BIDSDataFile'\n elif ext == 'json':\n cls = 'BIDSJSONFile'\n else:\n cls = 'BIDSFile'\n\n Cls = getattr(models, cls)\n return Cls(filename)\n", "path": "bids/utils.py"}]} | 1,410 | 353 |
gh_patches_debug_24872 | rasdani/github-patches | git_diff | rotki__rotki-174 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
USD Value for IOTA is incorrect
## Problem Definition
The USD value reported on my exchange is inconsistent with the USD value that rotkehlchen shows.
I tried to find where the USD value is calculated for exchange assets and I found the following API call: [rotkehlchen.inquirer#L68](https://github.com/kascheri12/rotkehlchen/blob/master/rotkehlchen/inquirer.py#L68)
The asset "IOTA" uses the symbol "IOT" at the API endpoint, so the incorrect rate is returned when querying:
https://min-api.cryptocompare.com/data/price?fsym=IOTA&tsyms=USD
vs.
https://min-api.cryptocompare.com/data/price?fsym=IOT&tsyms=USD
--- END ISSUE ---
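For illustration, a minimal sketch of the symbol translation the issue points at: map the world symbol `IOTA` to cryptocompare's `IOT` before querying. The table and helper name below are hypothetical; the endpoint URL comes from the report, and the DATAcoin/RDN entries mirror `world_to_cryptocompare` in the file listed below.

```python
import requests

# Hypothetical translation table (sketch only, not the repository's code).
WORLD_TO_CRYPTOCOMPARE = {"IOTA": "IOT", "DATAcoin": "DATA", "RDN": "RDN*"}

def find_usd_price_sketch(asset: str) -> float:
    # Translate the world symbol to the symbol cryptocompare expects.
    symbol = WORLD_TO_CRYPTOCOMPARE.get(asset, asset)
    resp = requests.get(
        "https://min-api.cryptocompare.com/data/price?fsym={}&tsyms=USD".format(symbol)
    )
    return float(resp.json().get("USD", 0))
```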
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rotkehlchen/constants.py`
Content:
```
1 from typing import cast
2 from rotkehlchen import typing
3
4 ETH_DAO_FORK_TS = 1469020840 # 2016-07-20 13:20:40 UTC
5 BTC_BCH_FORK_TS = 1501593374 # 2017-08-01 13:16:14 UTC
6
7 SUPPORTED_EXCHANGES = ['kraken', 'poloniex', 'bittrex', 'bitmex', 'binance']
8 ROTKEHLCHEN_SERVER_TIMEOUT = 5
9 ALL_REMOTES_TIMEOUT = 20
10
11 YEAR_IN_SECONDS = 31536000 # 60 * 60 * 24 * 365
12
13 S_EMPTYSTR = typing.EmptyStr('')
14
15 S_BTC = cast(typing.NonEthTokenBlockchainAsset, 'BTC')
16 S_ETH = cast(typing.NonEthTokenBlockchainAsset, 'ETH')
17 S_DATACOIN = cast(typing.NonEthTokenBlockchainAsset, 'DATAcoin')
18
19 S_RDN = cast(typing.EthToken, 'RDN')
20
21
22 S_USD = typing.FiatAsset('USD')
23 S_EUR = typing.FiatAsset('EUR')
24 S_GBP = typing.FiatAsset('GBP')
25 S_JPY = typing.FiatAsset('JPY')
26 S_CNY = typing.FiatAsset('CNY')
27 FIAT_CURRENCIES = (S_USD, S_EUR, S_GBP, S_JPY, S_CNY)
28
29 EV_BUY = typing.EventType('buy')
30 EV_SELL = typing.EventType('sell')
31 EV_TX_GAS_COST = typing.EventType('tx_gas_cost')
32 EV_ASSET_MOVE = typing.EventType('asset_movement')
33 EV_LOAN_SETTLE = typing.EventType('loan_settlement')
34 EV_INTEREST_PAYMENT = typing.EventType('interest_rate_payment')
35 EV_MARGIN_CLOSE = typing.EventType('margin_position_close')
36
```
Path: `rotkehlchen/inquirer.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import logging
4 from typing import Dict, Iterable, Optional, cast
5
6 import requests
7
8 from rotkehlchen import typing
9 from rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD
10 from rotkehlchen.errors import RemoteError
11 from rotkehlchen.fval import FVal
12 from rotkehlchen.utils import query_fiat_pair, retry_calls, rlk_jsonloads
13
14 logger = logging.getLogger(__name__)
15
16
17 def get_fiat_usd_exchange_rates(
18 currencies: Optional[Iterable[typing.FiatAsset]] = None,
19 ) -> Dict[typing.FiatAsset, FVal]:
20 rates = {S_USD: FVal(1)}
21 if not currencies:
22 currencies = FIAT_CURRENCIES[1:]
23 for currency in currencies:
24 rates[currency] = query_fiat_pair(S_USD, currency)
25
26 return rates
27
28
29 def world_to_cryptocompare(asset):
30 # Adjust some ETH tokens to how cryptocompare knows them
31 if asset == S_RDN:
32 # remove this if cryptocompare changes the symbol
33 asset = cast(typing.EthToken, 'RDN*')
34 elif asset == S_DATACOIN:
35 asset = cast(typing.NonEthTokenBlockchainAsset, 'DATA')
36
37 return asset
38
39
40 class Inquirer(object):
41 def __init__(self, kraken=None): # TODO: Add type after fixing cyclic dependency
42 self.kraken = kraken
43 self.session = requests.session()
44
45 def query_kraken_for_price(
46 self,
47 asset: typing.Asset,
48 asset_btc_price: FVal,
49 ) -> FVal:
50 if asset == 'BTC':
51 return self.kraken.usdprice['BTC']
52 return asset_btc_price * self.kraken.usdprice['BTC']
53
54 def find_usd_price(
55 self,
56 asset: typing.Asset,
57 asset_btc_price: Optional[FVal] = None,
58 ) -> FVal:
59 if self.kraken and self.kraken.first_connection_made and asset_btc_price is not None:
60 return self.query_kraken_for_price(asset, asset_btc_price)
61
62 asset = world_to_cryptocompare(asset)
63 resp = retry_calls(
64 5,
65 'find_usd_price',
66 'requests.get',
67 requests.get,
68 u'https://min-api.cryptocompare.com/data/price?'
69 'fsym={}&tsyms=USD'.format(asset)
70 )
71
72 if resp.status_code != 200:
73 raise RemoteError('Cant reach cryptocompare to get USD value of {}'.format(asset))
74
75 resp = rlk_jsonloads(resp.text)
76
77 # If there is an error in the response skip this token
78 if 'USD' not in resp:
79 if resp['Response'] == 'Error':
80 print('Could not query USD price for {}. Error: "{}"'.format(
81 asset,
82 resp['Message']),
83 )
84 else:
85 print('Could not query USD price for {}'.format(asset))
86 return FVal(0)
87
88 return FVal(resp['USD'])
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rotkehlchen/constants.py b/rotkehlchen/constants.py
--- a/rotkehlchen/constants.py
+++ b/rotkehlchen/constants.py
@@ -15,6 +15,7 @@
S_BTC = cast(typing.NonEthTokenBlockchainAsset, 'BTC')
S_ETH = cast(typing.NonEthTokenBlockchainAsset, 'ETH')
S_DATACOIN = cast(typing.NonEthTokenBlockchainAsset, 'DATAcoin')
+S_IOTA = cast(typing.NonEthTokenBlockchainAsset, 'IOTA')
S_RDN = cast(typing.EthToken, 'RDN')
diff --git a/rotkehlchen/inquirer.py b/rotkehlchen/inquirer.py
--- a/rotkehlchen/inquirer.py
+++ b/rotkehlchen/inquirer.py
@@ -6,7 +6,7 @@
import requests
from rotkehlchen import typing
-from rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD
+from rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD, S_IOTA
from rotkehlchen.errors import RemoteError
from rotkehlchen.fval import FVal
from rotkehlchen.utils import query_fiat_pair, retry_calls, rlk_jsonloads
@@ -33,6 +33,8 @@
asset = cast(typing.EthToken, 'RDN*')
elif asset == S_DATACOIN:
asset = cast(typing.NonEthTokenBlockchainAsset, 'DATA')
+ elif asset == S_IOTA:
+ asset = cast(typing.NonEthTokenBlockchainAsset, 'IOT')
return asset
| {"golden_diff": "diff --git a/rotkehlchen/constants.py b/rotkehlchen/constants.py\n--- a/rotkehlchen/constants.py\n+++ b/rotkehlchen/constants.py\n@@ -15,6 +15,7 @@\n S_BTC = cast(typing.NonEthTokenBlockchainAsset, 'BTC')\n S_ETH = cast(typing.NonEthTokenBlockchainAsset, 'ETH')\n S_DATACOIN = cast(typing.NonEthTokenBlockchainAsset, 'DATAcoin')\n+S_IOTA = cast(typing.NonEthTokenBlockchainAsset, 'IOTA')\n \n S_RDN = cast(typing.EthToken, 'RDN')\n \ndiff --git a/rotkehlchen/inquirer.py b/rotkehlchen/inquirer.py\n--- a/rotkehlchen/inquirer.py\n+++ b/rotkehlchen/inquirer.py\n@@ -6,7 +6,7 @@\n import requests\n \n from rotkehlchen import typing\n-from rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD\n+from rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD, S_IOTA\n from rotkehlchen.errors import RemoteError\n from rotkehlchen.fval import FVal\n from rotkehlchen.utils import query_fiat_pair, retry_calls, rlk_jsonloads\n@@ -33,6 +33,8 @@\n asset = cast(typing.EthToken, 'RDN*')\n elif asset == S_DATACOIN:\n asset = cast(typing.NonEthTokenBlockchainAsset, 'DATA')\n+ elif asset == S_IOTA:\n+ asset = cast(typing.NonEthTokenBlockchainAsset, 'IOT')\n \n return asset\n", "issue": "USD Value for IOTA is incorrect\n## Problem Definition\r\n\r\nThe usd value reported on my exchange is inconsistent with the usd value that rotkehlchen shows.\r\n\r\nI tried to find where the USD value is calculated for exchange assets and I found the following API call: [rotkehlchen.inquirer#L68](https://github.com/kascheri12/rotkehlchen/blob/master/rotkehlchen/inquirer.py#L68) \r\n\r\nThe asset \"IOTA\" uses symbol \"IOT\" at the api endpoint therefore the incorrect rate is returned when querying: \r\nhttps://min-api.cryptocompare.com/data/price?fsym=IOTA&tsyms=USD\r\nvs.\r\nhttps://min-api.cryptocompare.com/data/price?fsym=IOT&tsyms=USD\nUSD Value for IOTA is incorrect\n## Problem Definition\r\n\r\nThe usd value reported on my exchange is inconsistent with the usd value that rotkehlchen shows.\r\n\r\nI tried to find where the USD value is calculated for exchange assets and I found the following API call: [rotkehlchen.inquirer#L68](https://github.com/kascheri12/rotkehlchen/blob/master/rotkehlchen/inquirer.py#L68) \r\n\r\nThe asset \"IOTA\" uses symbol \"IOT\" at the api endpoint therefore the incorrect rate is returned when querying: \r\nhttps://min-api.cryptocompare.com/data/price?fsym=IOTA&tsyms=USD\r\nvs.\r\nhttps://min-api.cryptocompare.com/data/price?fsym=IOT&tsyms=USD\n", "before_files": [{"content": "from typing import cast\nfrom rotkehlchen import typing\n\nETH_DAO_FORK_TS = 1469020840 # 2016-07-20 13:20:40 UTC\nBTC_BCH_FORK_TS = 1501593374 # 2017-08-01 13:16:14 UTC\n\nSUPPORTED_EXCHANGES = ['kraken', 'poloniex', 'bittrex', 'bitmex', 'binance']\nROTKEHLCHEN_SERVER_TIMEOUT = 5\nALL_REMOTES_TIMEOUT = 20\n\nYEAR_IN_SECONDS = 31536000 # 60 * 60 * 24 * 365\n\nS_EMPTYSTR = typing.EmptyStr('')\n\nS_BTC = cast(typing.NonEthTokenBlockchainAsset, 'BTC')\nS_ETH = cast(typing.NonEthTokenBlockchainAsset, 'ETH')\nS_DATACOIN = cast(typing.NonEthTokenBlockchainAsset, 'DATAcoin')\n\nS_RDN = cast(typing.EthToken, 'RDN')\n\n\nS_USD = typing.FiatAsset('USD')\nS_EUR = typing.FiatAsset('EUR')\nS_GBP = typing.FiatAsset('GBP')\nS_JPY = typing.FiatAsset('JPY')\nS_CNY = typing.FiatAsset('CNY')\nFIAT_CURRENCIES = (S_USD, S_EUR, S_GBP, S_JPY, S_CNY)\n\nEV_BUY = typing.EventType('buy')\nEV_SELL = typing.EventType('sell')\nEV_TX_GAS_COST = typing.EventType('tx_gas_cost')\nEV_ASSET_MOVE = 
typing.EventType('asset_movement')\nEV_LOAN_SETTLE = typing.EventType('loan_settlement')\nEV_INTEREST_PAYMENT = typing.EventType('interest_rate_payment')\nEV_MARGIN_CLOSE = typing.EventType('margin_position_close')\n", "path": "rotkehlchen/constants.py"}, {"content": "from __future__ import unicode_literals\n\nimport logging\nfrom typing import Dict, Iterable, Optional, cast\n\nimport requests\n\nfrom rotkehlchen import typing\nfrom rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD\nfrom rotkehlchen.errors import RemoteError\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.utils import query_fiat_pair, retry_calls, rlk_jsonloads\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_fiat_usd_exchange_rates(\n currencies: Optional[Iterable[typing.FiatAsset]] = None,\n) -> Dict[typing.FiatAsset, FVal]:\n rates = {S_USD: FVal(1)}\n if not currencies:\n currencies = FIAT_CURRENCIES[1:]\n for currency in currencies:\n rates[currency] = query_fiat_pair(S_USD, currency)\n\n return rates\n\n\ndef world_to_cryptocompare(asset):\n # Adjust some ETH tokens to how cryptocompare knows them\n if asset == S_RDN:\n # remove this if cryptocompare changes the symbol\n asset = cast(typing.EthToken, 'RDN*')\n elif asset == S_DATACOIN:\n asset = cast(typing.NonEthTokenBlockchainAsset, 'DATA')\n\n return asset\n\n\nclass Inquirer(object):\n def __init__(self, kraken=None): # TODO: Add type after fixing cyclic dependency\n self.kraken = kraken\n self.session = requests.session()\n\n def query_kraken_for_price(\n self,\n asset: typing.Asset,\n asset_btc_price: FVal,\n ) -> FVal:\n if asset == 'BTC':\n return self.kraken.usdprice['BTC']\n return asset_btc_price * self.kraken.usdprice['BTC']\n\n def find_usd_price(\n self,\n asset: typing.Asset,\n asset_btc_price: Optional[FVal] = None,\n ) -> FVal:\n if self.kraken and self.kraken.first_connection_made and asset_btc_price is not None:\n return self.query_kraken_for_price(asset, asset_btc_price)\n\n asset = world_to_cryptocompare(asset)\n resp = retry_calls(\n 5,\n 'find_usd_price',\n 'requests.get',\n requests.get,\n u'https://min-api.cryptocompare.com/data/price?'\n 'fsym={}&tsyms=USD'.format(asset)\n )\n\n if resp.status_code != 200:\n raise RemoteError('Cant reach cryptocompare to get USD value of {}'.format(asset))\n\n resp = rlk_jsonloads(resp.text)\n\n # If there is an error in the response skip this token\n if 'USD' not in resp:\n if resp['Response'] == 'Error':\n print('Could not query USD price for {}. 
Error: \"{}\"'.format(\n asset,\n resp['Message']),\n )\n else:\n print('Could not query USD price for {}'.format(asset))\n return FVal(0)\n\n return FVal(resp['USD'])\n", "path": "rotkehlchen/inquirer.py"}], "after_files": [{"content": "from typing import cast\nfrom rotkehlchen import typing\n\nETH_DAO_FORK_TS = 1469020840 # 2016-07-20 13:20:40 UTC\nBTC_BCH_FORK_TS = 1501593374 # 2017-08-01 13:16:14 UTC\n\nSUPPORTED_EXCHANGES = ['kraken', 'poloniex', 'bittrex', 'bitmex', 'binance']\nROTKEHLCHEN_SERVER_TIMEOUT = 5\nALL_REMOTES_TIMEOUT = 20\n\nYEAR_IN_SECONDS = 31536000 # 60 * 60 * 24 * 365\n\nS_EMPTYSTR = typing.EmptyStr('')\n\nS_BTC = cast(typing.NonEthTokenBlockchainAsset, 'BTC')\nS_ETH = cast(typing.NonEthTokenBlockchainAsset, 'ETH')\nS_DATACOIN = cast(typing.NonEthTokenBlockchainAsset, 'DATAcoin')\nS_IOTA = cast(typing.NonEthTokenBlockchainAsset, 'IOTA')\n\nS_RDN = cast(typing.EthToken, 'RDN')\n\n\nS_USD = typing.FiatAsset('USD')\nS_EUR = typing.FiatAsset('EUR')\nS_GBP = typing.FiatAsset('GBP')\nS_JPY = typing.FiatAsset('JPY')\nS_CNY = typing.FiatAsset('CNY')\nFIAT_CURRENCIES = (S_USD, S_EUR, S_GBP, S_JPY, S_CNY)\n\nEV_BUY = typing.EventType('buy')\nEV_SELL = typing.EventType('sell')\nEV_TX_GAS_COST = typing.EventType('tx_gas_cost')\nEV_ASSET_MOVE = typing.EventType('asset_movement')\nEV_LOAN_SETTLE = typing.EventType('loan_settlement')\nEV_INTEREST_PAYMENT = typing.EventType('interest_rate_payment')\nEV_MARGIN_CLOSE = typing.EventType('margin_position_close')\n", "path": "rotkehlchen/constants.py"}, {"content": "from __future__ import unicode_literals\n\nimport logging\nfrom typing import Dict, Iterable, Optional, cast\n\nimport requests\n\nfrom rotkehlchen import typing\nfrom rotkehlchen.constants import FIAT_CURRENCIES, S_DATACOIN, S_RDN, S_USD, S_IOTA\nfrom rotkehlchen.errors import RemoteError\nfrom rotkehlchen.fval import FVal\nfrom rotkehlchen.utils import query_fiat_pair, retry_calls, rlk_jsonloads\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_fiat_usd_exchange_rates(\n currencies: Optional[Iterable[typing.FiatAsset]] = None,\n) -> Dict[typing.FiatAsset, FVal]:\n rates = {S_USD: FVal(1)}\n if not currencies:\n currencies = FIAT_CURRENCIES[1:]\n for currency in currencies:\n rates[currency] = query_fiat_pair(S_USD, currency)\n\n return rates\n\n\ndef world_to_cryptocompare(asset):\n # Adjust some ETH tokens to how cryptocompare knows them\n if asset == S_RDN:\n # remove this if cryptocompare changes the symbol\n asset = cast(typing.EthToken, 'RDN*')\n elif asset == S_DATACOIN:\n asset = cast(typing.NonEthTokenBlockchainAsset, 'DATA')\n elif asset == S_IOTA:\n asset = cast(typing.NonEthTokenBlockchainAsset, 'IOT')\n\n return asset\n\n\nclass Inquirer(object):\n def __init__(self, kraken=None): # TODO: Add type after fixing cyclic dependency\n self.kraken = kraken\n self.session = requests.session()\n\n def query_kraken_for_price(\n self,\n asset: typing.Asset,\n asset_btc_price: FVal,\n ) -> FVal:\n if asset == 'BTC':\n return self.kraken.usdprice['BTC']\n return asset_btc_price * self.kraken.usdprice['BTC']\n\n def find_usd_price(\n self,\n asset: typing.Asset,\n asset_btc_price: Optional[FVal] = None,\n ) -> FVal:\n if self.kraken and self.kraken.first_connection_made and asset_btc_price is not None:\n return self.query_kraken_for_price(asset, asset_btc_price)\n\n asset = world_to_cryptocompare(asset)\n resp = retry_calls(\n 5,\n 'find_usd_price',\n 'requests.get',\n requests.get,\n u'https://min-api.cryptocompare.com/data/price?'\n 'fsym={}&tsyms=USD'.format(asset)\n 
)\n\n if resp.status_code != 200:\n raise RemoteError('Cant reach cryptocompare to get USD value of {}'.format(asset))\n\n resp = rlk_jsonloads(resp.text)\n\n # If there is an error in the response skip this token\n if 'USD' not in resp:\n if resp['Response'] == 'Error':\n print('Could not query USD price for {}. Error: \"{}\"'.format(\n asset,\n resp['Message']),\n )\n else:\n print('Could not query USD price for {}'.format(asset))\n return FVal(0)\n\n return FVal(resp['USD'])\n", "path": "rotkehlchen/inquirer.py"}]} | 1,973 | 386 |
gh_patches_debug_42204 | rasdani/github-patches | git_diff | getsentry__sentry-python-295 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Celery integration not capturing error with max_tasks_per_child = 1
The celery integration is failing to capture the exception when I use a celery factory pattern which patches the celery task with Flask's context.
This is `web/celery_factory.py`
```
# Source: https://stackoverflow.com/questions/12044776/how-to-use-flask-sqlalchemy-in-a-celery-task
from celery import Celery
import flask
class FlaskCelery(Celery):
def __init__(self, *args, **kwargs):
super(FlaskCelery, self).__init__(*args, **kwargs)
self.patch_task()
if 'app' in kwargs:
self.init_app(kwargs['app'])
def patch_task(self):
TaskBase = self.Task
_celery = self
class ContextTask(TaskBase):
abstract = True
def __call__(self, *args, **kwargs):
if flask.has_app_context():
return TaskBase.__call__(self, *args, **kwargs)
else:
with _celery.app.app_context():
return TaskBase.__call__(self, *args, **kwargs)
self.Task = ContextTask
def init_app(self, app):
self.app = app
self.config_from_object(app.config)
celery_app = FlaskCelery()
```
I am adding a random `raise` inside a simple task
```
from celery_factory import celery_app
@celery_app.task
def simple_task():
raise Exception("Testing Celery exception")
```
The error I get printed is:
```
[2019-03-08 21:24:21,117: ERROR/ForkPoolWorker-31] Task simple_task[d6e959b1-7253-4e55-861d-c1968ae14e1c] raised unexpected: RuntimeError('No active exception to reraise')
Traceback (most recent call last):
File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/celery/app/trace.py", line 382, in trace_task
R = retval = fun(*args, **kwargs)
File "/Users/okomarov/Documents/repos/myproject/web/celery_factory.py", line 28, in __call__
return TaskBase.__call__(self, *args, **kwargs)
File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/celery/app/trace.py", line 641, in __protected_call__
return self.run(*args, **kwargs)
File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py", line 66, in _inner
reraise(*_capture_exception())
File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/_compat.py", line 52, in reraise
raise value
File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py", line 64, in _inner
return f(*args, **kwargs)
File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py", line 66, in _inner
reraise(*_capture_exception())
File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/_compat.py", line 52, in reraise
raise value
File "/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py", line 64, in _inner
return f(*args, **kwargs)
File "/Users/okomarov/Documents/repos/myproject/web/simple_task.py", line 4, in simple_task
raise Exception("Testing Celery exception")
RuntimeError: No active exception to reraise
```
Relevant pip packages:
```
Celery==4.2.1
Flask==1.0.2
sentry-sdk==0.7.4
```
The integration is initialized as follows (the Flask integration works as expected):
```
from flask import Flask
from celery_factory import celery_app
from config import config_to_use
def create_app():
app = Flask()
app.config.from_object(config_to_use)
init_logging(app)
register_extensions(app)
register_blueprints(app)
register_jinja_extras(app)
return app
def init_logging(app):
import sentry_sdk
from sentry_sdk.integrations.flask import FlaskIntegration
from sentry_sdk.integrations.celery import CeleryIntegration
sentry_sdk.init(
dsn=app.config.get('FLASK_SENTRY_DSN'),
integrations=[FlaskIntegration(), CeleryIntegration()]
)
...
```
--- END ISSUE ---
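For illustration, a hedged application-level workaround sketch: flush the Sentry client before a pooled worker process exits, which matters when `max_tasks_per_child` is very low and the process is torn down right after the failing task. The handler is hypothetical and assumes `sentry_sdk.flush()` and Celery's `worker_process_shutdown` signal; it is not the fix applied in the SDK itself.

```python
# Hypothetical workaround sketch: flush pending Sentry events before a pooled
# worker process is torn down.
import sentry_sdk
from celery.signals import worker_process_shutdown

@worker_process_shutdown.connect
def flush_sentry_events(**kwargs):
    # Block for up to two seconds so queued events reach Sentry first.
    sentry_sdk.flush(timeout=2.0)
```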
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/celery.py`
Content:
```
1 from __future__ import absolute_import
2
3 import sys
4
5 from celery.exceptions import SoftTimeLimitExceeded, Retry # type: ignore
6
7 from sentry_sdk.hub import Hub
8 from sentry_sdk.utils import capture_internal_exceptions, event_from_exception
9 from sentry_sdk._compat import reraise
10 from sentry_sdk.integrations import Integration
11 from sentry_sdk.integrations.logging import ignore_logger
12
13
14 class CeleryIntegration(Integration):
15 identifier = "celery"
16
17 @staticmethod
18 def setup_once():
19 import celery.app.trace as trace # type: ignore
20
21 old_build_tracer = trace.build_tracer
22
23 def sentry_build_tracer(name, task, *args, **kwargs):
24 # Need to patch both methods because older celery sometimes
25 # short-circuits to task.run if it thinks it's safe.
26 task.__call__ = _wrap_task_call(task.__call__)
27 task.run = _wrap_task_call(task.run)
28 return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))
29
30 trace.build_tracer = sentry_build_tracer
31
32 # This logger logs every status of every task that ran on the worker.
33 # Meaning that every task's breadcrumbs are full of stuff like "Task
34 # <foo> raised unexpected <bar>".
35 ignore_logger("celery.worker.job")
36
37
38 def _wrap_tracer(task, f):
39 # Need to wrap tracer for pushing the scope before prerun is sent, and
40 # popping it after postrun is sent.
41 #
42 # This is the reason we don't use signals for hooking in the first place.
43 # Also because in Celery 3, signal dispatch returns early if one handler
44 # crashes.
45 def _inner(*args, **kwargs):
46 hub = Hub.current
47 if hub.get_integration(CeleryIntegration) is None:
48 return f(*args, **kwargs)
49
50 with hub.push_scope() as scope:
51 scope._name = "celery"
52 scope.add_event_processor(_make_event_processor(task, *args, **kwargs))
53
54 return f(*args, **kwargs)
55
56 return _inner
57
58
59 def _wrap_task_call(f):
60 # Need to wrap task call because the exception is caught before we get to
61 # see it. Also celery's reported stacktrace is untrustworthy.
62 def _inner(*args, **kwargs):
63 try:
64 return f(*args, **kwargs)
65 except Exception:
66 reraise(*_capture_exception())
67
68 return _inner
69
70
71 def _make_event_processor(task, uuid, args, kwargs, request=None):
72 def event_processor(event, hint):
73 with capture_internal_exceptions():
74 event["transaction"] = task.name
75
76 with capture_internal_exceptions():
77 extra = event.setdefault("extra", {})
78 extra["celery-job"] = {
79 "task_name": task.name,
80 "args": args,
81 "kwargs": kwargs,
82 }
83
84 if "exc_info" in hint:
85 with capture_internal_exceptions():
86 if isinstance(hint["exc_info"][1], Retry):
87 return None
88
89 if hasattr(task, "throws") and isinstance(
90 hint["exc_info"][1], task.throws
91 ):
92 return None
93
94 with capture_internal_exceptions():
95 if issubclass(hint["exc_info"][0], SoftTimeLimitExceeded):
96 event["fingerprint"] = [
97 "celery",
98 "SoftTimeLimitExceeded",
99 getattr(task, "name", task),
100 ]
101
102 return event
103
104 return event_processor
105
106
107 def _capture_exception():
108 hub = Hub.current
109 exc_info = sys.exc_info()
110
111 if hub.get_integration(CeleryIntegration) is not None:
112 event, hint = event_from_exception(
113 exc_info,
114 client_options=hub.client.options,
115 mechanism={"type": "celery", "handled": False},
116 )
117 hub.capture_event(event, hint=hint)
118
119 return exc_info
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/integrations/celery.py b/sentry_sdk/integrations/celery.py
--- a/sentry_sdk/integrations/celery.py
+++ b/sentry_sdk/integrations/celery.py
@@ -23,12 +23,14 @@
def sentry_build_tracer(name, task, *args, **kwargs):
# Need to patch both methods because older celery sometimes
# short-circuits to task.run if it thinks it's safe.
- task.__call__ = _wrap_task_call(task.__call__)
- task.run = _wrap_task_call(task.run)
+ task.__call__ = _wrap_task_call(task, task.__call__)
+ task.run = _wrap_task_call(task, task.run)
return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))
trace.build_tracer = sentry_build_tracer
+ _patch_worker_exit()
+
# This logger logs every status of every task that ran on the worker.
# Meaning that every task's breadcrumbs are full of stuff like "Task
# <foo> raised unexpected <bar>".
@@ -56,14 +58,17 @@
return _inner
-def _wrap_task_call(f):
+def _wrap_task_call(task, f):
# Need to wrap task call because the exception is caught before we get to
# see it. Also celery's reported stacktrace is untrustworthy.
def _inner(*args, **kwargs):
try:
return f(*args, **kwargs)
except Exception:
- reraise(*_capture_exception())
+ exc_info = sys.exc_info()
+ with capture_internal_exceptions():
+ _capture_exception(task, exc_info)
+ reraise(*exc_info)
return _inner
@@ -82,15 +87,6 @@
}
if "exc_info" in hint:
- with capture_internal_exceptions():
- if isinstance(hint["exc_info"][1], Retry):
- return None
-
- if hasattr(task, "throws") and isinstance(
- hint["exc_info"][1], task.throws
- ):
- return None
-
with capture_internal_exceptions():
if issubclass(hint["exc_info"][0], SoftTimeLimitExceeded):
event["fingerprint"] = [
@@ -104,16 +100,39 @@
return event_processor
-def _capture_exception():
+def _capture_exception(task, exc_info):
hub = Hub.current
- exc_info = sys.exc_info()
- if hub.get_integration(CeleryIntegration) is not None:
- event, hint = event_from_exception(
- exc_info,
- client_options=hub.client.options,
- mechanism={"type": "celery", "handled": False},
- )
- hub.capture_event(event, hint=hint)
+ if hub.get_integration(CeleryIntegration) is None:
+ return
+ if isinstance(exc_info[1], Retry):
+ return
+ if hasattr(task, "throws") and isinstance(exc_info[1], task.throws):
+ return
+
+ event, hint = event_from_exception(
+ exc_info,
+ client_options=hub.client.options,
+ mechanism={"type": "celery", "handled": False},
+ )
+
+ hub.capture_event(event, hint=hint)
+
+
+def _patch_worker_exit():
+ # Need to flush queue before worker shutdown because a crashing worker will
+ # call os._exit
+ from billiard.pool import Worker # type: ignore
+
+ old_workloop = Worker.workloop
+
+ def sentry_workloop(*args, **kwargs):
+ try:
+ return old_workloop(*args, **kwargs)
+ finally:
+ with capture_internal_exceptions():
+ hub = Hub.current
+ if hub.get_integration(CeleryIntegration) is not None:
+ hub.flush()
- return exc_info
+ Worker.workloop = sentry_workloop
| {"golden_diff": "diff --git a/sentry_sdk/integrations/celery.py b/sentry_sdk/integrations/celery.py\n--- a/sentry_sdk/integrations/celery.py\n+++ b/sentry_sdk/integrations/celery.py\n@@ -23,12 +23,14 @@\n def sentry_build_tracer(name, task, *args, **kwargs):\n # Need to patch both methods because older celery sometimes\n # short-circuits to task.run if it thinks it's safe.\n- task.__call__ = _wrap_task_call(task.__call__)\n- task.run = _wrap_task_call(task.run)\n+ task.__call__ = _wrap_task_call(task, task.__call__)\n+ task.run = _wrap_task_call(task, task.run)\n return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))\n \n trace.build_tracer = sentry_build_tracer\n \n+ _patch_worker_exit()\n+\n # This logger logs every status of every task that ran on the worker.\n # Meaning that every task's breadcrumbs are full of stuff like \"Task\n # <foo> raised unexpected <bar>\".\n@@ -56,14 +58,17 @@\n return _inner\n \n \n-def _wrap_task_call(f):\n+def _wrap_task_call(task, f):\n # Need to wrap task call because the exception is caught before we get to\n # see it. Also celery's reported stacktrace is untrustworthy.\n def _inner(*args, **kwargs):\n try:\n return f(*args, **kwargs)\n except Exception:\n- reraise(*_capture_exception())\n+ exc_info = sys.exc_info()\n+ with capture_internal_exceptions():\n+ _capture_exception(task, exc_info)\n+ reraise(*exc_info)\n \n return _inner\n \n@@ -82,15 +87,6 @@\n }\n \n if \"exc_info\" in hint:\n- with capture_internal_exceptions():\n- if isinstance(hint[\"exc_info\"][1], Retry):\n- return None\n-\n- if hasattr(task, \"throws\") and isinstance(\n- hint[\"exc_info\"][1], task.throws\n- ):\n- return None\n-\n with capture_internal_exceptions():\n if issubclass(hint[\"exc_info\"][0], SoftTimeLimitExceeded):\n event[\"fingerprint\"] = [\n@@ -104,16 +100,39 @@\n return event_processor\n \n \n-def _capture_exception():\n+def _capture_exception(task, exc_info):\n hub = Hub.current\n- exc_info = sys.exc_info()\n \n- if hub.get_integration(CeleryIntegration) is not None:\n- event, hint = event_from_exception(\n- exc_info,\n- client_options=hub.client.options,\n- mechanism={\"type\": \"celery\", \"handled\": False},\n- )\n- hub.capture_event(event, hint=hint)\n+ if hub.get_integration(CeleryIntegration) is None:\n+ return\n+ if isinstance(exc_info[1], Retry):\n+ return\n+ if hasattr(task, \"throws\") and isinstance(exc_info[1], task.throws):\n+ return\n+\n+ event, hint = event_from_exception(\n+ exc_info,\n+ client_options=hub.client.options,\n+ mechanism={\"type\": \"celery\", \"handled\": False},\n+ )\n+\n+ hub.capture_event(event, hint=hint)\n+\n+\n+def _patch_worker_exit():\n+ # Need to flush queue before worker shutdown because a crashing worker will\n+ # call os._exit\n+ from billiard.pool import Worker # type: ignore\n+\n+ old_workloop = Worker.workloop\n+\n+ def sentry_workloop(*args, **kwargs):\n+ try:\n+ return old_workloop(*args, **kwargs)\n+ finally:\n+ with capture_internal_exceptions():\n+ hub = Hub.current\n+ if hub.get_integration(CeleryIntegration) is not None:\n+ hub.flush()\n \n- return exc_info\n+ Worker.workloop = sentry_workloop\n", "issue": "Celery integration not capturing error with max_tasks_per_child = 1\nThe celery integration is failing to capture the exception when I use a celery factory pattern which patches the celery task with Flask's context.\r\n\r\nThis is `web/celery_factory.py`\r\n```\r\n# Source: https://stackoverflow.com/questions/12044776/how-to-use-flask-sqlalchemy-in-a-celery-task\r\n\r\nfrom celery import 
Celery\r\nimport flask\r\n\r\n\r\nclass FlaskCelery(Celery):\r\n\r\n def __init__(self, *args, **kwargs):\r\n super(FlaskCelery, self).__init__(*args, **kwargs)\r\n self.patch_task()\r\n\r\n if 'app' in kwargs:\r\n self.init_app(kwargs['app'])\r\n\r\n def patch_task(self):\r\n TaskBase = self.Task\r\n _celery = self\r\n\r\n class ContextTask(TaskBase):\r\n abstract = True\r\n\r\n def __call__(self, *args, **kwargs):\r\n if flask.has_app_context():\r\n return TaskBase.__call__(self, *args, **kwargs)\r\n else:\r\n with _celery.app.app_context():\r\n return TaskBase.__call__(self, *args, **kwargs)\r\n\r\n self.Task = ContextTask\r\n\r\n def init_app(self, app):\r\n self.app = app\r\n self.config_from_object(app.config)\r\n\r\n\r\ncelery_app = FlaskCelery()\r\n```\r\n\r\nI am adding a random `raise` inside a simple task\r\n\r\n```\r\nimport celery_app from celery_factory.py\r\n@celery_app.task\r\ndef simple_task():\r\n raise Exception(\"Testing Celery exception\")\r\n```\r\n\r\nThe error I get printed is:\r\n```\r\n[2019-03-08 21:24:21,117: ERROR/ForkPoolWorker-31] Task simple_task[d6e959b1-7253-4e55-861d-c1968ae14e1c] raised unexpected: RuntimeError('No active exception to reraise')\r\nTraceback (most recent call last):\r\n File \"/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/celery/app/trace.py\", line 382, in trace_task\r\n R = retval = fun(*args, **kwargs)\r\n File \"/Users/okomarov/Documents/repos/myproject/web/celery_factory.py\", line 28, in __call__\r\n return TaskBase.__call__(self, *args, **kwargs)\r\n File \"/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/celery/app/trace.py\", line 641, in __protected_call__\r\n return self.run(*args, **kwargs)\r\n File \"/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py\", line 66, in _inner\r\n reraise(*_capture_exception())\r\n File \"/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/_compat.py\", line 52, in reraise\r\n raise value\r\n File \"/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py\", line 64, in _inner\r\n return f(*args, **kwargs)\r\n File \"/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py\", line 66, in _inner\r\n reraise(*_capture_exception())\r\n File \"/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/_compat.py\", line 52, in reraise\r\n raise value\r\n File \"/Users/okomarov/.virtualenvs/myenv/lib/python3.7/site-packages/sentry_sdk/integrations/celery.py\", line 64, in _inner\r\n return f(*args, **kwargs)\r\n File \"/Users/okomarov/Documents/repos/myproject/web/simple_task.py\", line 4, in simple_task\r\n raise Exception(\"Testing Celery exception\")\r\nRuntimeError: No active exception to reraise\r\n```\r\n\r\nRelevant pip packages:\r\n```\r\nCelery==4.2.1\r\nFlask==1.0.2\r\nsentry-sdk==0.7.4\r\n```\r\n\r\nThe integration is called as following (flask integration works as expected):\r\n```\r\nfrom flask import Flask\r\nfrom celery_factory import celery_app\r\nfrom config import config_to_use\r\n\r\n\r\ndef create_app():\r\n app = Flask()\r\n app.config.from_object(config_to_use)\r\n\r\n init_logging(app)\r\n\r\n register_extensions(app)\r\n register_blueprints(app)\r\n register_jinja_extras(app)\r\n\r\n return app\r\n\r\n\r\ndef init_logging(app):\r\n import sentry_sdk\r\n from sentry_sdk.integrations.flask import FlaskIntegration\r\n from sentry_sdk.integrations.celery import CeleryIntegration\r\n\r\n 
sentry_sdk.init(\r\n dsn=app.config.get('FLASK_SENTRY_DSN'),\r\n integrations=[FlaskIntegration(), CeleryIntegration()]\r\n )\r\n\r\n...\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom celery.exceptions import SoftTimeLimitExceeded, Retry # type: ignore\n\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations.logging import ignore_logger\n\n\nclass CeleryIntegration(Integration):\n identifier = \"celery\"\n\n @staticmethod\n def setup_once():\n import celery.app.trace as trace # type: ignore\n\n old_build_tracer = trace.build_tracer\n\n def sentry_build_tracer(name, task, *args, **kwargs):\n # Need to patch both methods because older celery sometimes\n # short-circuits to task.run if it thinks it's safe.\n task.__call__ = _wrap_task_call(task.__call__)\n task.run = _wrap_task_call(task.run)\n return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))\n\n trace.build_tracer = sentry_build_tracer\n\n # This logger logs every status of every task that ran on the worker.\n # Meaning that every task's breadcrumbs are full of stuff like \"Task\n # <foo> raised unexpected <bar>\".\n ignore_logger(\"celery.worker.job\")\n\n\ndef _wrap_tracer(task, f):\n # Need to wrap tracer for pushing the scope before prerun is sent, and\n # popping it after postrun is sent.\n #\n # This is the reason we don't use signals for hooking in the first place.\n # Also because in Celery 3, signal dispatch returns early if one handler\n # crashes.\n def _inner(*args, **kwargs):\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is None:\n return f(*args, **kwargs)\n\n with hub.push_scope() as scope:\n scope._name = \"celery\"\n scope.add_event_processor(_make_event_processor(task, *args, **kwargs))\n\n return f(*args, **kwargs)\n\n return _inner\n\n\ndef _wrap_task_call(f):\n # Need to wrap task call because the exception is caught before we get to\n # see it. 
Also celery's reported stacktrace is untrustworthy.\n def _inner(*args, **kwargs):\n try:\n return f(*args, **kwargs)\n except Exception:\n reraise(*_capture_exception())\n\n return _inner\n\n\ndef _make_event_processor(task, uuid, args, kwargs, request=None):\n def event_processor(event, hint):\n with capture_internal_exceptions():\n event[\"transaction\"] = task.name\n\n with capture_internal_exceptions():\n extra = event.setdefault(\"extra\", {})\n extra[\"celery-job\"] = {\n \"task_name\": task.name,\n \"args\": args,\n \"kwargs\": kwargs,\n }\n\n if \"exc_info\" in hint:\n with capture_internal_exceptions():\n if isinstance(hint[\"exc_info\"][1], Retry):\n return None\n\n if hasattr(task, \"throws\") and isinstance(\n hint[\"exc_info\"][1], task.throws\n ):\n return None\n\n with capture_internal_exceptions():\n if issubclass(hint[\"exc_info\"][0], SoftTimeLimitExceeded):\n event[\"fingerprint\"] = [\n \"celery\",\n \"SoftTimeLimitExceeded\",\n getattr(task, \"name\", task),\n ]\n\n return event\n\n return event_processor\n\n\ndef _capture_exception():\n hub = Hub.current\n exc_info = sys.exc_info()\n\n if hub.get_integration(CeleryIntegration) is not None:\n event, hint = event_from_exception(\n exc_info,\n client_options=hub.client.options,\n mechanism={\"type\": \"celery\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n", "path": "sentry_sdk/integrations/celery.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom celery.exceptions import SoftTimeLimitExceeded, Retry # type: ignore\n\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations.logging import ignore_logger\n\n\nclass CeleryIntegration(Integration):\n identifier = \"celery\"\n\n @staticmethod\n def setup_once():\n import celery.app.trace as trace # type: ignore\n\n old_build_tracer = trace.build_tracer\n\n def sentry_build_tracer(name, task, *args, **kwargs):\n # Need to patch both methods because older celery sometimes\n # short-circuits to task.run if it thinks it's safe.\n task.__call__ = _wrap_task_call(task, task.__call__)\n task.run = _wrap_task_call(task, task.run)\n return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))\n\n trace.build_tracer = sentry_build_tracer\n\n _patch_worker_exit()\n\n # This logger logs every status of every task that ran on the worker.\n # Meaning that every task's breadcrumbs are full of stuff like \"Task\n # <foo> raised unexpected <bar>\".\n ignore_logger(\"celery.worker.job\")\n\n\ndef _wrap_tracer(task, f):\n # Need to wrap tracer for pushing the scope before prerun is sent, and\n # popping it after postrun is sent.\n #\n # This is the reason we don't use signals for hooking in the first place.\n # Also because in Celery 3, signal dispatch returns early if one handler\n # crashes.\n def _inner(*args, **kwargs):\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is None:\n return f(*args, **kwargs)\n\n with hub.push_scope() as scope:\n scope._name = \"celery\"\n scope.add_event_processor(_make_event_processor(task, *args, **kwargs))\n\n return f(*args, **kwargs)\n\n return _inner\n\n\ndef _wrap_task_call(task, f):\n # Need to wrap task call because the exception is caught before we get to\n # see it. 
Also celery's reported stacktrace is untrustworthy.\n def _inner(*args, **kwargs):\n try:\n return f(*args, **kwargs)\n except Exception:\n exc_info = sys.exc_info()\n with capture_internal_exceptions():\n _capture_exception(task, exc_info)\n reraise(*exc_info)\n\n return _inner\n\n\ndef _make_event_processor(task, uuid, args, kwargs, request=None):\n def event_processor(event, hint):\n with capture_internal_exceptions():\n event[\"transaction\"] = task.name\n\n with capture_internal_exceptions():\n extra = event.setdefault(\"extra\", {})\n extra[\"celery-job\"] = {\n \"task_name\": task.name,\n \"args\": args,\n \"kwargs\": kwargs,\n }\n\n if \"exc_info\" in hint:\n with capture_internal_exceptions():\n if issubclass(hint[\"exc_info\"][0], SoftTimeLimitExceeded):\n event[\"fingerprint\"] = [\n \"celery\",\n \"SoftTimeLimitExceeded\",\n getattr(task, \"name\", task),\n ]\n\n return event\n\n return event_processor\n\n\ndef _capture_exception(task, exc_info):\n hub = Hub.current\n\n if hub.get_integration(CeleryIntegration) is None:\n return\n if isinstance(exc_info[1], Retry):\n return\n if hasattr(task, \"throws\") and isinstance(exc_info[1], task.throws):\n return\n\n event, hint = event_from_exception(\n exc_info,\n client_options=hub.client.options,\n mechanism={\"type\": \"celery\", \"handled\": False},\n )\n\n hub.capture_event(event, hint=hint)\n\n\ndef _patch_worker_exit():\n # Need to flush queue before worker shutdown because a crashing worker will\n # call os._exit\n from billiard.pool import Worker # type: ignore\n\n old_workloop = Worker.workloop\n\n def sentry_workloop(*args, **kwargs):\n try:\n return old_workloop(*args, **kwargs)\n finally:\n with capture_internal_exceptions():\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is not None:\n hub.flush()\n\n Worker.workloop = sentry_workloop\n", "path": "sentry_sdk/integrations/celery.py"}]} | 2,483 | 897 |
gh_patches_debug_20588 | rasdani/github-patches | git_diff | dotkom__onlineweb4-812 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hide attendanceevent from django admin
https://online.ntnu.no/admin/events/attendanceevent/
This view should not be used by anyone and attendance info should be edited through the event directly.
Should be possible to hide this by removing
`admin.site.register(AttendanceEvent, AttendanceEventAdmin)`
in events/admin.py (untested)
--- END ISSUE ---
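A minimal sketch of the suggested change (assuming attendance should remain editable through the `AttendanceEventInline` already attached to `EventAdmin`, and that the then-unused `AttendanceEventAdmin` class can be dropped as well):
```python
# apps/events/admin.py (sketch only, not the verified patch)
# The standalone admin page for AttendanceEvent goes away; attendance data
# stays editable through the AttendanceEventInline on EventAdmin.

admin.site.register(Event, EventAdmin)
admin.site.register(Attendee, AttendeeAdmin)
# admin.site.register(AttendanceEvent, AttendanceEventAdmin)  # removed
admin.site.register(RuleBundle, RuleBundleAdmin)
admin.site.register(GradeRule, GradeRuleAdmin)
admin.site.register(UserGroupRule, UserGroupRuleAdmin)
admin.site.register(FieldOfStudyRule, FieldOfStudyRuleAdmin)
```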
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/events/admin.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from django import forms
4 from django.contrib import admin
5 from django.core import validators
6 from django.utils.translation import ugettext as _
7
8 from apps.events.models import Event
9 from apps.events.models import AttendanceEvent
10 from apps.events.models import Attendee
11 from apps.events.models import CompanyEvent
12 from apps.events.models import RuleBundle
13 from apps.events.models import FieldOfStudyRule
14 from apps.events.models import GradeRule
15 from apps.events.models import UserGroupRule
16 from apps.feedback.admin import FeedbackRelationInline
17
18
19
20 class AttendeeInline(admin.TabularInline):
21 model = Attendee
22 extra = 1
23 classes = ('grp-collapse grp-open',) # style
24 inline_classes = ('grp-collapse grp-open',) # style
25
26
27 class CompanyInline(admin.TabularInline):
28 model = CompanyEvent
29 max_num = 20
30 extra = 0
31 classes = ('grp-collapse grp-open',) # style
32 inline_classes = ('grp-collapse grp-open',) # style
33
34
35 class RuleBundleInline(admin.TabularInline):
36 model = RuleBundle
37 extra = 1
38 max_num = 20
39 classes = ('grp-collapse grp-open',) # style
40 inline_classes = ('grp-collapse grp-open',) # style
41
42
43 class AttendanceEventAdmin(admin.ModelAdmin):
44 model = AttendanceEvent
45 inlines = (AttendeeInline, RuleBundleInline)
46
47
48 class AttendeeAdmin(admin.ModelAdmin):
49 model = Attendee
50 list_display = ('user', 'event', 'paid')
51 actions = None
52
53 def delete_model(self, request, obj):
54 event = obj.event.event
55 event.notify_waiting_list(host=request.META['HTTP_HOST'], unattended_user=obj.user)
56 obj.delete()
57
58
59 class CompanyEventAdmin(admin.ModelAdmin):
60 model = CompanyEvent
61 inlines = (CompanyInline,)
62
63
64 class RuleBundleAdmin(admin.ModelAdmin):
65 model = RuleBundle
66
67
68 class FieldOfStudyRuleAdmin(admin.ModelAdmin):
69 model = FieldOfStudyRule
70
71
72 class GradeRuleAdmin(admin.ModelAdmin):
73 model = GradeRule
74
75
76 class UserGroupRuleAdmin(admin.ModelAdmin):
77 model = UserGroupRule
78
79
80 class AttendanceEventInline(admin.StackedInline):
81 model = AttendanceEvent
82 max_num = 1
83 extra = 0
84 filter_horizontal = ('rule_bundles',)
85 classes = ('grp-collapse grp-open',) # style
86 inline_classes = ('grp-collapse grp-open',) # style
87
88
89 class EventAdmin(admin.ModelAdmin):
90 inlines = (AttendanceEventInline, FeedbackRelationInline, CompanyInline)
91 exclude = ("author", )
92 search_fields = ('title',)
93
94 def save_model(self, request, obj, form, change):
95 if not change: # created
96 obj.author = request.user
97 else:
98 # If attendance max capacity changed we will notify users that they are now on the attend list
99 old_event = Event.objects.get(id=obj.id)
100 if old_event.is_attendance_event() and old_event.wait_list:
101 diff_capacity = obj.attendance_event.max_capacity - old_event.attendance_event.max_capacity
102 if diff_capacity > 0:
103 if diff_capacity > len(old_event.wait_list):
104 diff_capacity = len(old_event.wait_list)
105 # Using old_event because max_capacity has already been changed in obj
106 old_event.notify_waiting_list(host=request.META['HTTP_HOST'], extra_capacity=diff_capacity)
107 obj.save()
108
109 def save_formset(self, request, form, formset, change):
110 instances = formset.save(commit=False)
111 for instance in instances:
112 instance.save()
113 formset.save_m2m()
114
115 def get_form(self, request, obj=None, **kwargs):
116 form = super(EventAdmin, self).get_form(request, obj, **kwargs)
117 form.base_fields['ingress_short'].validators=[validators.MinLengthValidator(50)]
118 form.base_fields['ingress'].validators=[validators.MinLengthValidator(75)]
119 form.base_fields['description'].validators=[validators.MinLengthValidator(140)]
120 return form
121
122 admin.site.register(Event, EventAdmin)
123 admin.site.register(Attendee, AttendeeAdmin)
124 admin.site.register(AttendanceEvent, AttendanceEventAdmin)
125 admin.site.register(RuleBundle, RuleBundleAdmin)
126 admin.site.register(GradeRule, GradeRuleAdmin)
127 admin.site.register(UserGroupRule, UserGroupRuleAdmin)
128 admin.site.register(FieldOfStudyRule, FieldOfStudyRuleAdmin)
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/events/admin.py b/apps/events/admin.py
--- a/apps/events/admin.py
+++ b/apps/events/admin.py
@@ -40,11 +40,6 @@
inline_classes = ('grp-collapse grp-open',) # style
-class AttendanceEventAdmin(admin.ModelAdmin):
- model = AttendanceEvent
- inlines = (AttendeeInline, RuleBundleInline)
-
-
class AttendeeAdmin(admin.ModelAdmin):
model = Attendee
list_display = ('user', 'event', 'paid')
@@ -119,9 +114,9 @@
form.base_fields['description'].validators=[validators.MinLengthValidator(140)]
return form
+
admin.site.register(Event, EventAdmin)
admin.site.register(Attendee, AttendeeAdmin)
-admin.site.register(AttendanceEvent, AttendanceEventAdmin)
admin.site.register(RuleBundle, RuleBundleAdmin)
admin.site.register(GradeRule, GradeRuleAdmin)
admin.site.register(UserGroupRule, UserGroupRuleAdmin)
| {"golden_diff": "diff --git a/apps/events/admin.py b/apps/events/admin.py\n--- a/apps/events/admin.py\n+++ b/apps/events/admin.py\n@@ -40,11 +40,6 @@\n inline_classes = ('grp-collapse grp-open',) # style\n \n \n-class AttendanceEventAdmin(admin.ModelAdmin):\n- model = AttendanceEvent\n- inlines = (AttendeeInline, RuleBundleInline)\n-\n-\n class AttendeeAdmin(admin.ModelAdmin):\n model = Attendee\n list_display = ('user', 'event', 'paid')\n@@ -119,9 +114,9 @@\n form.base_fields['description'].validators=[validators.MinLengthValidator(140)]\n return form\n \n+\n admin.site.register(Event, EventAdmin)\n admin.site.register(Attendee, AttendeeAdmin)\n-admin.site.register(AttendanceEvent, AttendanceEventAdmin)\n admin.site.register(RuleBundle, RuleBundleAdmin)\n admin.site.register(GradeRule, GradeRuleAdmin)\n admin.site.register(UserGroupRule, UserGroupRuleAdmin)\n", "issue": "Hide attendanceevent from django admin\nhttps://online.ntnu.no/admin/events/attendanceevent/\n\nThis view should not be used by anyone and attendance info should be edited through the event directly. \n\nShould be possible to hide this by removing \n`admin.site.register(AttendanceEvent, AttendanceEventAdmin)`\n in events/admin.py (untested)\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django import forms\nfrom django.contrib import admin\nfrom django.core import validators\nfrom django.utils.translation import ugettext as _\n\nfrom apps.events.models import Event\nfrom apps.events.models import AttendanceEvent\nfrom apps.events.models import Attendee\nfrom apps.events.models import CompanyEvent\nfrom apps.events.models import RuleBundle\nfrom apps.events.models import FieldOfStudyRule\nfrom apps.events.models import GradeRule\nfrom apps.events.models import UserGroupRule\nfrom apps.feedback.admin import FeedbackRelationInline\n\n\n\nclass AttendeeInline(admin.TabularInline):\n model = Attendee\n extra = 1\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass CompanyInline(admin.TabularInline):\n model = CompanyEvent\n max_num = 20\n extra = 0\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass RuleBundleInline(admin.TabularInline):\n model = RuleBundle\n extra = 1\n max_num = 20\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass AttendanceEventAdmin(admin.ModelAdmin):\n model = AttendanceEvent\n inlines = (AttendeeInline, RuleBundleInline)\n\n\nclass AttendeeAdmin(admin.ModelAdmin):\n model = Attendee\n list_display = ('user', 'event', 'paid')\n actions = None\n\n def delete_model(self, request, obj):\n event = obj.event.event\n event.notify_waiting_list(host=request.META['HTTP_HOST'], unattended_user=obj.user)\n obj.delete()\n\n\nclass CompanyEventAdmin(admin.ModelAdmin):\n model = CompanyEvent\n inlines = (CompanyInline,)\n\n\nclass RuleBundleAdmin(admin.ModelAdmin):\n model = RuleBundle\n\n\nclass FieldOfStudyRuleAdmin(admin.ModelAdmin):\n model = FieldOfStudyRule\n\n\nclass GradeRuleAdmin(admin.ModelAdmin):\n model = GradeRule\n\n\nclass UserGroupRuleAdmin(admin.ModelAdmin):\n model = UserGroupRule\n\n\nclass AttendanceEventInline(admin.StackedInline):\n model = AttendanceEvent\n max_num = 1\n extra = 0\n filter_horizontal = ('rule_bundles',)\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass EventAdmin(admin.ModelAdmin):\n inlines = (AttendanceEventInline, 
FeedbackRelationInline, CompanyInline)\n exclude = (\"author\", )\n search_fields = ('title',)\n\n def save_model(self, request, obj, form, change):\n if not change: # created\n obj.author = request.user\n else:\n # If attendance max capacity changed we will notify users that they are now on the attend list\n old_event = Event.objects.get(id=obj.id)\n if old_event.is_attendance_event() and old_event.wait_list:\n diff_capacity = obj.attendance_event.max_capacity - old_event.attendance_event.max_capacity\n if diff_capacity > 0:\n if diff_capacity > len(old_event.wait_list):\n diff_capacity = len(old_event.wait_list)\n # Using old_event because max_capacity has already been changed in obj\n old_event.notify_waiting_list(host=request.META['HTTP_HOST'], extra_capacity=diff_capacity)\n obj.save()\n\n def save_formset(self, request, form, formset, change):\n instances = formset.save(commit=False)\n for instance in instances:\n instance.save()\n formset.save_m2m()\n\n def get_form(self, request, obj=None, **kwargs):\n form = super(EventAdmin, self).get_form(request, obj, **kwargs)\n form.base_fields['ingress_short'].validators=[validators.MinLengthValidator(50)]\n form.base_fields['ingress'].validators=[validators.MinLengthValidator(75)]\n form.base_fields['description'].validators=[validators.MinLengthValidator(140)]\n return form\n\nadmin.site.register(Event, EventAdmin)\nadmin.site.register(Attendee, AttendeeAdmin)\nadmin.site.register(AttendanceEvent, AttendanceEventAdmin)\nadmin.site.register(RuleBundle, RuleBundleAdmin)\nadmin.site.register(GradeRule, GradeRuleAdmin)\nadmin.site.register(UserGroupRule, UserGroupRuleAdmin)\nadmin.site.register(FieldOfStudyRule, FieldOfStudyRuleAdmin)\n", "path": "apps/events/admin.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom django import forms\nfrom django.contrib import admin\nfrom django.core import validators\nfrom django.utils.translation import ugettext as _\n\nfrom apps.events.models import Event\nfrom apps.events.models import AttendanceEvent\nfrom apps.events.models import Attendee\nfrom apps.events.models import CompanyEvent\nfrom apps.events.models import RuleBundle\nfrom apps.events.models import FieldOfStudyRule\nfrom apps.events.models import GradeRule\nfrom apps.events.models import UserGroupRule\nfrom apps.feedback.admin import FeedbackRelationInline\n\n\n\nclass AttendeeInline(admin.TabularInline):\n model = Attendee\n extra = 1\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass CompanyInline(admin.TabularInline):\n model = CompanyEvent\n max_num = 20\n extra = 0\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass RuleBundleInline(admin.TabularInline):\n model = RuleBundle\n extra = 1\n max_num = 20\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass AttendeeAdmin(admin.ModelAdmin):\n model = Attendee\n list_display = ('user', 'event', 'paid')\n actions = None\n\n def delete_model(self, request, obj):\n event = obj.event.event\n event.notify_waiting_list(host=request.META['HTTP_HOST'], unattended_user=obj.user)\n obj.delete()\n\n\nclass CompanyEventAdmin(admin.ModelAdmin):\n model = CompanyEvent\n inlines = (CompanyInline,)\n\n\nclass RuleBundleAdmin(admin.ModelAdmin):\n model = RuleBundle\n\n\nclass FieldOfStudyRuleAdmin(admin.ModelAdmin):\n model = FieldOfStudyRule\n\n\nclass GradeRuleAdmin(admin.ModelAdmin):\n model = GradeRule\n\n\nclass 
UserGroupRuleAdmin(admin.ModelAdmin):\n model = UserGroupRule\n\n\nclass AttendanceEventInline(admin.StackedInline):\n model = AttendanceEvent\n max_num = 1\n extra = 0\n filter_horizontal = ('rule_bundles',)\n classes = ('grp-collapse grp-open',) # style\n inline_classes = ('grp-collapse grp-open',) # style\n\n\nclass EventAdmin(admin.ModelAdmin):\n inlines = (AttendanceEventInline, FeedbackRelationInline, CompanyInline)\n exclude = (\"author\", )\n search_fields = ('title',)\n\n def save_model(self, request, obj, form, change):\n if not change: # created\n obj.author = request.user\n else:\n # If attendance max capacity changed we will notify users that they are now on the attend list\n old_event = Event.objects.get(id=obj.id)\n if old_event.is_attendance_event() and old_event.wait_list:\n diff_capacity = obj.attendance_event.max_capacity - old_event.attendance_event.max_capacity\n if diff_capacity > 0:\n if diff_capacity > len(old_event.wait_list):\n diff_capacity = len(old_event.wait_list)\n # Using old_event because max_capacity has already been changed in obj\n old_event.notify_waiting_list(host=request.META['HTTP_HOST'], extra_capacity=diff_capacity)\n obj.save()\n\n def save_formset(self, request, form, formset, change):\n instances = formset.save(commit=False)\n for instance in instances:\n instance.save()\n formset.save_m2m()\n\n def get_form(self, request, obj=None, **kwargs):\n form = super(EventAdmin, self).get_form(request, obj, **kwargs)\n form.base_fields['ingress_short'].validators=[validators.MinLengthValidator(50)]\n form.base_fields['ingress'].validators=[validators.MinLengthValidator(75)]\n form.base_fields['description'].validators=[validators.MinLengthValidator(140)]\n return form\n\n\nadmin.site.register(Event, EventAdmin)\nadmin.site.register(Attendee, AttendeeAdmin)\nadmin.site.register(RuleBundle, RuleBundleAdmin)\nadmin.site.register(GradeRule, GradeRuleAdmin)\nadmin.site.register(UserGroupRule, UserGroupRuleAdmin)\nadmin.site.register(FieldOfStudyRule, FieldOfStudyRuleAdmin)\n", "path": "apps/events/admin.py"}]} | 1,567 | 214 |
gh_patches_debug_31888 | rasdani/github-patches | git_diff | Flexget__Flexget-1608 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crash when using convert_magnet with aria2.
There is a crash when using convert_magnet with aria2.
```
2017-01-02 19:51 CRITICAL task discover_movies_hd BUG: Unhandled error in plugin aria2: u'file'
2017-01-02 19:51 CRITICAL manager discover_movies_hd An unexpected crash has occurred. Writing crash report to /home/wyrm/.flexget/crash_report.2017.01.02.195150778857.log. Please verify you are running the latest version of flexget by using "flexget -V" from CLI or by using version_checker plugin at http://flexget.com/wiki/Plugins/version_checker. You are currently using version 2.8.17.dev
2017-01-02 19:51 WARNING task discover_movies_hd Aborting task (plugin: aria2)
```
I don't have a crashlog, sorry.
--- END ISSUE ---
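The `u'file'` in the log looks like an unhandled `KeyError`: the aria2 output plugin assumes every torrent entry carries `entry['file']`, but after `convert_magnet` runs, the download plugin may have moved the temporary .torrent so that only `entry['location']` is set. A defensive lookup along these lines avoids the crash (a sketch of the idea, not necessarily the fix that shipped; `addTorrent(torrent, uris, options)` is the documented aria2 XML-RPC signature, with the secret token passed as the first argument when configured):
```python
# Sketch for OutputAria2.add_entry(): resolve the torrent path before opening it
if 'torrent' in entry:
    if 'file' in entry:
        torrent_file = entry['file']
    elif 'location' in entry:
        # the download plugin may have moved the converted .torrent elsewhere
        torrent_file = entry['location']
    else:
        entry.fail('Cannot find torrent file')
        return
    content = xmlrpc.client.Binary(open(torrent_file, mode='rb').read())
    if secret:
        return aria2.addTorrent(secret, content, [], options)
    return aria2.addTorrent(content, [], options)
```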
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/plugins/modify/convert_magnet.py`
Content:
```
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # noqa pylint: disable=unused-import, redefined-builtin
3 import os
4 import time
5 import logging
6
7 from flexget import plugin
8 from flexget.event import event
9 from flexget.utils.tools import parse_timedelta
10 from flexget.utils.pathscrub import pathscrub
11
12 log = logging.getLogger('convert_magnet')
13
14
15 class ConvertMagnet(object):
16 """Convert magnet only entries to a torrent file"""
17
18 schema = {
19 "oneOf": [
20 # Allow convert_magnet: no form to turn off plugin altogether
21 {"type": "boolean"},
22 {
23 "type": "object",
24 "properties": {
25 "timeout": {"type": "string", "format": "interval", "default": "30 seconds"},
26 },
27 "additionalProperties": False
28 }
29 ]
30 }
31
32 def magnet_to_torrent(self, magnet_uri, destination_folder, timeout):
33 import libtorrent
34 params = libtorrent.parse_magnet_uri(magnet_uri)
35 session = libtorrent.session()
36 # for some reason the info_hash needs to be bytes but it's a struct called sha1_hash
37 params['info_hash'] = bytes(params['info_hash'])
38 handle = libtorrent.add_magnet_uri(session, magnet_uri, params)
39 log.debug('Acquiring torrent metadata for magnet %s', magnet_uri)
40 timeout_value = timeout
41 while not handle.has_metadata():
42 time.sleep(0.1)
43 timeout_value -= 0.1
44 if timeout_value <= 0:
45 raise plugin.PluginError('Timed out after {} seconds trying to magnetize'.format(timeout))
46 log.debug('Metadata acquired')
47 torrent_info = handle.get_torrent_info()
48 torrent_file = libtorrent.create_torrent(torrent_info)
49 torrent_path = pathscrub(os.path.join(destination_folder, torrent_info.name() + ".torrent"))
50 with open(torrent_path, "wb") as f:
51 f.write(libtorrent.bencode(torrent_file.generate()))
52 log.debug('Torrent file wrote to %s', torrent_path)
53 return torrent_path
54
55 def prepare_config(self, config):
56 if not isinstance(config, dict):
57 config = {}
58 config.setdefault('timeout', '30 seconds')
59 return config
60
61 @plugin.priority(255)
62 def on_task_start(self, task, config):
63 if config is False:
64 return
65 try:
66 import libtorrent # noqa
67 except ImportError:
68 raise plugin.DependencyError('convert_magnet', 'libtorrent', 'libtorrent package required', log)
69
70 @plugin.priority(130)
71 def on_task_download(self, task, config):
72 if config is False:
73 return
74 config = self.prepare_config(config)
75 # Create the conversion target directory
76 converted_path = os.path.join(task.manager.config_base, 'converted')
77
78 timeout = parse_timedelta(config['timeout']).total_seconds()
79
80 if not os.path.isdir(converted_path):
81 os.mkdir(converted_path)
82
83 for entry in task.accepted:
84 if entry['url'].startswith('magnet:'):
85 entry.setdefault('urls', [entry['url']])
86 try:
87 log.info('Converting entry {} magnet URI to a torrent file'.format(entry['title']))
88 torrent_file = self.magnet_to_torrent(entry['url'], converted_path, timeout)
89 except (plugin.PluginError, TypeError) as e:
90 log.error('Unable to convert Magnet URI for entry %s: %s', entry['title'], e)
91 continue
92 # Windows paths need an extra / prepended to them for url
93 if not torrent_file.startswith('/'):
94 torrent_file = '/' + torrent_file
95 entry['url'] = torrent_file
96 entry['file'] = torrent_file
97 # make sure it's first in the list because of how download plugin works
98 entry['urls'].insert(0, 'file://{}'.format(torrent_file))
99
100
101 @event('plugin.register')
102 def register_plugin():
103 plugin.register(ConvertMagnet, 'convert_magnet', api_ver=2)
104
```
Path: `flexget/plugins/clients/aria2.py`
Content:
```
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # noqa pylint: disable=unused-import, redefined-builtin
3
4 import logging
5 import os
6 import xmlrpc.client
7 from socket import error as socket_error
8
9 from flexget import plugin
10 from flexget.event import event
11 from flexget.utils.template import RenderError
12
13 log = logging.getLogger('aria2')
14
15
16 class OutputAria2(object):
17 """
18 Simple Aria2 output
19
20 Example::
21
22 aria2:
23 path: ~/downloads/
24
25 """
26
27 schema = {
28 'type': 'object',
29 'properties': {
30 'server': {'type': 'string', 'default': 'localhost'},
31 'port': {'type': 'integer', 'default': 6800},
32 'secret': {'type': 'string', 'default': ''},
33 'username': {'type': 'string', 'default': ''}, # NOTE: To be deprecated by aria2
34 'password': {'type': 'string', 'default': ''},
35 'path': {'type': 'string'},
36 'options': {
37 'type': 'object',
38 'additionalProperties': {'oneOf': [{'type': 'string'}, {'type': 'integer'}]}
39 }
40
41 },
42 'required': ['path'],
43 'additionalProperties': False
44 }
45
46 def aria2_connection(self, server, port, username=None, password=None):
47 if username and password:
48 userpass = '%s:%s@' % (username, password)
49 else:
50 userpass = ''
51 url = 'http://%s%s:%s/rpc' % (userpass, server, port)
52 log.debug('aria2 url: %s' % url)
53 log.info('Connecting to daemon at %s', url)
54 try:
55 return xmlrpc.client.ServerProxy(url).aria2
56 except xmlrpc.client.ProtocolError as err:
57 raise plugin.PluginError('Could not connect to aria2 at %s. Protocol error %s: %s'
58 % (url, err.errcode, err.errmsg), log)
59 except xmlrpc.client.Fault as err:
60 raise plugin.PluginError('XML-RPC fault: Unable to connect to aria2 daemon at %s: %s'
61 % (url, err.faultString), log)
62 except socket_error as e:
63 raise plugin.PluginError('Socket connection issue with aria2 daemon at %s: %s' % (url, e), log)
64 except:
65 log.debug('Unexpected error during aria2 connection', exc_info=True)
66 raise plugin.PluginError('Unidentified error during connection to aria2 daemon', log)
67
68 def prepare_config(self, config):
69 config.setdefault('server', 'localhost')
70 config.setdefault('port', 6800)
71 config.setdefault('username', '')
72 config.setdefault('password', '')
73 config.setdefault('secret', '')
74 config.setdefault('options', {})
75 return config
76
77 def on_task_output(self, task, config):
78 # don't add when learning
79 if task.options.learn:
80 return
81 config = self.prepare_config(config)
82 aria2 = self.aria2_connection(config['server'], config['port'],
83 config['username'], config['password'])
84 for entry in task.accepted:
85 if task.options.test:
86 log.verbose('Would add `%s` to aria2.', entry['title'])
87 continue
88 try:
89 self.add_entry(aria2, entry, config)
90 except socket_error as se:
91 entry.fail('Unable to reach Aria2: %s', se)
92 except xmlrpc.client.Fault as err:
93 log.critical('Fault code %s message %s', err.faultCode, err.faultString)
94 entry.fail('Aria2 communication Fault')
95 except Exception as e:
96 log.debug('Exception type %s', type(e), exc_info=True)
97 raise
98
99 def add_entry(self, aria2, entry, config):
100 """
101 Add entry to Aria2
102 """
103 options = config['options']
104 try:
105 options['dir'] = os.path.expanduser(entry.render(config['path']).rstrip('/'))
106 except RenderError as e:
107 entry.fail('failed to render \'path\': %s' % e)
108 return
109 secret = None
110 if config['secret']:
111 secret = 'token:%s' % config['secret']
112 # handle torrent files
113 if 'torrent' in entry:
114 if secret:
115 return aria2.addTorrent(secret, xmlrpc.client.Binary(open(entry['file'], mode='rb').read()))
116 return aria2.addTorrent(xmlrpc.client.Binary(open(entry['file'], mode='rb').read()))
117 # handle everything else (except metalink -- which is unsupported)
118 # so magnets, https, http, ftp .. etc
119 if secret:
120 return aria2.addUri(secret, [entry['url']], options)
121 return aria2.addUri([entry['url']], options)
122
123
124 @event('plugin.register')
125 def register_plugin():
126 plugin.register(OutputAria2, 'aria2', api_ver=2)
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flexget/plugins/clients/aria2.py b/flexget/plugins/clients/aria2.py
--- a/flexget/plugins/clients/aria2.py
+++ b/flexget/plugins/clients/aria2.py
@@ -111,9 +111,17 @@
secret = 'token:%s' % config['secret']
# handle torrent files
if 'torrent' in entry:
+ if 'file' in entry:
+ torrent_file = entry['file']
+ elif 'location' in entry:
+ # in case download plugin moved the file elsewhere
+ torrent_file = entry['location']
+ else:
+ entry.fail('Cannot find torrent file')
+ return
if secret:
- return aria2.addTorrent(secret, xmlrpc.client.Binary(open(entry['file'], mode='rb').read()))
- return aria2.addTorrent(xmlrpc.client.Binary(open(entry['file'], mode='rb').read()))
+ return aria2.addTorrent(secret, xmlrpc.client.Binary(open(torrent_file, mode='rb').read()), [], options)
+ return aria2.addTorrent(xmlrpc.client.Binary(open(torrent_file, mode='rb').read()), [], options)
# handle everything else (except metalink -- which is unsupported)
# so magnets, https, http, ftp .. etc
if secret:
diff --git a/flexget/plugins/modify/convert_magnet.py b/flexget/plugins/modify/convert_magnet.py
--- a/flexget/plugins/modify/convert_magnet.py
+++ b/flexget/plugins/modify/convert_magnet.py
@@ -33,8 +33,10 @@
import libtorrent
params = libtorrent.parse_magnet_uri(magnet_uri)
session = libtorrent.session()
- # for some reason the info_hash needs to be bytes but it's a struct called sha1_hash
- params['info_hash'] = bytes(params['info_hash'])
+ lt_version = [int(v) for v in libtorrent.version.split('.')]
+ if lt_version > [0,16,13,0]:
+ # for some reason the info_hash needs to be bytes but it's a struct called sha1_hash
+ params['info_hash'] = params['info_hash'].to_bytes()
handle = libtorrent.add_magnet_uri(session, magnet_uri, params)
log.debug('Acquiring torrent metadata for magnet %s', magnet_uri)
timeout_value = timeout
| {"golden_diff": "diff --git a/flexget/plugins/clients/aria2.py b/flexget/plugins/clients/aria2.py\n--- a/flexget/plugins/clients/aria2.py\n+++ b/flexget/plugins/clients/aria2.py\n@@ -111,9 +111,17 @@\n secret = 'token:%s' % config['secret']\n # handle torrent files\n if 'torrent' in entry:\n+ if 'file' in entry:\n+ torrent_file = entry['file']\n+ elif 'location' in entry:\n+ # in case download plugin moved the file elsewhere\n+ torrent_file = entry['location']\n+ else:\n+ entry.fail('Cannot find torrent file')\n+ return\n if secret:\n- return aria2.addTorrent(secret, xmlrpc.client.Binary(open(entry['file'], mode='rb').read()))\n- return aria2.addTorrent(xmlrpc.client.Binary(open(entry['file'], mode='rb').read()))\n+ return aria2.addTorrent(secret, xmlrpc.client.Binary(open(torrent_file, mode='rb').read()), [], options)\n+ return aria2.addTorrent(xmlrpc.client.Binary(open(torrent_file, mode='rb').read()), [], options)\n # handle everything else (except metalink -- which is unsupported)\n # so magnets, https, http, ftp .. etc\n if secret:\ndiff --git a/flexget/plugins/modify/convert_magnet.py b/flexget/plugins/modify/convert_magnet.py\n--- a/flexget/plugins/modify/convert_magnet.py\n+++ b/flexget/plugins/modify/convert_magnet.py\n@@ -33,8 +33,10 @@\n import libtorrent\n params = libtorrent.parse_magnet_uri(magnet_uri)\n session = libtorrent.session()\n- # for some reason the info_hash needs to be bytes but it's a struct called sha1_hash\n- params['info_hash'] = bytes(params['info_hash'])\n+ lt_version = [int(v) for v in libtorrent.version.split('.')]\n+ if lt_version > [0,16,13,0]:\n+ # for some reason the info_hash needs to be bytes but it's a struct called sha1_hash\n+ params['info_hash'] = params['info_hash'].to_bytes()\n handle = libtorrent.add_magnet_uri(session, magnet_uri, params)\n log.debug('Acquiring torrent metadata for magnet %s', magnet_uri)\n timeout_value = timeout\n", "issue": "Crash when using convert_magnet with aria2.\nThere is a crash when using convert_magnet with aria2.\r\n\r\n```\r\n2017-01-02 19:51 CRITICAL task discover_movies_hd BUG: Unhandled error in plugin aria2: u'file'\r\n2017-01-02 19:51 CRITICAL manager discover_movies_hd An unexpected crash has occurred. Writing crash report to /home/wyrm/.flexget/crash_report.2017.01.02.195150778857.log. Please verify you are running the latest version of flexget by using \"flexget -V\" from CLI or by using version_checker plugin at http://flexget.com/wiki/Plugins/version_checker. 
You are currently using version 2.8.17.dev\r\n2017-01-02 19:51 WARNING task discover_movies_hd Aborting task (plugin: aria2)\r\n\r\n```\r\nI don't have a crashlog, sorry.\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\nimport os\nimport time\nimport logging\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.tools import parse_timedelta\nfrom flexget.utils.pathscrub import pathscrub\n\nlog = logging.getLogger('convert_magnet')\n\n\nclass ConvertMagnet(object):\n \"\"\"Convert magnet only entries to a torrent file\"\"\"\n\n schema = {\n \"oneOf\": [\n # Allow convert_magnet: no form to turn off plugin altogether\n {\"type\": \"boolean\"},\n {\n \"type\": \"object\",\n \"properties\": {\n \"timeout\": {\"type\": \"string\", \"format\": \"interval\", \"default\": \"30 seconds\"},\n },\n \"additionalProperties\": False\n }\n ]\n }\n\n def magnet_to_torrent(self, magnet_uri, destination_folder, timeout):\n import libtorrent\n params = libtorrent.parse_magnet_uri(magnet_uri)\n session = libtorrent.session()\n # for some reason the info_hash needs to be bytes but it's a struct called sha1_hash\n params['info_hash'] = bytes(params['info_hash'])\n handle = libtorrent.add_magnet_uri(session, magnet_uri, params)\n log.debug('Acquiring torrent metadata for magnet %s', magnet_uri)\n timeout_value = timeout\n while not handle.has_metadata():\n time.sleep(0.1)\n timeout_value -= 0.1\n if timeout_value <= 0:\n raise plugin.PluginError('Timed out after {} seconds trying to magnetize'.format(timeout))\n log.debug('Metadata acquired')\n torrent_info = handle.get_torrent_info()\n torrent_file = libtorrent.create_torrent(torrent_info)\n torrent_path = pathscrub(os.path.join(destination_folder, torrent_info.name() + \".torrent\"))\n with open(torrent_path, \"wb\") as f:\n f.write(libtorrent.bencode(torrent_file.generate()))\n log.debug('Torrent file wrote to %s', torrent_path)\n return torrent_path\n\n def prepare_config(self, config):\n if not isinstance(config, dict):\n config = {}\n config.setdefault('timeout', '30 seconds')\n return config\n\n @plugin.priority(255)\n def on_task_start(self, task, config):\n if config is False:\n return\n try:\n import libtorrent # noqa\n except ImportError:\n raise plugin.DependencyError('convert_magnet', 'libtorrent', 'libtorrent package required', log)\n\n @plugin.priority(130)\n def on_task_download(self, task, config):\n if config is False:\n return\n config = self.prepare_config(config)\n # Create the conversion target directory\n converted_path = os.path.join(task.manager.config_base, 'converted')\n\n timeout = parse_timedelta(config['timeout']).total_seconds()\n\n if not os.path.isdir(converted_path):\n os.mkdir(converted_path)\n\n for entry in task.accepted:\n if entry['url'].startswith('magnet:'):\n entry.setdefault('urls', [entry['url']])\n try:\n log.info('Converting entry {} magnet URI to a torrent file'.format(entry['title']))\n torrent_file = self.magnet_to_torrent(entry['url'], converted_path, timeout)\n except (plugin.PluginError, TypeError) as e:\n log.error('Unable to convert Magnet URI for entry %s: %s', entry['title'], e)\n continue\n # Windows paths need an extra / prepended to them for url\n if not torrent_file.startswith('/'):\n torrent_file = '/' + torrent_file\n entry['url'] = torrent_file\n entry['file'] = torrent_file\n # make sure it's first in the list because of how download plugin 
works\n entry['urls'].insert(0, 'file://{}'.format(torrent_file))\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(ConvertMagnet, 'convert_magnet', api_ver=2)\n", "path": "flexget/plugins/modify/convert_magnet.py"}, {"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\n\nimport logging\nimport os\nimport xmlrpc.client\nfrom socket import error as socket_error\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.template import RenderError\n\nlog = logging.getLogger('aria2')\n\n\nclass OutputAria2(object):\n \"\"\"\n Simple Aria2 output\n\n Example::\n\n aria2:\n path: ~/downloads/\n\n \"\"\"\n\n schema = {\n 'type': 'object',\n 'properties': {\n 'server': {'type': 'string', 'default': 'localhost'},\n 'port': {'type': 'integer', 'default': 6800},\n 'secret': {'type': 'string', 'default': ''},\n 'username': {'type': 'string', 'default': ''}, # NOTE: To be deprecated by aria2\n 'password': {'type': 'string', 'default': ''},\n 'path': {'type': 'string'},\n 'options': {\n 'type': 'object',\n 'additionalProperties': {'oneOf': [{'type': 'string'}, {'type': 'integer'}]}\n }\n\n },\n 'required': ['path'],\n 'additionalProperties': False\n }\n\n def aria2_connection(self, server, port, username=None, password=None):\n if username and password:\n userpass = '%s:%s@' % (username, password)\n else:\n userpass = ''\n url = 'http://%s%s:%s/rpc' % (userpass, server, port)\n log.debug('aria2 url: %s' % url)\n log.info('Connecting to daemon at %s', url)\n try:\n return xmlrpc.client.ServerProxy(url).aria2\n except xmlrpc.client.ProtocolError as err:\n raise plugin.PluginError('Could not connect to aria2 at %s. 
Protocol error %s: %s'\n % (url, err.errcode, err.errmsg), log)\n except xmlrpc.client.Fault as err:\n raise plugin.PluginError('XML-RPC fault: Unable to connect to aria2 daemon at %s: %s'\n % (url, err.faultString), log)\n except socket_error as e:\n raise plugin.PluginError('Socket connection issue with aria2 daemon at %s: %s' % (url, e), log)\n except:\n log.debug('Unexpected error during aria2 connection', exc_info=True)\n raise plugin.PluginError('Unidentified error during connection to aria2 daemon', log)\n\n def prepare_config(self, config):\n config.setdefault('server', 'localhost')\n config.setdefault('port', 6800)\n config.setdefault('username', '')\n config.setdefault('password', '')\n config.setdefault('secret', '')\n config.setdefault('options', {})\n return config\n\n def on_task_output(self, task, config):\n # don't add when learning\n if task.options.learn:\n return\n config = self.prepare_config(config)\n aria2 = self.aria2_connection(config['server'], config['port'],\n config['username'], config['password'])\n for entry in task.accepted:\n if task.options.test:\n log.verbose('Would add `%s` to aria2.', entry['title'])\n continue\n try:\n self.add_entry(aria2, entry, config)\n except socket_error as se:\n entry.fail('Unable to reach Aria2: %s', se)\n except xmlrpc.client.Fault as err:\n log.critical('Fault code %s message %s', err.faultCode, err.faultString)\n entry.fail('Aria2 communication Fault')\n except Exception as e:\n log.debug('Exception type %s', type(e), exc_info=True)\n raise\n\n def add_entry(self, aria2, entry, config):\n \"\"\"\n Add entry to Aria2\n \"\"\"\n options = config['options']\n try:\n options['dir'] = os.path.expanduser(entry.render(config['path']).rstrip('/'))\n except RenderError as e:\n entry.fail('failed to render \\'path\\': %s' % e)\n return\n secret = None\n if config['secret']:\n secret = 'token:%s' % config['secret']\n # handle torrent files\n if 'torrent' in entry:\n if secret:\n return aria2.addTorrent(secret, xmlrpc.client.Binary(open(entry['file'], mode='rb').read()))\n return aria2.addTorrent(xmlrpc.client.Binary(open(entry['file'], mode='rb').read()))\n # handle everything else (except metalink -- which is unsupported)\n # so magnets, https, http, ftp .. 
etc\n if secret:\n return aria2.addUri(secret, [entry['url']], options)\n return aria2.addUri([entry['url']], options)\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(OutputAria2, 'aria2', api_ver=2)\n", "path": "flexget/plugins/clients/aria2.py"}], "after_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\nimport os\nimport time\nimport logging\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.tools import parse_timedelta\nfrom flexget.utils.pathscrub import pathscrub\n\nlog = logging.getLogger('convert_magnet')\n\n\nclass ConvertMagnet(object):\n \"\"\"Convert magnet only entries to a torrent file\"\"\"\n\n schema = {\n \"oneOf\": [\n # Allow convert_magnet: no form to turn off plugin altogether\n {\"type\": \"boolean\"},\n {\n \"type\": \"object\",\n \"properties\": {\n \"timeout\": {\"type\": \"string\", \"format\": \"interval\", \"default\": \"30 seconds\"},\n },\n \"additionalProperties\": False\n }\n ]\n }\n\n def magnet_to_torrent(self, magnet_uri, destination_folder, timeout):\n import libtorrent\n params = libtorrent.parse_magnet_uri(magnet_uri)\n session = libtorrent.session()\n lt_version = [int(v) for v in libtorrent.version.split('.')]\n if lt_version > [0,16,13,0]:\n # for some reason the info_hash needs to be bytes but it's a struct called sha1_hash\n params['info_hash'] = params['info_hash'].to_bytes()\n handle = libtorrent.add_magnet_uri(session, magnet_uri, params)\n log.debug('Acquiring torrent metadata for magnet %s', magnet_uri)\n timeout_value = timeout\n while not handle.has_metadata():\n time.sleep(0.1)\n timeout_value -= 0.1\n if timeout_value <= 0:\n raise plugin.PluginError('Timed out after {} seconds trying to magnetize'.format(timeout))\n log.debug('Metadata acquired')\n torrent_info = handle.get_torrent_info()\n torrent_file = libtorrent.create_torrent(torrent_info)\n torrent_path = pathscrub(os.path.join(destination_folder, torrent_info.name() + \".torrent\"))\n with open(torrent_path, \"wb\") as f:\n f.write(libtorrent.bencode(torrent_file.generate()))\n log.debug('Torrent file wrote to %s', torrent_path)\n return torrent_path\n\n def prepare_config(self, config):\n if not isinstance(config, dict):\n config = {}\n config.setdefault('timeout', '30 seconds')\n return config\n\n @plugin.priority(255)\n def on_task_start(self, task, config):\n if config is False:\n return\n try:\n import libtorrent # noqa\n except ImportError:\n raise plugin.DependencyError('convert_magnet', 'libtorrent', 'libtorrent package required', log)\n\n @plugin.priority(130)\n def on_task_download(self, task, config):\n if config is False:\n return\n config = self.prepare_config(config)\n # Create the conversion target directory\n converted_path = os.path.join(task.manager.config_base, 'converted')\n\n timeout = parse_timedelta(config['timeout']).total_seconds()\n\n if not os.path.isdir(converted_path):\n os.mkdir(converted_path)\n\n for entry in task.accepted:\n if entry['url'].startswith('magnet:'):\n entry.setdefault('urls', [entry['url']])\n try:\n log.info('Converting entry {} magnet URI to a torrent file'.format(entry['title']))\n torrent_file = self.magnet_to_torrent(entry['url'], converted_path, timeout)\n except (plugin.PluginError, TypeError) as e:\n log.error('Unable to convert Magnet URI for entry %s: %s', entry['title'], e)\n continue\n # Windows paths need an extra / prepended to them for url\n if 
not torrent_file.startswith('/'):\n torrent_file = '/' + torrent_file\n entry['url'] = torrent_file\n entry['file'] = torrent_file\n # make sure it's first in the list because of how download plugin works\n entry['urls'].insert(0, 'file://{}'.format(torrent_file))\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(ConvertMagnet, 'convert_magnet', api_ver=2)\n", "path": "flexget/plugins/modify/convert_magnet.py"}, {"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\n\nimport logging\nimport os\nimport xmlrpc.client\nfrom socket import error as socket_error\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.template import RenderError\n\nlog = logging.getLogger('aria2')\n\n\nclass OutputAria2(object):\n \"\"\"\n Simple Aria2 output\n\n Example::\n\n aria2:\n path: ~/downloads/\n\n \"\"\"\n\n schema = {\n 'type': 'object',\n 'properties': {\n 'server': {'type': 'string', 'default': 'localhost'},\n 'port': {'type': 'integer', 'default': 6800},\n 'secret': {'type': 'string', 'default': ''},\n 'username': {'type': 'string', 'default': ''}, # NOTE: To be deprecated by aria2\n 'password': {'type': 'string', 'default': ''},\n 'path': {'type': 'string'},\n 'options': {\n 'type': 'object',\n 'additionalProperties': {'oneOf': [{'type': 'string'}, {'type': 'integer'}]}\n }\n\n },\n 'required': ['path'],\n 'additionalProperties': False\n }\n\n def aria2_connection(self, server, port, username=None, password=None):\n if username and password:\n userpass = '%s:%s@' % (username, password)\n else:\n userpass = ''\n url = 'http://%s%s:%s/rpc' % (userpass, server, port)\n log.debug('aria2 url: %s' % url)\n log.info('Connecting to daemon at %s', url)\n try:\n return xmlrpc.client.ServerProxy(url).aria2\n except xmlrpc.client.ProtocolError as err:\n raise plugin.PluginError('Could not connect to aria2 at %s. 
Protocol error %s: %s'\n % (url, err.errcode, err.errmsg), log)\n except xmlrpc.client.Fault as err:\n raise plugin.PluginError('XML-RPC fault: Unable to connect to aria2 daemon at %s: %s'\n % (url, err.faultString), log)\n except socket_error as e:\n raise plugin.PluginError('Socket connection issue with aria2 daemon at %s: %s' % (url, e), log)\n except:\n log.debug('Unexpected error during aria2 connection', exc_info=True)\n raise plugin.PluginError('Unidentified error during connection to aria2 daemon', log)\n\n def prepare_config(self, config):\n config.setdefault('server', 'localhost')\n config.setdefault('port', 6800)\n config.setdefault('username', '')\n config.setdefault('password', '')\n config.setdefault('secret', '')\n config.setdefault('options', {})\n return config\n\n def on_task_output(self, task, config):\n # don't add when learning\n if task.options.learn:\n return\n config = self.prepare_config(config)\n aria2 = self.aria2_connection(config['server'], config['port'],\n config['username'], config['password'])\n for entry in task.accepted:\n if task.options.test:\n log.verbose('Would add `%s` to aria2.', entry['title'])\n continue\n try:\n self.add_entry(aria2, entry, config)\n except socket_error as se:\n entry.fail('Unable to reach Aria2: %s', se)\n except xmlrpc.client.Fault as err:\n log.critical('Fault code %s message %s', err.faultCode, err.faultString)\n entry.fail('Aria2 communication Fault')\n except Exception as e:\n log.debug('Exception type %s', type(e), exc_info=True)\n raise\n\n def add_entry(self, aria2, entry, config):\n \"\"\"\n Add entry to Aria2\n \"\"\"\n options = config['options']\n try:\n options['dir'] = os.path.expanduser(entry.render(config['path']).rstrip('/'))\n except RenderError as e:\n entry.fail('failed to render \\'path\\': %s' % e)\n return\n secret = None\n if config['secret']:\n secret = 'token:%s' % config['secret']\n # handle torrent files\n if 'torrent' in entry:\n if 'file' in entry:\n torrent_file = entry['file']\n elif 'location' in entry:\n # in case download plugin moved the file elsewhere\n torrent_file = entry['location']\n else:\n entry.fail('Cannot find torrent file')\n return\n if secret:\n return aria2.addTorrent(secret, xmlrpc.client.Binary(open(torrent_file, mode='rb').read()), [], options)\n return aria2.addTorrent(xmlrpc.client.Binary(open(torrent_file, mode='rb').read()), [], options)\n # handle everything else (except metalink -- which is unsupported)\n # so magnets, https, http, ftp .. etc\n if secret:\n return aria2.addUri(secret, [entry['url']], options)\n return aria2.addUri([entry['url']], options)\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(OutputAria2, 'aria2', api_ver=2)\n", "path": "flexget/plugins/clients/aria2.py"}]} | 2,962 | 536 |
gh_patches_debug_16878 | rasdani/github-patches | git_diff | ansible__ansible-modules-core-4747 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ini_file - Insert missing option line before blank lines at the end of the section
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ini_file
##### ANSIBLE VERSION
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
##### SUMMARY
New lines are added to an existing section after blank lines separating sections, but should instead be added before blank lines at the end of a section.
##### STEPS TO REPRODUCE
Use ini_file to add a new line to a file.
Given test.ini:
```
[sect1]
opt1 = val1

[sect2]
opt2 = val2
```
Run this test command:
```
ansible -c local -m ini_file -a 'dest=test.ini section=sect1 option=opt3 value=val3' localhost
```
##### EXPECTED RESULTS
test.ini should look like this:
```
[sect1]
opt1 = val1
opt3 = val3

[sect2]
opt2 = val2
```
##### ACTUAL RESULTS
This file is still technically correct but just looks a bit misleading as opt3 is grouped closer to [sect2].
```
[sect1]
opt1 = val1

opt3 = val3
[sect2]
opt2 = val2
```
--- END ISSUE ---
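One way to get the expected layout (a sketch of the insertion logic only, not necessarily the patch that was merged) is to walk back over trailing blank lines before inserting, in the branch of `do_ini()` that fires once the next `[section]` header is reached (see the module source below):
```python
# Sketch: insert the new option before any blank lines that end the section,
# instead of directly in front of the next section header.
if state == 'present':
    insert_at = index
    while insert_at > section_start and not ini_lines[insert_at - 1].strip():
        insert_at -= 1
    ini_lines.insert(insert_at, assignment_format % (option, value))
    changed = True
```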
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `files/ini_file.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
5 # (c) 2015, Ales Nosek <anosek.nosek () gmail.com>
6 #
7 # This file is part of Ansible
8 #
9 # Ansible is free software: you can redistribute it and/or modify
10 # it under the terms of the GNU General Public License as published by
11 # the Free Software Foundation, either version 3 of the License, or
12 # (at your option) any later version.
13 #
14 # Ansible is distributed in the hope that it will be useful,
15 # but WITHOUT ANY WARRANTY; without even the implied warranty of
16 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 # GNU General Public License for more details.
18 #
19 # You should have received a copy of the GNU General Public License
20 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
21 #
22
23 DOCUMENTATION = '''
24 ---
25 module: ini_file
26 short_description: Tweak settings in INI files
27 extends_documentation_fragment: files
28 description:
29 - Manage (add, remove, change) individual settings in an INI-style file without having
30 to manage the file as a whole with, say, M(template) or M(assemble). Adds missing
31 sections if they don't exist.
32 - Before version 2.0, comments are discarded when the source file is read, and therefore will not show up in the destination file.
33 version_added: "0.9"
34 options:
35 dest:
36 description:
37 - Path to the INI-style file; this file is created if required
38 required: true
39 default: null
40 section:
41 description:
42 - Section name in INI file. This is added if C(state=present) automatically when
43 a single value is being set.
44 required: true
45 default: null
46 option:
47 description:
48 - if set (required for changing a I(value)), this is the name of the option.
49 - May be omitted if adding/removing a whole I(section).
50 required: false
51 default: null
52 value:
53 description:
54 - the string value to be associated with an I(option). May be omitted when removing an I(option).
55 required: false
56 default: null
57 backup:
58 description:
59 - Create a backup file including the timestamp information so you can get
60 the original file back if you somehow clobbered it incorrectly.
61 required: false
62 default: "no"
63 choices: [ "yes", "no" ]
64 others:
65 description:
66 - all arguments accepted by the M(file) module also work here
67 required: false
68 state:
69 description:
70 - If set to C(absent) the option or section will be removed if present instead of created.
71 required: false
72 default: "present"
73 choices: [ "present", "absent" ]
74 no_extra_spaces:
75 description:
76 - do not insert spaces before and after '=' symbol
77 required: false
78 default: false
79 version_added: "2.1"
80 notes:
81 - While it is possible to add an I(option) without specifying a I(value), this makes
82 no sense.
83 - A section named C(default) cannot be added by the module, but if it exists, individual
84 options within the section can be updated. (This is a limitation of Python's I(ConfigParser).)
85 Either use M(template) to create a base INI file with a C([default]) section, or use
86 M(lineinfile) to add the missing line.
87 requirements: [ ConfigParser ]
88 author:
89 - "Jan-Piet Mens (@jpmens)"
90 - "Ales Nosek (@noseka1)"
91 '''
92
93 EXAMPLES = '''
94 # Ensure "fav=lemonade is in section "[drinks]" in specified file
95 - ini_file: dest=/etc/conf section=drinks option=fav value=lemonade mode=0600 backup=yes
96
97 - ini_file: dest=/etc/anotherconf
98 section=drinks
99 option=temperature
100 value=cold
101 backup=yes
102 '''
103
104 import os
105 import re
106
107 # ==============================================================
108 # match_opt
109
110 def match_opt(option, line):
111 option = re.escape(option)
112 return re.match('%s( |\t)*=' % option, line) \
113 or re.match('# *%s( |\t)*=' % option, line) \
114 or re.match('; *%s( |\t)*=' % option, line)
115
116 # ==============================================================
117 # match_active_opt
118
119 def match_active_opt(option, line):
120 option = re.escape(option)
121 return re.match('%s( |\t)*=' % option, line)
122
123 # ==============================================================
124 # do_ini
125
126 def do_ini(module, filename, section=None, option=None, value=None, state='present', backup=False, no_extra_spaces=False):
127
128
129 if not os.path.exists(filename):
130 try:
131 open(filename,'w').close()
132 except:
133 module.fail_json(msg="Destination file %s not writable" % filename)
134 ini_file = open(filename, 'r')
135 try:
136 ini_lines = ini_file.readlines()
137 # append a fake section line to simplify the logic
138 ini_lines.append('[')
139 finally:
140 ini_file.close()
141
142 within_section = not section
143 section_start = 0
144 changed = False
145 if no_extra_spaces:
146 assignment_format = '%s=%s\n'
147 else:
148 assignment_format = '%s = %s\n'
149
150 for index, line in enumerate(ini_lines):
151 if line.startswith('[%s]' % section):
152 within_section = True
153 section_start = index
154 elif line.startswith('['):
155 if within_section:
156 if state == 'present':
157 # insert missing option line at the end of the section
158 ini_lines.insert(index, assignment_format % (option, value))
159 changed = True
160 elif state == 'absent' and not option:
161 # remove the entire section
162 del ini_lines[section_start:index]
163 changed = True
164 break
165 else:
166 if within_section and option:
167 if state == 'present':
168 # change the existing option line
169 if match_opt(option, line):
170 newline = assignment_format % (option, value)
171 changed = ini_lines[index] != newline
172 ini_lines[index] = newline
173 if changed:
174 # remove all possible option occurences from the rest of the section
175 index = index + 1
176 while index < len(ini_lines):
177 line = ini_lines[index]
178 if line.startswith('['):
179 break
180 if match_active_opt(option, line):
181 del ini_lines[index]
182 else:
183 index = index + 1
184 break
185 else:
186 # comment out the existing option line
187 if match_active_opt(option, line):
188 ini_lines[index] = '#%s' % ini_lines[index]
189 changed = True
190 break
191
192 # remove the fake section line
193 del ini_lines[-1:]
194
195 if not within_section and option and state == 'present':
196 ini_lines.append('[%s]\n' % section)
197 ini_lines.append(assignment_format % (option, value))
198 changed = True
199
200
201 backup_file = None
202 if changed and not module.check_mode:
203 if backup:
204 backup_file = module.backup_local(filename)
205 ini_file = open(filename, 'w')
206 try:
207 ini_file.writelines(ini_lines)
208 finally:
209 ini_file.close()
210
211 return (changed, backup_file)
212
213 # ==============================================================
214 # main
215
216 def main():
217
218 module = AnsibleModule(
219 argument_spec = dict(
220 dest = dict(required=True),
221 section = dict(required=True),
222 option = dict(required=False),
223 value = dict(required=False),
224 backup = dict(default='no', type='bool'),
225 state = dict(default='present', choices=['present', 'absent']),
226 no_extra_spaces = dict(required=False, default=False, type='bool')
227 ),
228 add_file_common_args = True,
229 supports_check_mode = True
230 )
231
232 dest = os.path.expanduser(module.params['dest'])
233 section = module.params['section']
234 option = module.params['option']
235 value = module.params['value']
236 state = module.params['state']
237 backup = module.params['backup']
238 no_extra_spaces = module.params['no_extra_spaces']
239
240 (changed,backup_file) = do_ini(module, dest, section, option, value, state, backup, no_extra_spaces)
241
242 file_args = module.load_file_common_arguments(module.params)
243 changed = module.set_fs_attributes_if_different(file_args, changed)
244
245 results = { 'changed': changed, 'msg': "OK", 'dest': dest }
246 if backup_file is not None:
247 results['backup_file'] = backup_file
248
249 # Mission complete
250 module.exit_json(**results)
251
252 # import module snippets
253 from ansible.module_utils.basic import *
254 if __name__ == '__main__':
255 main()
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/files/ini_file.py b/files/ini_file.py
--- a/files/ini_file.py
+++ b/files/ini_file.py
@@ -155,8 +155,12 @@
if within_section:
if state == 'present':
# insert missing option line at the end of the section
- ini_lines.insert(index, assignment_format % (option, value))
- changed = True
+ for i in range(index, 0, -1):
+ # search backwards for previous non-blank or non-comment line
+ if not re.match(r'^[ \t]*([#;].*)?$', ini_lines[i - 1]):
+ ini_lines.insert(i, assignment_format % (option, value))
+ changed = True
+ break
elif state == 'absent' and not option:
# remove the entire section
del ini_lines[section_start:index]
| {"golden_diff": "diff --git a/files/ini_file.py b/files/ini_file.py\n--- a/files/ini_file.py\n+++ b/files/ini_file.py\n@@ -155,8 +155,12 @@\n if within_section:\n if state == 'present':\n # insert missing option line at the end of the section\n- ini_lines.insert(index, assignment_format % (option, value))\n- changed = True\n+ for i in range(index, 0, -1):\n+ # search backwards for previous non-blank or non-comment line\n+ if not re.match(r'^[ \\t]*([#;].*)?$', ini_lines[i - 1]):\n+ ini_lines.insert(i, assignment_format % (option, value))\n+ changed = True\n+ break\n elif state == 'absent' and not option:\n # remove the entire section\n del ini_lines[section_start:index]\n", "issue": "ini_file - Insert missing option line before blank lines at the end of the section\n##### ISSUE TYPE\n- Bug Report\n##### COMPONENT NAME\n\nini_file\n##### ANSIBLE VERSION\n\n```\nansible 2.1.1.0\n config file = /etc/ansible/ansible.cfg\n configured module search path = Default w/o overrides\n```\n##### CONFIGURATION\n\nN/A\n##### OS / ENVIRONMENT\n\nN/A\n##### SUMMARY\n\n<!--- Explain the problem briefly -->\n\nNew lines are added to an existing section after blank lines separating sections, but should instead be added before blank lines at the end of a section.\n##### STEPS TO REPRODUCE\n\nUse ini_file to add a new line to a file.\n\nGiven test.ini:\n\n```\n[sect1]\nopt1 = val1\n\n[sect2]\nopt2 = val2\n```\n\nRun this test command:\n\n```\nansible -c local -m ini_file -a 'dest=test.ini section=sect1 option=opt13 value=val13' localhost\n```\n##### EXPECTED RESULTS\n\ntest.ini should look like this:\n\n```\n[sect1]\nopt1 = val1\nopt3 = val3\n\n[sect2]\nopt2 = val2\n```\n##### ACTUAL RESULTS\n\nThis file is still technically correct but just looks a bit misleading as opt3 is grouped closer to [sect2].\n\n```\n[sect1]\nopt1 = val1\n\nopt3 = val3\n[sect2]\nopt2 = val2\n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2012, Jan-Piet Mens <jpmens () gmail.com>\n# (c) 2015, Ales Nosek <anosek.nosek () gmail.com>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nDOCUMENTATION = '''\n---\nmodule: ini_file\nshort_description: Tweak settings in INI files\nextends_documentation_fragment: files\ndescription:\n - Manage (add, remove, change) individual settings in an INI-style file without having\n to manage the file as a whole with, say, M(template) or M(assemble). Adds missing\n sections if they don't exist.\n - Before version 2.0, comments are discarded when the source file is read, and therefore will not show up in the destination file.\nversion_added: \"0.9\"\noptions:\n dest:\n description:\n - Path to the INI-style file; this file is created if required\n required: true\n default: null\n section:\n description:\n - Section name in INI file. 
This is added if C(state=present) automatically when\n a single value is being set.\n required: true\n default: null\n option:\n description:\n - if set (required for changing a I(value)), this is the name of the option.\n - May be omitted if adding/removing a whole I(section).\n required: false\n default: null\n value:\n description:\n - the string value to be associated with an I(option). May be omitted when removing an I(option).\n required: false\n default: null\n backup:\n description:\n - Create a backup file including the timestamp information so you can get\n the original file back if you somehow clobbered it incorrectly.\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n others:\n description:\n - all arguments accepted by the M(file) module also work here\n required: false\n state:\n description:\n - If set to C(absent) the option or section will be removed if present instead of created.\n required: false\n default: \"present\"\n choices: [ \"present\", \"absent\" ]\n no_extra_spaces:\n description:\n - do not insert spaces before and after '=' symbol\n required: false\n default: false\n version_added: \"2.1\"\nnotes:\n - While it is possible to add an I(option) without specifying a I(value), this makes\n no sense.\n - A section named C(default) cannot be added by the module, but if it exists, individual\n options within the section can be updated. (This is a limitation of Python's I(ConfigParser).)\n Either use M(template) to create a base INI file with a C([default]) section, or use\n M(lineinfile) to add the missing line.\nrequirements: [ ConfigParser ]\nauthor:\n - \"Jan-Piet Mens (@jpmens)\"\n - \"Ales Nosek (@noseka1)\"\n'''\n\nEXAMPLES = '''\n# Ensure \"fav=lemonade is in section \"[drinks]\" in specified file\n- ini_file: dest=/etc/conf section=drinks option=fav value=lemonade mode=0600 backup=yes\n\n- ini_file: dest=/etc/anotherconf\n section=drinks\n option=temperature\n value=cold\n backup=yes\n'''\n\nimport os\nimport re\n\n# ==============================================================\n# match_opt\n\ndef match_opt(option, line):\n option = re.escape(option)\n return re.match('%s( |\\t)*=' % option, line) \\\n or re.match('# *%s( |\\t)*=' % option, line) \\\n or re.match('; *%s( |\\t)*=' % option, line)\n\n# ==============================================================\n# match_active_opt\n\ndef match_active_opt(option, line):\n option = re.escape(option)\n return re.match('%s( |\\t)*=' % option, line)\n\n# ==============================================================\n# do_ini\n\ndef do_ini(module, filename, section=None, option=None, value=None, state='present', backup=False, no_extra_spaces=False):\n\n\n if not os.path.exists(filename):\n try:\n open(filename,'w').close()\n except:\n module.fail_json(msg=\"Destination file %s not writable\" % filename)\n ini_file = open(filename, 'r')\n try:\n ini_lines = ini_file.readlines()\n # append a fake section line to simplify the logic\n ini_lines.append('[')\n finally:\n ini_file.close()\n\n within_section = not section\n section_start = 0\n changed = False\n if no_extra_spaces:\n assignment_format = '%s=%s\\n'\n else:\n assignment_format = '%s = %s\\n'\n\n for index, line in enumerate(ini_lines):\n if line.startswith('[%s]' % section):\n within_section = True\n section_start = index\n elif line.startswith('['):\n if within_section:\n if state == 'present':\n # insert missing option line at the end of the section\n ini_lines.insert(index, assignment_format % (option, value))\n changed = True\n elif 
state == 'absent' and not option:\n # remove the entire section\n del ini_lines[section_start:index]\n changed = True\n break\n else:\n if within_section and option:\n if state == 'present':\n # change the existing option line\n if match_opt(option, line):\n newline = assignment_format % (option, value)\n changed = ini_lines[index] != newline\n ini_lines[index] = newline\n if changed:\n # remove all possible option occurences from the rest of the section\n index = index + 1\n while index < len(ini_lines):\n line = ini_lines[index]\n if line.startswith('['):\n break\n if match_active_opt(option, line):\n del ini_lines[index]\n else:\n index = index + 1\n break\n else:\n # comment out the existing option line\n if match_active_opt(option, line):\n ini_lines[index] = '#%s' % ini_lines[index]\n changed = True\n break\n\n # remove the fake section line\n del ini_lines[-1:]\n\n if not within_section and option and state == 'present':\n ini_lines.append('[%s]\\n' % section)\n ini_lines.append(assignment_format % (option, value))\n changed = True\n\n\n backup_file = None\n if changed and not module.check_mode:\n if backup:\n backup_file = module.backup_local(filename)\n ini_file = open(filename, 'w')\n try:\n ini_file.writelines(ini_lines)\n finally:\n ini_file.close()\n\n return (changed, backup_file)\n\n# ==============================================================\n# main\n\ndef main():\n\n module = AnsibleModule(\n argument_spec = dict(\n dest = dict(required=True),\n section = dict(required=True),\n option = dict(required=False),\n value = dict(required=False),\n backup = dict(default='no', type='bool'),\n state = dict(default='present', choices=['present', 'absent']),\n no_extra_spaces = dict(required=False, default=False, type='bool')\n ),\n add_file_common_args = True,\n supports_check_mode = True\n )\n\n dest = os.path.expanduser(module.params['dest'])\n section = module.params['section']\n option = module.params['option']\n value = module.params['value']\n state = module.params['state']\n backup = module.params['backup']\n no_extra_spaces = module.params['no_extra_spaces']\n\n (changed,backup_file) = do_ini(module, dest, section, option, value, state, backup, no_extra_spaces)\n\n file_args = module.load_file_common_arguments(module.params)\n changed = module.set_fs_attributes_if_different(file_args, changed)\n\n results = { 'changed': changed, 'msg': \"OK\", 'dest': dest }\n if backup_file is not None:\n results['backup_file'] = backup_file\n\n # Mission complete\n module.exit_json(**results)\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nif __name__ == '__main__':\n main()\n", "path": "files/ini_file.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2012, Jan-Piet Mens <jpmens () gmail.com>\n# (c) 2015, Ales Nosek <anosek.nosek () gmail.com>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\n#\n\nDOCUMENTATION = '''\n---\nmodule: ini_file\nshort_description: Tweak settings in INI files\nextends_documentation_fragment: files\ndescription:\n - Manage (add, remove, change) individual settings in an INI-style file without having\n to manage the file as a whole with, say, M(template) or M(assemble). Adds missing\n sections if they don't exist.\n - Before version 2.0, comments are discarded when the source file is read, and therefore will not show up in the destination file.\nversion_added: \"0.9\"\noptions:\n dest:\n description:\n - Path to the INI-style file; this file is created if required\n required: true\n default: null\n section:\n description:\n - Section name in INI file. This is added if C(state=present) automatically when\n a single value is being set.\n required: true\n default: null\n option:\n description:\n - if set (required for changing a I(value)), this is the name of the option.\n - May be omitted if adding/removing a whole I(section).\n required: false\n default: null\n value:\n description:\n - the string value to be associated with an I(option). May be omitted when removing an I(option).\n required: false\n default: null\n backup:\n description:\n - Create a backup file including the timestamp information so you can get\n the original file back if you somehow clobbered it incorrectly.\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n others:\n description:\n - all arguments accepted by the M(file) module also work here\n required: false\n state:\n description:\n - If set to C(absent) the option or section will be removed if present instead of created.\n required: false\n default: \"present\"\n choices: [ \"present\", \"absent\" ]\n no_extra_spaces:\n description:\n - do not insert spaces before and after '=' symbol\n required: false\n default: false\n version_added: \"2.1\"\nnotes:\n - While it is possible to add an I(option) without specifying a I(value), this makes\n no sense.\n - A section named C(default) cannot be added by the module, but if it exists, individual\n options within the section can be updated. 
(This is a limitation of Python's I(ConfigParser).)\n Either use M(template) to create a base INI file with a C([default]) section, or use\n M(lineinfile) to add the missing line.\nrequirements: [ ConfigParser ]\nauthor:\n - \"Jan-Piet Mens (@jpmens)\"\n - \"Ales Nosek (@noseka1)\"\n'''\n\nEXAMPLES = '''\n# Ensure \"fav=lemonade is in section \"[drinks]\" in specified file\n- ini_file: dest=/etc/conf section=drinks option=fav value=lemonade mode=0600 backup=yes\n\n- ini_file: dest=/etc/anotherconf\n section=drinks\n option=temperature\n value=cold\n backup=yes\n'''\n\nimport os\nimport re\n\n# ==============================================================\n# match_opt\n\ndef match_opt(option, line):\n option = re.escape(option)\n return re.match('%s( |\\t)*=' % option, line) \\\n or re.match('# *%s( |\\t)*=' % option, line) \\\n or re.match('; *%s( |\\t)*=' % option, line)\n\n# ==============================================================\n# match_active_opt\n\ndef match_active_opt(option, line):\n option = re.escape(option)\n return re.match('%s( |\\t)*=' % option, line)\n\n# ==============================================================\n# do_ini\n\ndef do_ini(module, filename, section=None, option=None, value=None, state='present', backup=False, no_extra_spaces=False):\n\n\n if not os.path.exists(filename):\n try:\n open(filename,'w').close()\n except:\n module.fail_json(msg=\"Destination file %s not writable\" % filename)\n ini_file = open(filename, 'r')\n try:\n ini_lines = ini_file.readlines()\n # append a fake section line to simplify the logic\n ini_lines.append('[')\n finally:\n ini_file.close()\n\n within_section = not section\n section_start = 0\n changed = False\n if no_extra_spaces:\n assignment_format = '%s=%s\\n'\n else:\n assignment_format = '%s = %s\\n'\n\n for index, line in enumerate(ini_lines):\n if line.startswith('[%s]' % section):\n within_section = True\n section_start = index\n elif line.startswith('['):\n if within_section:\n if state == 'present':\n # insert missing option line at the end of the section\n for i in range(index, 0, -1):\n # search backwards for previous non-blank or non-comment line\n if not re.match(r'^[ \\t]*([#;].*)?$', ini_lines[i - 1]):\n ini_lines.insert(i, assignment_format % (option, value))\n changed = True\n break\n elif state == 'absent' and not option:\n # remove the entire section\n del ini_lines[section_start:index]\n changed = True\n break\n else:\n if within_section and option:\n if state == 'present':\n # change the existing option line\n if match_opt(option, line):\n newline = assignment_format % (option, value)\n changed = ini_lines[index] != newline\n ini_lines[index] = newline\n if changed:\n # remove all possible option occurences from the rest of the section\n index = index + 1\n while index < len(ini_lines):\n line = ini_lines[index]\n if line.startswith('['):\n break\n if match_active_opt(option, line):\n del ini_lines[index]\n else:\n index = index + 1\n break\n else:\n # comment out the existing option line\n if match_active_opt(option, line):\n ini_lines[index] = '#%s' % ini_lines[index]\n changed = True\n break\n\n # remove the fake section line\n del ini_lines[-1:]\n\n if not within_section and option and state == 'present':\n ini_lines.append('[%s]\\n' % section)\n ini_lines.append(assignment_format % (option, value))\n changed = True\n\n\n backup_file = None\n if changed and not module.check_mode:\n if backup:\n backup_file = module.backup_local(filename)\n ini_file = open(filename, 'w')\n try:\n 
ini_file.writelines(ini_lines)\n finally:\n ini_file.close()\n\n return (changed, backup_file)\n\n# ==============================================================\n# main\n\ndef main():\n\n module = AnsibleModule(\n argument_spec = dict(\n dest = dict(required=True),\n section = dict(required=True),\n option = dict(required=False),\n value = dict(required=False),\n backup = dict(default='no', type='bool'),\n state = dict(default='present', choices=['present', 'absent']),\n no_extra_spaces = dict(required=False, default=False, type='bool')\n ),\n add_file_common_args = True,\n supports_check_mode = True\n )\n\n dest = os.path.expanduser(module.params['dest'])\n section = module.params['section']\n option = module.params['option']\n value = module.params['value']\n state = module.params['state']\n backup = module.params['backup']\n no_extra_spaces = module.params['no_extra_spaces']\n\n (changed,backup_file) = do_ini(module, dest, section, option, value, state, backup, no_extra_spaces)\n\n file_args = module.load_file_common_arguments(module.params)\n changed = module.set_fs_attributes_if_different(file_args, changed)\n\n results = { 'changed': changed, 'msg': \"OK\", 'dest': dest }\n if backup_file is not None:\n results['backup_file'] = backup_file\n\n # Mission complete\n module.exit_json(**results)\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nif __name__ == '__main__':\n main()\n", "path": "files/ini_file.py"}]} | 3,192 | 200 |
gh_patches_debug_35792 | rasdani/github-patches | git_diff | joke2k__faker-677 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update user_agent chrome version
Right now provider **user_agent** can return chrome version between 13-15 which is too small (for example latest stable version is 63). I want to create PR to fix this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/user_agent/__init__.py`
Content:
```
1 # coding=utf-8
2
3 from __future__ import unicode_literals
4
5 from datetime import datetime
6
7 from .. import BaseProvider
8
9
10 class Provider(BaseProvider):
11 user_agents = (
12 'chrome', 'firefox', 'internet_explorer', 'opera', 'safari',
13 )
14
15 windows_platform_tokens = (
16 'Windows 95', 'Windows 98', 'Windows 98; Win 9x 4.90', 'Windows CE',
17 'Windows NT 4.0', 'Windows NT 5.0', 'Windows NT 5.01',
18 'Windows NT 5.1', 'Windows NT 5.2', 'Windows NT 6.0', 'Windows NT 6.1',
19 'Windows NT 6.2',
20 )
21
22 linux_processors = ('i686', 'x86_64',)
23
24 mac_processors = ('Intel', 'PPC', 'U; Intel', 'U; PPC',)
25
26 def mac_processor(self):
27 return self.random_element(self.mac_processors)
28
29 def linux_processor(self):
30 return self.random_element(self.linux_processors)
31
32 def user_agent(self):
33 name = self.random_element(self.user_agents)
34 return getattr(self, name)()
35
36 def chrome(self):
37 saf = str(self.generator.random.randint(531, 536)) + \
38 str(self.generator.random.randint(0, 2))
39 tmplt = '({0}) AppleWebKit/{1} (KHTML, like Gecko)' \
40 ' Chrome/{2}.0.{3}.0 Safari/{4}'
41 platforms = (
42 tmplt.format(self.linux_platform_token(),
43 saf,
44 self.generator.random.randint(13, 15),
45 self.generator.random.randint(800, 899),
46 saf),
47 tmplt.format(self.windows_platform_token(),
48 saf,
49 self.generator.random.randint(13, 15),
50 self.generator.random.randint(800, 899),
51 saf),
52 tmplt.format(self.mac_platform_token(),
53 saf,
54 self.generator.random.randint(13, 15),
55 self.generator.random.randint(800, 899),
56 saf),
57 )
58
59 return 'Mozilla/5.0 ' + self.random_element(platforms)
60
61 def firefox(self):
62 ver = (
63 'Gecko/{0} Firefox/{1}.0'.format(
64 self.generator.date_time_between(
65 datetime(2011, 1, 1)
66 ),
67 self.generator.random.randint(4, 15)
68 ),
69 'Gecko/{0} Firefox/3.6.{1}'.format(
70 self.generator.date_time_between(
71 datetime(2010, 1, 1)
72 ),
73 self.generator.random.randint(1, 20)),
74 'Gecko/{0} Firefox/3.8'.format(
75 self.generator.date_time_between(datetime(2010, 1, 1)),
76 ),
77 )
78 tmplt_win = '({0}; {1}; rv:1.9.{2}.20) {3}'
79 tmplt_lin = '({0}; rv:1.9.{1}.20) {2}'
80 tmplt_mac = '({0}; rv:1.9.{1}.20) {2}'
81 platforms = (
82 tmplt_win.format(self.windows_platform_token(),
83 self.generator.locale().replace('_', '-'),
84 self.generator.random.randint(0, 2),
85 self.generator.random.choice(ver)),
86 tmplt_lin.format(self.linux_platform_token(),
87 self.generator.random.randint(5, 7),
88 self.generator.random.choice(ver)),
89 tmplt_mac.format(self.mac_platform_token(),
90 self.generator.random.randint(2, 6),
91 self.generator.random.choice(ver)),
92 )
93
94 return 'Mozilla/5.0 ' + self.random_element(platforms)
95
96 def safari(self):
97 saf = "{0}.{1}.{2}".format(self.generator.random.randint(531, 535),
98 self.generator.random.randint(1, 50),
99 self.generator.random.randint(1, 7))
100 if not self.generator.random.getrandbits(1):
101 ver = "{0}.{1}".format(self.generator.random.randint(4, 5),
102 self.generator.random.randint(0, 1))
103 else:
104 ver = "{0}.0.{1}".format(self.generator.random.randint(4, 5),
105 self.generator.random.randint(1, 5))
106 tmplt_win = '(Windows; U; {0}) AppleWebKit/{1} (KHTML, like Gecko)' \
107 ' Version/{2} Safari/{3}'
108 tmplt_mac = '({0} rv:{1}.0; {2}) AppleWebKit/{3} (KHTML, like Gecko)' \
109 ' Version/{4} Safari/{5}'
110 tmplt_ipod = '(iPod; U; CPU iPhone OS {0}_{1} like Mac OS X; {2})' \
111 ' AppleWebKit/{3} (KHTML, like Gecko) Version/{4}.0.5' \
112 ' Mobile/8B{5} Safari/6{6}'
113 locale = self.generator.locale().replace('_', '-')
114 platforms = (
115 tmplt_win.format(self.windows_platform_token(),
116 saf,
117 ver,
118 saf),
119 tmplt_mac.format(self.mac_platform_token(),
120 self.generator.random.randint(2, 6),
121 locale,
122 saf,
123 ver,
124 saf),
125 tmplt_ipod.format(self.generator.random.randint(3, 4),
126 self.generator.random.randint(0, 3),
127 locale,
128 saf,
129 self.generator.random.randint(3, 4),
130 self.generator.random.randint(111, 119),
131 saf),
132 )
133
134 return 'Mozilla/5.0 ' + self.random_element(platforms)
135
136 def opera(self):
137 platform = '({0}; {1}) Presto/2.9.{2} Version/{3}.00'.format(
138 (
139 self.linux_platform_token()
140 if self.generator.random.getrandbits(1)
141 else self.windows_platform_token()
142 ),
143 self.generator.locale().replace('_', '-'),
144 self.generator.random.randint(160, 190),
145 self.generator.random.randint(10, 12),
146 )
147 return 'Opera/{0}.{1}.{2}'.format(
148 self.generator.random.randint(8, 9),
149 self.generator.random.randint(10, 99),
150 platform,
151 )
152
153 def internet_explorer(self):
154 tmplt = 'Mozilla/5.0 (compatible; MSIE {0}.0; {1}; Trident/{2}.{3})'
155 return tmplt.format(self.generator.random.randint(5, 9),
156 self.windows_platform_token(),
157 self.generator.random.randint(3, 5),
158 self.generator.random.randint(0, 1))
159
160 def windows_platform_token(self):
161 return self.random_element(self.windows_platform_tokens)
162
163 def linux_platform_token(self):
164 return 'X11; Linux {0}'.format(
165 self.random_element(self.linux_processors))
166
167 def mac_platform_token(self):
168 return 'Macintosh; {0} Mac OS X 10_{1}_{2}'.format(
169 self.random_element(self.mac_processors),
170 self.generator.random.randint(5, 8),
171 self.generator.random.randint(0, 9),
172 )
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/user_agent/__init__.py b/faker/providers/user_agent/__init__.py
--- a/faker/providers/user_agent/__init__.py
+++ b/faker/providers/user_agent/__init__.py
@@ -33,7 +33,8 @@
name = self.random_element(self.user_agents)
return getattr(self, name)()
- def chrome(self):
+ def chrome(self, version_from=13, version_to=63,
+ build_from=800, build_to=899):
saf = str(self.generator.random.randint(531, 536)) + \
str(self.generator.random.randint(0, 2))
tmplt = '({0}) AppleWebKit/{1} (KHTML, like Gecko)' \
@@ -41,18 +42,18 @@
platforms = (
tmplt.format(self.linux_platform_token(),
saf,
- self.generator.random.randint(13, 15),
- self.generator.random.randint(800, 899),
+ self.generator.random.randint(version_from, version_to),
+ self.generator.random.randint(build_from, build_to),
saf),
tmplt.format(self.windows_platform_token(),
saf,
- self.generator.random.randint(13, 15),
- self.generator.random.randint(800, 899),
+ self.generator.random.randint(version_from, version_to),
+ self.generator.random.randint(build_from, build_to),
saf),
tmplt.format(self.mac_platform_token(),
saf,
- self.generator.random.randint(13, 15),
- self.generator.random.randint(800, 899),
+ self.generator.random.randint(version_from, version_to),
+ self.generator.random.randint(build_from, build_to),
saf),
)
@@ -167,6 +168,6 @@
def mac_platform_token(self):
return 'Macintosh; {0} Mac OS X 10_{1}_{2}'.format(
self.random_element(self.mac_processors),
- self.generator.random.randint(5, 8),
+ self.generator.random.randint(5, 12),
self.generator.random.randint(0, 9),
)
| {"golden_diff": "diff --git a/faker/providers/user_agent/__init__.py b/faker/providers/user_agent/__init__.py\n--- a/faker/providers/user_agent/__init__.py\n+++ b/faker/providers/user_agent/__init__.py\n@@ -33,7 +33,8 @@\n name = self.random_element(self.user_agents)\n return getattr(self, name)()\n \n- def chrome(self):\n+ def chrome(self, version_from=13, version_to=63,\n+ build_from=800, build_to=899):\n saf = str(self.generator.random.randint(531, 536)) + \\\n str(self.generator.random.randint(0, 2))\n tmplt = '({0}) AppleWebKit/{1} (KHTML, like Gecko)' \\\n@@ -41,18 +42,18 @@\n platforms = (\n tmplt.format(self.linux_platform_token(),\n saf,\n- self.generator.random.randint(13, 15),\n- self.generator.random.randint(800, 899),\n+ self.generator.random.randint(version_from, version_to),\n+ self.generator.random.randint(build_from, build_to),\n saf),\n tmplt.format(self.windows_platform_token(),\n saf,\n- self.generator.random.randint(13, 15),\n- self.generator.random.randint(800, 899),\n+ self.generator.random.randint(version_from, version_to),\n+ self.generator.random.randint(build_from, build_to),\n saf),\n tmplt.format(self.mac_platform_token(),\n saf,\n- self.generator.random.randint(13, 15),\n- self.generator.random.randint(800, 899),\n+ self.generator.random.randint(version_from, version_to),\n+ self.generator.random.randint(build_from, build_to),\n saf),\n )\n \n@@ -167,6 +168,6 @@\n def mac_platform_token(self):\n return 'Macintosh; {0} Mac OS X 10_{1}_{2}'.format(\n self.random_element(self.mac_processors),\n- self.generator.random.randint(5, 8),\n+ self.generator.random.randint(5, 12),\n self.generator.random.randint(0, 9),\n )\n", "issue": "Update user_agent chrome version\nRight now provider **user_agent** can return chrome version between 13-15 which is too small (for example latest stable version is 63). I want to create PR to fix this.\n", "before_files": [{"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\n\nfrom datetime import datetime\n\nfrom .. 
import BaseProvider\n\n\nclass Provider(BaseProvider):\n user_agents = (\n 'chrome', 'firefox', 'internet_explorer', 'opera', 'safari',\n )\n\n windows_platform_tokens = (\n 'Windows 95', 'Windows 98', 'Windows 98; Win 9x 4.90', 'Windows CE',\n 'Windows NT 4.0', 'Windows NT 5.0', 'Windows NT 5.01',\n 'Windows NT 5.1', 'Windows NT 5.2', 'Windows NT 6.0', 'Windows NT 6.1',\n 'Windows NT 6.2',\n )\n\n linux_processors = ('i686', 'x86_64',)\n\n mac_processors = ('Intel', 'PPC', 'U; Intel', 'U; PPC',)\n\n def mac_processor(self):\n return self.random_element(self.mac_processors)\n\n def linux_processor(self):\n return self.random_element(self.linux_processors)\n\n def user_agent(self):\n name = self.random_element(self.user_agents)\n return getattr(self, name)()\n\n def chrome(self):\n saf = str(self.generator.random.randint(531, 536)) + \\\n str(self.generator.random.randint(0, 2))\n tmplt = '({0}) AppleWebKit/{1} (KHTML, like Gecko)' \\\n ' Chrome/{2}.0.{3}.0 Safari/{4}'\n platforms = (\n tmplt.format(self.linux_platform_token(),\n saf,\n self.generator.random.randint(13, 15),\n self.generator.random.randint(800, 899),\n saf),\n tmplt.format(self.windows_platform_token(),\n saf,\n self.generator.random.randint(13, 15),\n self.generator.random.randint(800, 899),\n saf),\n tmplt.format(self.mac_platform_token(),\n saf,\n self.generator.random.randint(13, 15),\n self.generator.random.randint(800, 899),\n saf),\n )\n\n return 'Mozilla/5.0 ' + self.random_element(platforms)\n\n def firefox(self):\n ver = (\n 'Gecko/{0} Firefox/{1}.0'.format(\n self.generator.date_time_between(\n datetime(2011, 1, 1)\n ),\n self.generator.random.randint(4, 15)\n ),\n 'Gecko/{0} Firefox/3.6.{1}'.format(\n self.generator.date_time_between(\n datetime(2010, 1, 1)\n ),\n self.generator.random.randint(1, 20)),\n 'Gecko/{0} Firefox/3.8'.format(\n self.generator.date_time_between(datetime(2010, 1, 1)),\n ),\n )\n tmplt_win = '({0}; {1}; rv:1.9.{2}.20) {3}'\n tmplt_lin = '({0}; rv:1.9.{1}.20) {2}'\n tmplt_mac = '({0}; rv:1.9.{1}.20) {2}'\n platforms = (\n tmplt_win.format(self.windows_platform_token(),\n self.generator.locale().replace('_', '-'),\n self.generator.random.randint(0, 2),\n self.generator.random.choice(ver)),\n tmplt_lin.format(self.linux_platform_token(),\n self.generator.random.randint(5, 7),\n self.generator.random.choice(ver)),\n tmplt_mac.format(self.mac_platform_token(),\n self.generator.random.randint(2, 6),\n self.generator.random.choice(ver)),\n )\n\n return 'Mozilla/5.0 ' + self.random_element(platforms)\n\n def safari(self):\n saf = \"{0}.{1}.{2}\".format(self.generator.random.randint(531, 535),\n self.generator.random.randint(1, 50),\n self.generator.random.randint(1, 7))\n if not self.generator.random.getrandbits(1):\n ver = \"{0}.{1}\".format(self.generator.random.randint(4, 5),\n self.generator.random.randint(0, 1))\n else:\n ver = \"{0}.0.{1}\".format(self.generator.random.randint(4, 5),\n self.generator.random.randint(1, 5))\n tmplt_win = '(Windows; U; {0}) AppleWebKit/{1} (KHTML, like Gecko)' \\\n ' Version/{2} Safari/{3}'\n tmplt_mac = '({0} rv:{1}.0; {2}) AppleWebKit/{3} (KHTML, like Gecko)' \\\n ' Version/{4} Safari/{5}'\n tmplt_ipod = '(iPod; U; CPU iPhone OS {0}_{1} like Mac OS X; {2})' \\\n ' AppleWebKit/{3} (KHTML, like Gecko) Version/{4}.0.5' \\\n ' Mobile/8B{5} Safari/6{6}'\n locale = self.generator.locale().replace('_', '-')\n platforms = (\n tmplt_win.format(self.windows_platform_token(),\n saf,\n ver,\n saf),\n tmplt_mac.format(self.mac_platform_token(),\n 
self.generator.random.randint(2, 6),\n locale,\n saf,\n ver,\n saf),\n tmplt_ipod.format(self.generator.random.randint(3, 4),\n self.generator.random.randint(0, 3),\n locale,\n saf,\n self.generator.random.randint(3, 4),\n self.generator.random.randint(111, 119),\n saf),\n )\n\n return 'Mozilla/5.0 ' + self.random_element(platforms)\n\n def opera(self):\n platform = '({0}; {1}) Presto/2.9.{2} Version/{3}.00'.format(\n (\n self.linux_platform_token()\n if self.generator.random.getrandbits(1)\n else self.windows_platform_token()\n ),\n self.generator.locale().replace('_', '-'),\n self.generator.random.randint(160, 190),\n self.generator.random.randint(10, 12),\n )\n return 'Opera/{0}.{1}.{2}'.format(\n self.generator.random.randint(8, 9),\n self.generator.random.randint(10, 99),\n platform,\n )\n\n def internet_explorer(self):\n tmplt = 'Mozilla/5.0 (compatible; MSIE {0}.0; {1}; Trident/{2}.{3})'\n return tmplt.format(self.generator.random.randint(5, 9),\n self.windows_platform_token(),\n self.generator.random.randint(3, 5),\n self.generator.random.randint(0, 1))\n\n def windows_platform_token(self):\n return self.random_element(self.windows_platform_tokens)\n\n def linux_platform_token(self):\n return 'X11; Linux {0}'.format(\n self.random_element(self.linux_processors))\n\n def mac_platform_token(self):\n return 'Macintosh; {0} Mac OS X 10_{1}_{2}'.format(\n self.random_element(self.mac_processors),\n self.generator.random.randint(5, 8),\n self.generator.random.randint(0, 9),\n )\n", "path": "faker/providers/user_agent/__init__.py"}], "after_files": [{"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\n\nfrom datetime import datetime\n\nfrom .. import BaseProvider\n\n\nclass Provider(BaseProvider):\n user_agents = (\n 'chrome', 'firefox', 'internet_explorer', 'opera', 'safari',\n )\n\n windows_platform_tokens = (\n 'Windows 95', 'Windows 98', 'Windows 98; Win 9x 4.90', 'Windows CE',\n 'Windows NT 4.0', 'Windows NT 5.0', 'Windows NT 5.01',\n 'Windows NT 5.1', 'Windows NT 5.2', 'Windows NT 6.0', 'Windows NT 6.1',\n 'Windows NT 6.2',\n )\n\n linux_processors = ('i686', 'x86_64',)\n\n mac_processors = ('Intel', 'PPC', 'U; Intel', 'U; PPC',)\n\n def mac_processor(self):\n return self.random_element(self.mac_processors)\n\n def linux_processor(self):\n return self.random_element(self.linux_processors)\n\n def user_agent(self):\n name = self.random_element(self.user_agents)\n return getattr(self, name)()\n\n def chrome(self, version_from=13, version_to=63,\n build_from=800, build_to=899):\n saf = str(self.generator.random.randint(531, 536)) + \\\n str(self.generator.random.randint(0, 2))\n tmplt = '({0}) AppleWebKit/{1} (KHTML, like Gecko)' \\\n ' Chrome/{2}.0.{3}.0 Safari/{4}'\n platforms = (\n tmplt.format(self.linux_platform_token(),\n saf,\n self.generator.random.randint(version_from, version_to),\n self.generator.random.randint(build_from, build_to),\n saf),\n tmplt.format(self.windows_platform_token(),\n saf,\n self.generator.random.randint(version_from, version_to),\n self.generator.random.randint(build_from, build_to),\n saf),\n tmplt.format(self.mac_platform_token(),\n saf,\n self.generator.random.randint(version_from, version_to),\n self.generator.random.randint(build_from, build_to),\n saf),\n )\n\n return 'Mozilla/5.0 ' + self.random_element(platforms)\n\n def firefox(self):\n ver = (\n 'Gecko/{0} Firefox/{1}.0'.format(\n self.generator.date_time_between(\n datetime(2011, 1, 1)\n ),\n self.generator.random.randint(4, 15)\n ),\n 'Gecko/{0} Firefox/3.6.{1}'.format(\n 
self.generator.date_time_between(\n datetime(2010, 1, 1)\n ),\n self.generator.random.randint(1, 20)),\n 'Gecko/{0} Firefox/3.8'.format(\n self.generator.date_time_between(datetime(2010, 1, 1)),\n ),\n )\n tmplt_win = '({0}; {1}; rv:1.9.{2}.20) {3}'\n tmplt_lin = '({0}; rv:1.9.{1}.20) {2}'\n tmplt_mac = '({0}; rv:1.9.{1}.20) {2}'\n platforms = (\n tmplt_win.format(self.windows_platform_token(),\n self.generator.locale().replace('_', '-'),\n self.generator.random.randint(0, 2),\n self.generator.random.choice(ver)),\n tmplt_lin.format(self.linux_platform_token(),\n self.generator.random.randint(5, 7),\n self.generator.random.choice(ver)),\n tmplt_mac.format(self.mac_platform_token(),\n self.generator.random.randint(2, 6),\n self.generator.random.choice(ver)),\n )\n\n return 'Mozilla/5.0 ' + self.random_element(platforms)\n\n def safari(self):\n saf = \"{0}.{1}.{2}\".format(self.generator.random.randint(531, 535),\n self.generator.random.randint(1, 50),\n self.generator.random.randint(1, 7))\n if not self.generator.random.getrandbits(1):\n ver = \"{0}.{1}\".format(self.generator.random.randint(4, 5),\n self.generator.random.randint(0, 1))\n else:\n ver = \"{0}.0.{1}\".format(self.generator.random.randint(4, 5),\n self.generator.random.randint(1, 5))\n tmplt_win = '(Windows; U; {0}) AppleWebKit/{1} (KHTML, like Gecko)' \\\n ' Version/{2} Safari/{3}'\n tmplt_mac = '({0} rv:{1}.0; {2}) AppleWebKit/{3} (KHTML, like Gecko)' \\\n ' Version/{4} Safari/{5}'\n tmplt_ipod = '(iPod; U; CPU iPhone OS {0}_{1} like Mac OS X; {2})' \\\n ' AppleWebKit/{3} (KHTML, like Gecko) Version/{4}.0.5' \\\n ' Mobile/8B{5} Safari/6{6}'\n locale = self.generator.locale().replace('_', '-')\n platforms = (\n tmplt_win.format(self.windows_platform_token(),\n saf,\n ver,\n saf),\n tmplt_mac.format(self.mac_platform_token(),\n self.generator.random.randint(2, 6),\n locale,\n saf,\n ver,\n saf),\n tmplt_ipod.format(self.generator.random.randint(3, 4),\n self.generator.random.randint(0, 3),\n locale,\n saf,\n self.generator.random.randint(3, 4),\n self.generator.random.randint(111, 119),\n saf),\n )\n\n return 'Mozilla/5.0 ' + self.random_element(platforms)\n\n def opera(self):\n platform = '({0}; {1}) Presto/2.9.{2} Version/{3}.00'.format(\n (\n self.linux_platform_token()\n if self.generator.random.getrandbits(1)\n else self.windows_platform_token()\n ),\n self.generator.locale().replace('_', '-'),\n self.generator.random.randint(160, 190),\n self.generator.random.randint(10, 12),\n )\n return 'Opera/{0}.{1}.{2}'.format(\n self.generator.random.randint(8, 9),\n self.generator.random.randint(10, 99),\n platform,\n )\n\n def internet_explorer(self):\n tmplt = 'Mozilla/5.0 (compatible; MSIE {0}.0; {1}; Trident/{2}.{3})'\n return tmplt.format(self.generator.random.randint(5, 9),\n self.windows_platform_token(),\n self.generator.random.randint(3, 5),\n self.generator.random.randint(0, 1))\n\n def windows_platform_token(self):\n return self.random_element(self.windows_platform_tokens)\n\n def linux_platform_token(self):\n return 'X11; Linux {0}'.format(\n self.random_element(self.linux_processors))\n\n def mac_platform_token(self):\n return 'Macintosh; {0} Mac OS X 10_{1}_{2}'.format(\n self.random_element(self.mac_processors),\n self.generator.random.randint(5, 12),\n self.generator.random.randint(0, 9),\n )\n", "path": "faker/providers/user_agent/__init__.py"}]} | 2,310 | 488 |