problem_id (string, 18-22 chars) | source (string, 1 class) | task_type (string, 1 class) | in_source_id (string, 13-58 chars) | prompt (string, 1.1k-10.2k chars) | golden_diff (string, 151-4.94k chars) | verification_info (string, 582-21k chars) | num_tokens (int64, 271-2.05k) | num_tokens_diff (int64, 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_31851 | rasdani/github-patches | git_diff | coala__coala-2865 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add more strings to Constants
Our Constants are located in coala/coalib/misc/Constants.py and there we hold options for `TRUE_STRINGS`. I think we should expand these true strings with more options, such as : `yep`,`ja` or even `hell yeah` (who knows what the user might come up with). Feel free to add your own suggestions if you think they fit.
--- END ISSUE ---
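
As an aside (not part of the dataset record): the practical effect of these lists is string-to-boolean coercion when user settings are parsed. Below is a minimal, self-contained sketch of how such lists are typically consumed; `to_bool` is a hypothetical helper written for illustration, not coala's actual API.

```python
TRUE_STRINGS = ['1', 'on', 'y', 'yes', 'yeah', 'yep', 'ja', 'hell yeah']
FALSE_STRINGS = ['0', 'off', 'n', 'no', 'nope', 'nein']


def to_bool(value):
    """Coerce a user-supplied setting string to a bool, case-insensitively."""
    normalized = value.strip().lower()
    if normalized in TRUE_STRINGS:
        return True
    if normalized in FALSE_STRINGS:
        return False
    raise ValueError('Cannot interpret {!r} as a boolean'.format(value))


assert to_bool('Hell Yeah') is True
assert to_bool(' nein ') is False
```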
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `coalib/misc/Constants.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import appdirs
4 import os
5 import re
6
7 # Start ignoring PyImportSortBear, PyLintBear as BUS_NAME is imported as a
8 # constant from other files.
9 from coalib import BUS_NAME
10 from coalib import VERSION
11 # Stop ignoring
12
13
14 THIS_IS_A_BUG = ('This is a bug. We are sorry for the inconvenience. '
15 'Please contact the developers for assistance.')
16
17 CRASH_MESSAGE = ('An unknown error occurred. This is a bug. We are '
18 'sorry for the inconvenience. Please contact the '
19 'developers for assistance. During execution of '
20 'coala an exception was raised. This should never '
21 'happen. When asked for, the following information '
22 'may help investigating:')
23
24 VERSION_CONFLICT_MESSAGE = ('There is a conflict in the version of a '
25 'dependency you have installed and the '
26 'requirements of coala. This may be resolved by '
27 'creating a separate virtual environment for '
28 'coala or running `pip install "%s"`. Be aware '
29 'that the latter solution might break other '
30 'python packages that depend on the currently '
31 'installed version.')
32
33 OBJ_NOT_ACCESSIBLE = '{} is not accessible and will be ignored!'
34
35 TRUE_STRINGS = ['1',
36 'on',
37 'y',
38 'yes',
39 'yeah',
40 'sure',
41 'true',
42 'definitely',
43 'yup',
44 'right',
45 'aye',
46 'positive']
47
48 FALSE_STRINGS = ['0',
49 'off',
50 'n',
51 'no',
52 'nope',
53 'nah',
54 'false',
55 'wrong',
56 'none',
57 'nay',
58 'negative']
59
60 # This string contains many unicode characters to challenge tests.
61 COMPLEX_TEST_STRING = ('4 r34l ch4ll3n63: 123 ÄÖü ABc @€¥ §&% {[( ←↓→↑ '
62 'ĦŊħ ß°^ \\\n\u2192')
63
64 # Path to the coalib directory
65 coalib_root = os.path.join(os.path.dirname(__file__),
66 os.path.pardir)
67
68 # Path to the language definition files
69 language_definitions = os.path.join(coalib_root,
70 'bearlib',
71 'languages',
72 'definitions')
73
74 system_coafile = os.path.join(coalib_root, 'default_coafile')
75
76 user_coafile = os.path.join(os.path.expanduser('~'), '.coarc')
77
78 default_coafile = '.coafile'
79
80 USER_DATA_DIR = appdirs.user_data_dir('coala', version=VERSION)
81
82 GLOBBING_SPECIAL_CHARS = '()[]|?*'
83
84 URL_REGEX = re.compile(
85 r'^(?:(?:http|ftp)[s]?://)?' # scheme
86 r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+' # domain name
87 r'(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'
88 r'localhost|' # OR localhost
89 r'(?:\d{1,3}\.){3}\d{1,3})' # OR an ip
90 r'(?::\d+)?' # optional port number
91 r'(?:/?|[/?]\S+)$', # path
92 re.IGNORECASE)
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/coalib/misc/Constants.py b/coalib/misc/Constants.py
--- a/coalib/misc/Constants.py
+++ b/coalib/misc/Constants.py
@@ -34,27 +34,69 @@
TRUE_STRINGS = ['1',
'on',
+ 'okay',
+ 'ok',
+ 'okey-dokey',
'y',
'yes',
'yeah',
+ 'yea',
+ 'ya',
+ 'ye',
+ 'yessir',
'sure',
'true',
+ 'tru',
+ 'uh-huh',
'definitely',
'yup',
+ 'yep',
'right',
'aye',
+ 'alright',
+ 'alrighty',
+ 'hell yeah',
+ 'affirmative',
+ 'certainly',
+ 'definitely',
+ 'absolutely',
+ 'roger',
+ 'righto',
+ 'ja',
+ 'da',
+ 'si',
+ 'oui',
+ 'amen',
+ 'totally',
+ '10-4',
'positive']
FALSE_STRINGS = ['0',
'off',
'n',
'no',
+ 'nix',
'nope',
+ 'nop',
'nah',
+ 'nay',
'false',
+ 'uh-uh',
'wrong',
'none',
'nay',
+ 'hell no',
+ 'fat chance',
+ 'not a chance in hell',
+ 'not in a million years',
+ 'out of the question',
+ 'no siree',
+ 'no way',
+ 'nein',
+ 'njet',
+ 'nee',
+ 'non',
+ 'hakuna',
'negative']
# This string contains many unicode characters to challenge tests.
| {"golden_diff": "diff --git a/coalib/misc/Constants.py b/coalib/misc/Constants.py\n--- a/coalib/misc/Constants.py\n+++ b/coalib/misc/Constants.py\n@@ -34,27 +34,69 @@\n \n TRUE_STRINGS = ['1',\n 'on',\n+ 'okay',\n+ 'ok',\n+ 'okey-dokey',\n 'y',\n 'yes',\n 'yeah',\n+ 'yea',\n+ 'ya',\n+ 'ye',\n+ 'yessir',\n 'sure',\n 'true',\n+ 'tru',\n+ 'uh-huh',\n 'definitely',\n 'yup',\n+ 'yep',\n 'right',\n 'aye',\n+ 'alright',\n+ 'alrighty',\n+ 'hell yeah',\n+ 'affirmative',\n+ 'certainly',\n+ 'definitely',\n+ 'absolutely',\n+ 'roger',\n+ 'righto',\n+ 'ja',\n+ 'da',\n+ 'si',\n+ 'oui',\n+ 'amen',\n+ 'totally',\n+ '10-4',\n 'positive']\n \n FALSE_STRINGS = ['0',\n 'off',\n 'n',\n 'no',\n+ 'nix',\n 'nope',\n+ 'nop',\n 'nah',\n+ 'nay',\n 'false',\n+ 'uh-uh',\n 'wrong',\n 'none',\n 'nay',\n+ 'hell no',\n+ 'fat chance',\n+ 'not a chance in hell',\n+ 'not in a million years',\n+ 'out of the question',\n+ 'no siree',\n+ 'no way',\n+ 'nein',\n+ 'njet',\n+ 'nee',\n+ 'non',\n+ 'hakuna',\n 'negative']\n \n # This string contains many unicode characters to challenge tests.\n", "issue": "Add more strings to Constants\nOur Constants are located in coala/coalib/misc/Constants.py and there we hold options for `TRUE_STRINGS`. I think we should expand these true strings with more options, such as : `yep`,`ja` or even `hell yeah` (who knows what the user might come up with). Feel free to add your own suggestions if you think they fit.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport appdirs\nimport os\nimport re\n\n# Start ignoring PyImportSortBear, PyLintBear as BUS_NAME is imported as a\n# constant from other files.\nfrom coalib import BUS_NAME\nfrom coalib import VERSION\n# Stop ignoring\n\n\nTHIS_IS_A_BUG = ('This is a bug. We are sorry for the inconvenience. '\n 'Please contact the developers for assistance.')\n\nCRASH_MESSAGE = ('An unknown error occurred. This is a bug. We are '\n 'sorry for the inconvenience. Please contact the '\n 'developers for assistance. During execution of '\n 'coala an exception was raised. This should never '\n 'happen. When asked for, the following information '\n 'may help investigating:')\n\nVERSION_CONFLICT_MESSAGE = ('There is a conflict in the version of a '\n 'dependency you have installed and the '\n 'requirements of coala. This may be resolved by '\n 'creating a separate virtual environment for '\n 'coala or running `pip install \"%s\"`. 
Be aware '\n 'that the latter solution might break other '\n 'python packages that depend on the currently '\n 'installed version.')\n\nOBJ_NOT_ACCESSIBLE = '{} is not accessible and will be ignored!'\n\nTRUE_STRINGS = ['1',\n 'on',\n 'y',\n 'yes',\n 'yeah',\n 'sure',\n 'true',\n 'definitely',\n 'yup',\n 'right',\n 'aye',\n 'positive']\n\nFALSE_STRINGS = ['0',\n 'off',\n 'n',\n 'no',\n 'nope',\n 'nah',\n 'false',\n 'wrong',\n 'none',\n 'nay',\n 'negative']\n\n# This string contains many unicode characters to challenge tests.\nCOMPLEX_TEST_STRING = ('4 r34l ch4ll3n63: 123 \u00c4\u00d6\u00fc ABc @\u20ac\u00a5 \u00a7&% {[( \u2190\u2193\u2192\u2191 '\n '\u0126\u014a\u0127 \u00df\u00b0^ \\\\\\n\\u2192')\n\n# Path to the coalib directory\ncoalib_root = os.path.join(os.path.dirname(__file__),\n os.path.pardir)\n\n# Path to the language definition files\nlanguage_definitions = os.path.join(coalib_root,\n 'bearlib',\n 'languages',\n 'definitions')\n\nsystem_coafile = os.path.join(coalib_root, 'default_coafile')\n\nuser_coafile = os.path.join(os.path.expanduser('~'), '.coarc')\n\ndefault_coafile = '.coafile'\n\nUSER_DATA_DIR = appdirs.user_data_dir('coala', version=VERSION)\n\nGLOBBING_SPECIAL_CHARS = '()[]|?*'\n\nURL_REGEX = re.compile(\n r'^(?:(?:http|ftp)[s]?://)?' # scheme\n r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\\.)+' # domain name\n r'(?:[A-Z]{2,6}\\.?|[A-Z0-9-]{2,}\\.?)|'\n r'localhost|' # OR localhost\n r'(?:\\d{1,3}\\.){3}\\d{1,3})' # OR an ip\n r'(?::\\d+)?' # optional port number\n r'(?:/?|[/?]\\S+)$', # path\n re.IGNORECASE)\n", "path": "coalib/misc/Constants.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport appdirs\nimport os\nimport re\n\n# Start ignoring PyImportSortBear, PyLintBear as BUS_NAME is imported as a\n# constant from other files.\nfrom coalib import BUS_NAME\nfrom coalib import VERSION\n# Stop ignoring\n\n\nTHIS_IS_A_BUG = ('This is a bug. We are sorry for the inconvenience. '\n 'Please contact the developers for assistance.')\n\nCRASH_MESSAGE = ('An unknown error occurred. This is a bug. We are '\n 'sorry for the inconvenience. Please contact the '\n 'developers for assistance. During execution of '\n 'coala an exception was raised. This should never '\n 'happen. When asked for, the following information '\n 'may help investigating:')\n\nVERSION_CONFLICT_MESSAGE = ('There is a conflict in the version of a '\n 'dependency you have installed and the '\n 'requirements of coala. This may be resolved by '\n 'creating a separate virtual environment for '\n 'coala or running `pip install \"%s\"`. 
Be aware '\n 'that the latter solution might break other '\n 'python packages that depend on the currently '\n 'installed version.')\n\nOBJ_NOT_ACCESSIBLE = '{} is not accessible and will be ignored!'\n\nTRUE_STRINGS = ['1',\n 'on',\n 'okay',\n 'ok',\n 'okey-dokey',\n 'y',\n 'yes',\n 'yeah',\n 'yea',\n 'ya',\n 'ye',\n 'yessir',\n 'sure',\n 'true',\n 'tru',\n 'uh-huh',\n 'definitely',\n 'yup',\n 'yep',\n 'right',\n 'aye',\n 'alright',\n 'alrighty',\n 'hell yeah',\n 'affirmative',\n 'certainly',\n 'definitely',\n 'absolutely',\n 'roger',\n 'righto',\n 'ja',\n 'da',\n 'si',\n 'oui',\n 'amen',\n 'totally',\n '10-4',\n 'positive']\n\nFALSE_STRINGS = ['0',\n 'off',\n 'n',\n 'no',\n 'nix',\n 'nope',\n 'nop',\n 'nah',\n 'nay',\n 'false',\n 'uh-uh',\n 'wrong',\n 'none',\n 'nay',\n 'hell no',\n 'fat chance',\n 'not a chance in hell',\n 'not in a million years',\n 'out of the question',\n 'no siree',\n 'no way',\n 'nein',\n 'njet',\n 'nee',\n 'non',\n 'hakuna',\n 'negative']\n\n# This string contains many unicode characters to challenge tests.\nCOMPLEX_TEST_STRING = ('4 r34l ch4ll3n63: 123 \u00c4\u00d6\u00fc ABc @\u20ac\u00a5 \u00a7&% {[( \u2190\u2193\u2192\u2191 '\n '\u0126\u014a\u0127 \u00df\u00b0^ \\\\\\n\\u2192')\n\n# Path to the coalib directory\ncoalib_root = os.path.join(os.path.dirname(__file__),\n os.path.pardir)\n\n# Path to the language definition files\nlanguage_definitions = os.path.join(coalib_root,\n 'bearlib',\n 'languages',\n 'definitions')\n\nsystem_coafile = os.path.join(coalib_root, 'default_coafile')\n\nuser_coafile = os.path.join(os.path.expanduser('~'), '.coarc')\n\ndefault_coafile = '.coafile'\n\nUSER_DATA_DIR = appdirs.user_data_dir('coala', version=VERSION)\n\nGLOBBING_SPECIAL_CHARS = '()[]|?*'\n\nURL_REGEX = re.compile(\n r'^(?:(?:http|ftp)[s]?://)?' # scheme\n r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\\.)+' # domain name\n r'(?:[A-Z]{2,6}\\.?|[A-Z0-9-]{2,}\\.?)|'\n r'localhost|' # OR localhost\n r'(?:\\d{1,3}\\.){3}\\d{1,3})' # OR an ip\n r'(?::\\d+)?' # optional port number\n r'(?:/?|[/?]\\S+)$', # path\n re.IGNORECASE)\n", "path": "coalib/misc/Constants.py"}]} | 1,281 | 429 |
gh_patches_debug_37713 | rasdani/github-patches | git_diff | coala__coala-964 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
glob/collecting: Accept strings too
the methods there want a list but `glob("string")` makes sense too (and behaves very strangely), we should support that.
--- END ISSUE ---
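
As an aside (not part of the dataset record): the "behaves very strangely" remark comes from the fact that a Python `str` is itself an iterable of one-character strings, so passing `"*.py"` where a list is expected silently treats `'*'`, `'.'`, `'p'` and `'y'` as four separate globs. A runnable sketch of the pitfall; the guard that fixes it is the `isinstance` check applied inside `icollect` by the golden diff further below.

```python
def icollect_broken(file_paths):
    # Today's behaviour: a bare string is iterated character by character.
    for file_path in file_paths:
        yield file_path


print(list(icollect_broken('*.py')))    # ['*', '.', 'p', 'y'] -- four bogus globs
print(list(icollect_broken(['*.py'])))  # ['*.py'] -- only the list form is safe
```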
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `coalib/collecting/Collectors.py`
Content:
```
1 import os
2
3 from coalib.collecting.Importers import iimport_objects
4 from coalib.misc.Decorators import yield_once
5 from coalib.misc.i18n import _
6 from coalib.parsing.Globbing import iglob
7
8
9 def _yield_if_right_kind(bear_class, kinds):
10 try:
11 if bear_class.kind() in kinds:
12 yield bear_class
13 except NotImplementedError:
14 pass
15
16
17 def _import_bears(file_path, kinds):
18 # recursive imports:
19 for bear_list in iimport_objects(file_path,
20 names='__additional_bears__',
21 types=list):
22 for bear_class in bear_list:
23 for valid_bear_class in _yield_if_right_kind(bear_class, kinds):
24 yield valid_bear_class
25 # normal import
26 for bear_class in iimport_objects(file_path,
27 attributes='kind',
28 local=True):
29 for valid_bear_class in _yield_if_right_kind(bear_class, kinds):
30 yield valid_bear_class
31
32
33 @yield_once
34 def icollect(file_paths):
35 """
36 Evaluate globs in file paths and return all matching files.
37
38 :param file_paths: list of file paths that can include globs
39 :return: iterator that yields paths of all matching files
40 """
41 for file_path in file_paths:
42 for match in iglob(file_path):
43 yield match
44
45
46 def collect_files(file_paths):
47 """
48 Evaluate globs in file paths and return all matching files
49
50 :param file_paths: list of file paths that can include globs
51 :return: list of paths of all matching files
52 """
53 return list(filter(os.path.isfile, icollect(file_paths)))
54
55
56 def collect_dirs(dir_paths):
57 """
58 Evaluate globs in directory paths and return all matching directories
59
60 :param dir_paths: list of file paths that can include globs
61 :return: list of paths of all matching directories
62 """
63 return list(filter(os.path.isdir, icollect(dir_paths)))
64
65
66 @yield_once
67 def icollect_bears(bear_dirs, bear_names, kinds, log_printer):
68 """
69 Collect all bears from bear directories that have a matching kind.
70
71 :param bear_dirs: directories that can contain bears
72 :param bear_names: names of bears
73 :param kinds: list of bear kinds to be collected
74 :param log_printer: log_printer to handle logging
75 :return: iterator that yields bear classes
76 """
77 for bear_dir in filter(os.path.isdir, icollect(bear_dirs)):
78 for bear_name in bear_names:
79 for matching_file in iglob(
80 os.path.join(bear_dir, bear_name + '.py')):
81
82 try:
83 for bear in _import_bears(matching_file, kinds):
84 yield bear
85 except:
86 log_printer.warn(_("Unable to collect bears from {file}. "
87 "Probably the file is malformed or "
88 "the module code raises an exception.")
89 .format(file=matching_file))
90
91
92 def collect_bears(bear_dirs, bear_names, kinds, log_printer):
93 """
94 Collect all bears from bear directories that have a matching kind.
95
96 :param bear_dirs: directories that can contain bears
97 :param bear_names: names of bears
98 :param kinds: list of bear kinds to be collected
99 :param log_printer: log_printer to handle logging
100 :return: list of matching bear classes
101 """
102 return list(icollect_bears(bear_dirs, bear_names, kinds, log_printer))
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/coalib/collecting/Collectors.py b/coalib/collecting/Collectors.py
--- a/coalib/collecting/Collectors.py
+++ b/coalib/collecting/Collectors.py
@@ -35,9 +35,12 @@
"""
Evaluate globs in file paths and return all matching files.
- :param file_paths: list of file paths that can include globs
+ :param file_paths: file path or list of such that can include globs
:return: iterator that yields paths of all matching files
"""
+ if isinstance(file_paths, str):
+ file_paths = [file_paths]
+
for file_path in file_paths:
for match in iglob(file_path):
yield match
@@ -47,7 +50,7 @@
"""
Evaluate globs in file paths and return all matching files
- :param file_paths: list of file paths that can include globs
+ :param file_paths: file path or list of such that can include globs
:return: list of paths of all matching files
"""
return list(filter(os.path.isfile, icollect(file_paths)))
@@ -57,7 +60,7 @@
"""
Evaluate globs in directory paths and return all matching directories
- :param dir_paths: list of file paths that can include globs
+ :param dir_paths: file path or list of such that can include globs
:return: list of paths of all matching directories
"""
return list(filter(os.path.isdir, icollect(dir_paths)))
@@ -68,7 +71,7 @@
"""
Collect all bears from bear directories that have a matching kind.
- :param bear_dirs: directories that can contain bears
+ :param bear_dirs: directory name or list of such that can contain bears
:param bear_names: names of bears
:param kinds: list of bear kinds to be collected
:param log_printer: log_printer to handle logging
@@ -93,7 +96,7 @@
"""
Collect all bears from bear directories that have a matching kind.
- :param bear_dirs: directories that can contain bears
+ :param bear_dirs: directory name or list of such that can contain bears
:param bear_names: names of bears
:param kinds: list of bear kinds to be collected
:param log_printer: log_printer to handle logging
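
For illustration only (not part of the dataset record): a self-contained restatement of the patched behaviour, with coala's `Globbing.iglob` swapped for the standard library's `glob.iglob` so the snippet runs anywhere.

```python
from glob import iglob


def icollect(file_paths):
    """Mirror of the patched icollect, using stdlib glob for portability."""
    if isinstance(file_paths, str):
        file_paths = [file_paths]
    for file_path in file_paths:
        yield from iglob(file_path)


print(list(icollect('*.md')))              # single glob string now accepted
print(list(icollect(['*.md', '*.txt'])))   # list form keeps working
```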
| {"golden_diff": "diff --git a/coalib/collecting/Collectors.py b/coalib/collecting/Collectors.py\n--- a/coalib/collecting/Collectors.py\n+++ b/coalib/collecting/Collectors.py\n@@ -35,9 +35,12 @@\n \"\"\"\n Evaluate globs in file paths and return all matching files.\n \n- :param file_paths: list of file paths that can include globs\n+ :param file_paths: file path or list of such that can include globs\n :return: iterator that yields paths of all matching files\n \"\"\"\n+ if isinstance(file_paths, str):\n+ file_paths = [file_paths]\n+\n for file_path in file_paths:\n for match in iglob(file_path):\n yield match\n@@ -47,7 +50,7 @@\n \"\"\"\n Evaluate globs in file paths and return all matching files\n \n- :param file_paths: list of file paths that can include globs\n+ :param file_paths: file path or list of such that can include globs\n :return: list of paths of all matching files\n \"\"\"\n return list(filter(os.path.isfile, icollect(file_paths)))\n@@ -57,7 +60,7 @@\n \"\"\"\n Evaluate globs in directory paths and return all matching directories\n \n- :param dir_paths: list of file paths that can include globs\n+ :param dir_paths: file path or list of such that can include globs\n :return: list of paths of all matching directories\n \"\"\"\n return list(filter(os.path.isdir, icollect(dir_paths)))\n@@ -68,7 +71,7 @@\n \"\"\"\n Collect all bears from bear directories that have a matching kind.\n \n- :param bear_dirs: directories that can contain bears\n+ :param bear_dirs: directory name or list of such that can contain bears\n :param bear_names: names of bears\n :param kinds: list of bear kinds to be collected\n :param log_printer: log_printer to handle logging\n@@ -93,7 +96,7 @@\n \"\"\"\n Collect all bears from bear directories that have a matching kind.\n \n- :param bear_dirs: directories that can contain bears\n+ :param bear_dirs: directory name or list of such that can contain bears\n :param bear_names: names of bears\n :param kinds: list of bear kinds to be collected\n :param log_printer: log_printer to handle logging\n", "issue": "glob/collecting: Accept strings too\nthe methods there want a list but `glob(\"string\")` makes sense too (and behaves very strangely), we should support that.\n\n", "before_files": [{"content": "import os\n\nfrom coalib.collecting.Importers import iimport_objects\nfrom coalib.misc.Decorators import yield_once\nfrom coalib.misc.i18n import _\nfrom coalib.parsing.Globbing import iglob\n\n\ndef _yield_if_right_kind(bear_class, kinds):\n try:\n if bear_class.kind() in kinds:\n yield bear_class\n except NotImplementedError:\n pass\n\n\ndef _import_bears(file_path, kinds):\n # recursive imports:\n for bear_list in iimport_objects(file_path,\n names='__additional_bears__',\n types=list):\n for bear_class in bear_list:\n for valid_bear_class in _yield_if_right_kind(bear_class, kinds):\n yield valid_bear_class\n # normal import\n for bear_class in iimport_objects(file_path,\n attributes='kind',\n local=True):\n for valid_bear_class in _yield_if_right_kind(bear_class, kinds):\n yield valid_bear_class\n\n\n@yield_once\ndef icollect(file_paths):\n \"\"\"\n Evaluate globs in file paths and return all matching files.\n\n :param file_paths: list of file paths that can include globs\n :return: iterator that yields paths of all matching files\n \"\"\"\n for file_path in file_paths:\n for match in iglob(file_path):\n yield match\n\n\ndef collect_files(file_paths):\n \"\"\"\n Evaluate globs in file paths and return all matching files\n\n :param file_paths: list of file paths 
that can include globs\n :return: list of paths of all matching files\n \"\"\"\n return list(filter(os.path.isfile, icollect(file_paths)))\n\n\ndef collect_dirs(dir_paths):\n \"\"\"\n Evaluate globs in directory paths and return all matching directories\n\n :param dir_paths: list of file paths that can include globs\n :return: list of paths of all matching directories\n \"\"\"\n return list(filter(os.path.isdir, icollect(dir_paths)))\n\n\n@yield_once\ndef icollect_bears(bear_dirs, bear_names, kinds, log_printer):\n \"\"\"\n Collect all bears from bear directories that have a matching kind.\n\n :param bear_dirs: directories that can contain bears\n :param bear_names: names of bears\n :param kinds: list of bear kinds to be collected\n :param log_printer: log_printer to handle logging\n :return: iterator that yields bear classes\n \"\"\"\n for bear_dir in filter(os.path.isdir, icollect(bear_dirs)):\n for bear_name in bear_names:\n for matching_file in iglob(\n os.path.join(bear_dir, bear_name + '.py')):\n\n try:\n for bear in _import_bears(matching_file, kinds):\n yield bear\n except:\n log_printer.warn(_(\"Unable to collect bears from {file}. \"\n \"Probably the file is malformed or \"\n \"the module code raises an exception.\")\n .format(file=matching_file))\n\n\ndef collect_bears(bear_dirs, bear_names, kinds, log_printer):\n \"\"\"\n Collect all bears from bear directories that have a matching kind.\n\n :param bear_dirs: directories that can contain bears\n :param bear_names: names of bears\n :param kinds: list of bear kinds to be collected\n :param log_printer: log_printer to handle logging\n :return: list of matching bear classes\n \"\"\"\n return list(icollect_bears(bear_dirs, bear_names, kinds, log_printer))\n", "path": "coalib/collecting/Collectors.py"}], "after_files": [{"content": "import os\n\nfrom coalib.collecting.Importers import iimport_objects\nfrom coalib.misc.Decorators import yield_once\nfrom coalib.misc.i18n import _\nfrom coalib.parsing.Globbing import iglob\n\n\ndef _yield_if_right_kind(bear_class, kinds):\n try:\n if bear_class.kind() in kinds:\n yield bear_class\n except NotImplementedError:\n pass\n\n\ndef _import_bears(file_path, kinds):\n # recursive imports:\n for bear_list in iimport_objects(file_path,\n names='__additional_bears__',\n types=list):\n for bear_class in bear_list:\n for valid_bear_class in _yield_if_right_kind(bear_class, kinds):\n yield valid_bear_class\n # normal import\n for bear_class in iimport_objects(file_path,\n attributes='kind',\n local=True):\n for valid_bear_class in _yield_if_right_kind(bear_class, kinds):\n yield valid_bear_class\n\n\n@yield_once\ndef icollect(file_paths):\n \"\"\"\n Evaluate globs in file paths and return all matching files.\n\n :param file_paths: file path or list of such that can include globs\n :return: iterator that yields paths of all matching files\n \"\"\"\n if isinstance(file_paths, str):\n file_paths = [file_paths]\n\n for file_path in file_paths:\n for match in iglob(file_path):\n yield match\n\n\ndef collect_files(file_paths):\n \"\"\"\n Evaluate globs in file paths and return all matching files\n\n :param file_paths: file path or list of such that can include globs\n :return: list of paths of all matching files\n \"\"\"\n return list(filter(os.path.isfile, icollect(file_paths)))\n\n\ndef collect_dirs(dir_paths):\n \"\"\"\n Evaluate globs in directory paths and return all matching directories\n\n :param dir_paths: file path or list of such that can include globs\n :return: list of paths of all matching 
directories\n \"\"\"\n return list(filter(os.path.isdir, icollect(dir_paths)))\n\n\n@yield_once\ndef icollect_bears(bear_dirs, bear_names, kinds, log_printer):\n \"\"\"\n Collect all bears from bear directories that have a matching kind.\n\n :param bear_dirs: directory name or list of such that can contain bears\n :param bear_names: names of bears\n :param kinds: list of bear kinds to be collected\n :param log_printer: log_printer to handle logging\n :return: iterator that yields bear classes\n \"\"\"\n for bear_dir in filter(os.path.isdir, icollect(bear_dirs)):\n for bear_name in bear_names:\n for matching_file in iglob(\n os.path.join(bear_dir, bear_name + '.py')):\n\n try:\n for bear in _import_bears(matching_file, kinds):\n yield bear\n except:\n log_printer.warn(_(\"Unable to collect bears from {file}. \"\n \"Probably the file is malformed or \"\n \"the module code raises an exception.\")\n .format(file=matching_file))\n\n\ndef collect_bears(bear_dirs, bear_names, kinds, log_printer):\n \"\"\"\n Collect all bears from bear directories that have a matching kind.\n\n :param bear_dirs: directory name or list of such that can contain bears\n :param bear_names: names of bears\n :param kinds: list of bear kinds to be collected\n :param log_printer: log_printer to handle logging\n :return: list of matching bear classes\n \"\"\"\n return list(icollect_bears(bear_dirs, bear_names, kinds, log_printer))\n", "path": "coalib/collecting/Collectors.py"}]} | 1,258 | 553 |
gh_patches_debug_11204 | rasdani/github-patches | git_diff | svthalia__concrexit-1736 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'Event' object has no attribute 'title_en'
Sentry Issue: [CONCREXIT-6D](https://sentry.io/organizations/thalia/issues/2465590057/?referrer=github_integration)
```
AttributeError: 'Event' object has no attribute 'title_en'
(9 additional frame(s) were not displayed)
...
File "rest_framework/mixins.py", line 68, in update
self.perform_update(serializer)
File "pizzas/api/v1/viewsets.py", line 105, in perform_update
self._update_payment(
File "pizzas/api/v1/viewsets.py", line 114, in _update_payment
order.payment = create_payment(order, processed_by, payment_type)
File "payments/services.py", line 67, in create_payment
notes=payable.payment_notes,
File "pizzas/payables.py", line 21, in payment_notes
f"Food order by {self.model.member_name} "
```
--- END ISSUE ---
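
As an aside (not part of the dataset record): the traceback indicates that the `_en`-suffixed translated field no longer exists on the model, so any stale `title_en` access raises `AttributeError`. A stand-in sketch (plain class, not the real Django model) showing the failure mode and a defensive lookup:

```python
class Event:
    """Stand-in for the Event model after translated fields were removed."""

    def __init__(self, title):
        self.title = title


event = Event('Pizza night')
print(event.title)                        # 'Pizza night'
print(getattr(event, 'title_en', None))   # None instead of AttributeError
```

The golden diff further below resolves it at the call sites by switching `event.title_en` to `event.title`.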
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/pizzas/payables.py`
Content:
```
1 from django.template.defaultfilters import date
2
3 from payments import Payable, payables
4 from pizzas.models import FoodOrder
5 from pizzas.services import can_change_order
6
7
8 class FoodOrderPayable(Payable):
9 @property
10 def payment_amount(self):
11 return self.model.product.price
12
13 @property
14 def payment_topic(self):
15 start_date = date(self.model.food_event.start, "Y-m-d")
16 return f"Food {self.model.food_event.event.title_en} [{start_date}]"
17
18 @property
19 def payment_notes(self):
20 return (
21 f"Food order by {self.model.member_name} "
22 f"for {self.model.food_event.event.title_en}"
23 )
24
25 @property
26 def payment_payer(self):
27 return self.model.member
28
29 def can_manage_payment(self, member):
30 return can_change_order(member, self.model.food_event)
31
32
33 def register():
34 payables.register(FoodOrder, FoodOrderPayable)
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/pizzas/payables.py b/website/pizzas/payables.py
--- a/website/pizzas/payables.py
+++ b/website/pizzas/payables.py
@@ -13,13 +13,13 @@
@property
def payment_topic(self):
start_date = date(self.model.food_event.start, "Y-m-d")
- return f"Food {self.model.food_event.event.title_en} [{start_date}]"
+ return f"Food {self.model.food_event.event.title} [{start_date}]"
@property
def payment_notes(self):
return (
f"Food order by {self.model.member_name} "
- f"for {self.model.food_event.event.title_en}"
+ f"for {self.model.food_event.event.title}"
)
@property
| {"golden_diff": "diff --git a/website/pizzas/payables.py b/website/pizzas/payables.py\n--- a/website/pizzas/payables.py\n+++ b/website/pizzas/payables.py\n@@ -13,13 +13,13 @@\n @property\n def payment_topic(self):\n start_date = date(self.model.food_event.start, \"Y-m-d\")\n- return f\"Food {self.model.food_event.event.title_en} [{start_date}]\"\n+ return f\"Food {self.model.food_event.event.title} [{start_date}]\"\n \n @property\n def payment_notes(self):\n return (\n f\"Food order by {self.model.member_name} \"\n- f\"for {self.model.food_event.event.title_en}\"\n+ f\"for {self.model.food_event.event.title}\"\n )\n \n @property\n", "issue": "AttributeError: 'Event' object has no attribute 'title_en'\nSentry Issue: [CONCREXIT-6D](https://sentry.io/organizations/thalia/issues/2465590057/?referrer=github_integration)\n\n```\nAttributeError: 'Event' object has no attribute 'title_en'\n(9 additional frame(s) were not displayed)\n...\n File \"rest_framework/mixins.py\", line 68, in update\n self.perform_update(serializer)\n File \"pizzas/api/v1/viewsets.py\", line 105, in perform_update\n self._update_payment(\n File \"pizzas/api/v1/viewsets.py\", line 114, in _update_payment\n order.payment = create_payment(order, processed_by, payment_type)\n File \"payments/services.py\", line 67, in create_payment\n notes=payable.payment_notes,\n File \"pizzas/payables.py\", line 21, in payment_notes\n f\"Food order by {self.model.member_name} \"\n```\n", "before_files": [{"content": "from django.template.defaultfilters import date\n\nfrom payments import Payable, payables\nfrom pizzas.models import FoodOrder\nfrom pizzas.services import can_change_order\n\n\nclass FoodOrderPayable(Payable):\n @property\n def payment_amount(self):\n return self.model.product.price\n\n @property\n def payment_topic(self):\n start_date = date(self.model.food_event.start, \"Y-m-d\")\n return f\"Food {self.model.food_event.event.title_en} [{start_date}]\"\n\n @property\n def payment_notes(self):\n return (\n f\"Food order by {self.model.member_name} \"\n f\"for {self.model.food_event.event.title_en}\"\n )\n\n @property\n def payment_payer(self):\n return self.model.member\n\n def can_manage_payment(self, member):\n return can_change_order(member, self.model.food_event)\n\n\ndef register():\n payables.register(FoodOrder, FoodOrderPayable)\n", "path": "website/pizzas/payables.py"}], "after_files": [{"content": "from django.template.defaultfilters import date\n\nfrom payments import Payable, payables\nfrom pizzas.models import FoodOrder\nfrom pizzas.services import can_change_order\n\n\nclass FoodOrderPayable(Payable):\n @property\n def payment_amount(self):\n return self.model.product.price\n\n @property\n def payment_topic(self):\n start_date = date(self.model.food_event.start, \"Y-m-d\")\n return f\"Food {self.model.food_event.event.title} [{start_date}]\"\n\n @property\n def payment_notes(self):\n return (\n f\"Food order by {self.model.member_name} \"\n f\"for {self.model.food_event.event.title}\"\n )\n\n @property\n def payment_payer(self):\n return self.model.member\n\n def can_manage_payment(self, member):\n return can_change_order(member, self.model.food_event)\n\n\ndef register():\n payables.register(FoodOrder, FoodOrderPayable)\n", "path": "website/pizzas/payables.py"}]} | 755 | 180 |
gh_patches_debug_21261 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-62 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnicodeEncodeError when the defualt value is used and contains non ascii characters.
Error occurs when the user uses the default unicode string.
Code:
```
if PY3:
cookiecutter_dict[key] = new_val
else:
cookiecutter_dict[key] = new_val.decode('utf-8')
```
Everything is okay in Python 3, but `new_val` is already unicode in 2.x.
--- END ISSUE ---
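
As an aside (not part of the dataset record): in Python 2, calling `.decode('utf-8')` on an object that is already `unicode` first encodes it with the implicit ASCII codec, which is what raises `UnicodeEncodeError` on non-ASCII defaults. A Python 3 re-enactment of the type mismatch (in Python 3 the same mistake surfaces as `AttributeError`, since `str` has no `.decode`):

```python
default = 'héllo'              # already text -- py2's unicode, py3's str
raw = default.encode('utf-8')  # roughly what py2's raw_input() handed back

print(raw.decode('utf-8'))     # bytes -> text: the only valid direction

try:
    default.decode('utf-8')    # decoding text again is the bug
except AttributeError as exc:
    print(exc)                 # 'str' object has no attribute 'decode'
```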
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/prompt.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.prompt
6 ---------------------
7
8 Functions for prompting the user for project info.
9 """
10
11 from __future__ import unicode_literals
12 import sys
13
14 PY3 = sys.version > '3'
15 if PY3:
16 iteritems = lambda d: iter(d.items())
17 else:
18 input = raw_input
19 iteritems = lambda d: d.iteritems()
20
21 def prompt_for_config(context):
22 """
23 Prompts the user to enter new config, using context as a source for the
24 field names and sample values.
25 """
26 cookiecutter_dict = {}
27
28 for key, val in iteritems(context['cookiecutter']):
29 prompt = "{0} (default is \"{1}\")? ".format(key, val)
30 new_val = input(prompt.encode('utf-8'))
31 new_val = new_val.strip()
32
33 if new_val == '':
34 new_val = val
35
36 if PY3:
37 cookiecutter_dict[key] = new_val
38 else:
39 cookiecutter_dict[key] = new_val.decode('utf-8')
40 return cookiecutter_dict
41
42
43 def query_yes_no(question, default="yes"):
44 """
45 Ask a yes/no question via `raw_input()` and return their answer.
46
47 :param question: A string that is presented to the user.
48 :param default: The presumed answer if the user just hits <Enter>.
49 It must be "yes" (the default), "no" or None (meaning
50 an answer is required of the user).
51
52 The "answer" return value is one of "yes" or "no".
53
54 Adapted from
55 http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input
56 http://code.activestate.com/recipes/577058/
57
58 """
59 valid = {"yes": True, "y": True, "ye": True, "no": False, "n": False}
60 if default is None:
61 prompt = " [y/n] "
62 elif default == "yes":
63 prompt = " [Y/n] "
64 elif default == "no":
65 prompt = " [y/N] "
66 else:
67 raise ValueError("invalid default answer: '%s'" % default)
68
69 while True:
70 sys.stdout.write(question + prompt)
71 choice = input().lower()
72
73 if default is not None and choice == '':
74 return valid[default]
75 elif choice in valid:
76 return valid[choice]
77 else:
78 sys.stdout.write("Please respond with 'yes' or 'no' "
79 "(or 'y' or 'n').\n")
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py
--- a/cookiecutter/prompt.py
+++ b/cookiecutter/prompt.py
@@ -18,6 +18,7 @@
input = raw_input
iteritems = lambda d: d.iteritems()
+
def prompt_for_config(context):
"""
Prompts the user to enter new config, using context as a source for the
@@ -27,16 +28,18 @@
for key, val in iteritems(context['cookiecutter']):
prompt = "{0} (default is \"{1}\")? ".format(key, val)
- new_val = input(prompt.encode('utf-8'))
+
+ if PY3:
+ new_val = input(prompt.encode('utf-8'))
+ else:
+ new_val = input(prompt.encode('utf-8')).decode('utf-8')
+
new_val = new_val.strip()
if new_val == '':
new_val = val
- if PY3:
- cookiecutter_dict[key] = new_val
- else:
- cookiecutter_dict[key] = new_val.decode('utf-8')
+ cookiecutter_dict[key] = new_val
return cookiecutter_dict
| {"golden_diff": "diff --git a/cookiecutter/prompt.py b/cookiecutter/prompt.py\n--- a/cookiecutter/prompt.py\n+++ b/cookiecutter/prompt.py\n@@ -18,6 +18,7 @@\n input = raw_input\n iteritems = lambda d: d.iteritems()\n \n+\n def prompt_for_config(context):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n@@ -27,16 +28,18 @@\n \n for key, val in iteritems(context['cookiecutter']):\n prompt = \"{0} (default is \\\"{1}\\\")? \".format(key, val)\n- new_val = input(prompt.encode('utf-8'))\n+\n+ if PY3:\n+ new_val = input(prompt.encode('utf-8'))\n+ else:\n+ new_val = input(prompt.encode('utf-8')).decode('utf-8')\n+\n new_val = new_val.strip()\n \n if new_val == '':\n new_val = val\n \n- if PY3:\n- cookiecutter_dict[key] = new_val\n- else:\n- cookiecutter_dict[key] = new_val.decode('utf-8')\n+ cookiecutter_dict[key] = new_val\n return cookiecutter_dict\n", "issue": "UnicodeEncodeError when the defualt value is used and contains non ascii characters.\nError occurs when the user uses the default unicode string.\n\nCode:\n\n```\nif PY3:\n cookiecutter_dict[key] = new_val\nelse:\n cookiecutter_dict[key] = new_val.decode('utf-8')\n```\n\nEverything is okay in Python 3, but `new_val` is already unicode in 2.x.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.prompt\n---------------------\n\nFunctions for prompting the user for project info.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport sys\n\nPY3 = sys.version > '3'\nif PY3:\n iteritems = lambda d: iter(d.items())\nelse:\n input = raw_input\n iteritems = lambda d: d.iteritems()\n\ndef prompt_for_config(context):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n field names and sample values.\n \"\"\"\n cookiecutter_dict = {}\n\n for key, val in iteritems(context['cookiecutter']):\n prompt = \"{0} (default is \\\"{1}\\\")? 
\".format(key, val)\n new_val = input(prompt.encode('utf-8'))\n new_val = new_val.strip()\n\n if new_val == '':\n new_val = val\n\n if PY3:\n cookiecutter_dict[key] = new_val\n else:\n cookiecutter_dict[key] = new_val.decode('utf-8')\n return cookiecutter_dict\n\n\ndef query_yes_no(question, default=\"yes\"):\n \"\"\"\n Ask a yes/no question via `raw_input()` and return their answer.\n\n :param question: A string that is presented to the user.\n :param default: The presumed answer if the user just hits <Enter>.\n It must be \"yes\" (the default), \"no\" or None (meaning\n an answer is required of the user).\n\n The \"answer\" return value is one of \"yes\" or \"no\".\n\n Adapted from\n http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input\n http://code.activestate.com/recipes/577058/\n\n \"\"\"\n valid = {\"yes\": True, \"y\": True, \"ye\": True, \"no\": False, \"n\": False}\n if default is None:\n prompt = \" [y/n] \"\n elif default == \"yes\":\n prompt = \" [Y/n] \"\n elif default == \"no\":\n prompt = \" [y/N] \"\n else:\n raise ValueError(\"invalid default answer: '%s'\" % default)\n\n while True:\n sys.stdout.write(question + prompt)\n choice = input().lower()\n\n if default is not None and choice == '':\n return valid[default]\n elif choice in valid:\n return valid[choice]\n else:\n sys.stdout.write(\"Please respond with 'yes' or 'no' \"\n \"(or 'y' or 'n').\\n\")\n", "path": "cookiecutter/prompt.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.prompt\n---------------------\n\nFunctions for prompting the user for project info.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport sys\n\nPY3 = sys.version > '3'\nif PY3:\n iteritems = lambda d: iter(d.items())\nelse:\n input = raw_input\n iteritems = lambda d: d.iteritems()\n\n\ndef prompt_for_config(context):\n \"\"\"\n Prompts the user to enter new config, using context as a source for the\n field names and sample values.\n \"\"\"\n cookiecutter_dict = {}\n\n for key, val in iteritems(context['cookiecutter']):\n prompt = \"{0} (default is \\\"{1}\\\")? 
\".format(key, val)\n\n if PY3:\n new_val = input(prompt.encode('utf-8'))\n else:\n new_val = input(prompt.encode('utf-8')).decode('utf-8')\n\n new_val = new_val.strip()\n\n if new_val == '':\n new_val = val\n\n cookiecutter_dict[key] = new_val\n return cookiecutter_dict\n\n\ndef query_yes_no(question, default=\"yes\"):\n \"\"\"\n Ask a yes/no question via `raw_input()` and return their answer.\n\n :param question: A string that is presented to the user.\n :param default: The presumed answer if the user just hits <Enter>.\n It must be \"yes\" (the default), \"no\" or None (meaning\n an answer is required of the user).\n\n The \"answer\" return value is one of \"yes\" or \"no\".\n\n Adapted from\n http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input\n http://code.activestate.com/recipes/577058/\n\n \"\"\"\n valid = {\"yes\": True, \"y\": True, \"ye\": True, \"no\": False, \"n\": False}\n if default is None:\n prompt = \" [y/n] \"\n elif default == \"yes\":\n prompt = \" [Y/n] \"\n elif default == \"no\":\n prompt = \" [y/N] \"\n else:\n raise ValueError(\"invalid default answer: '%s'\" % default)\n\n while True:\n sys.stdout.write(question + prompt)\n choice = input().lower()\n\n if default is not None and choice == '':\n return valid[default]\n elif choice in valid:\n return valid[choice]\n else:\n sys.stdout.write(\"Please respond with 'yes' or 'no' \"\n \"(or 'y' or 'n').\\n\")\n", "path": "cookiecutter/prompt.py"}]} | 1,071 | 281 |
gh_patches_debug_32478 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-2080 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
format_baggage does not escape non-ascii in baggage keys
https://github.com/open-telemetry/opentelemetry-python/blob/4250078e43ddb24c88e19270c7af01ae63336fb9/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py#L100
The cpp implementation does this, and it looks like in python test strings there is url encoding.
https://github.com/open-telemetry/opentelemetry-cpp/blob/61d3c5e318830d10a0859befa046aa4847593764/api/include/opentelemetry/baggage/baggage.h#L174
--- END ISSUE ---
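
As an aside (not part of the dataset record): a runnable round-trip showing why keys need the same percent-encoding as values; without it, a non-ASCII key would be emitted raw into the `baggage` header.

```python
from urllib.parse import quote_plus, unquote_plus

key, value = 'clé', 'königsberg'
pair = '{}={}'.format(quote_plus(key), quote_plus(value))
print(pair)  # cl%C3%A9=k%C3%B6nigsberg

name, _, val = pair.partition('=')
assert (unquote_plus(name), unquote_plus(val)) == (key, value)
```

The golden diff further below applies exactly this: `quote_plus(str(key))` in `_format_baggage`, with `unquote_plus` on extraction.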
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 import typing
16 import urllib.parse
17
18 from opentelemetry import baggage
19 from opentelemetry.context import get_current
20 from opentelemetry.context.context import Context
21 from opentelemetry.propagators import textmap
22
23
24 class W3CBaggagePropagator(textmap.TextMapPropagator):
25 """Extracts and injects Baggage which is used to annotate telemetry."""
26
27 _MAX_HEADER_LENGTH = 8192
28 _MAX_PAIR_LENGTH = 4096
29 _MAX_PAIRS = 180
30 _BAGGAGE_HEADER_NAME = "baggage"
31
32 def extract(
33 self,
34 carrier: textmap.CarrierT,
35 context: typing.Optional[Context] = None,
36 getter: textmap.Getter = textmap.default_getter,
37 ) -> Context:
38 """Extract Baggage from the carrier.
39
40 See
41 `opentelemetry.propagators.textmap.TextMapPropagator.extract`
42 """
43
44 if context is None:
45 context = get_current()
46
47 header = _extract_first_element(
48 getter.get(carrier, self._BAGGAGE_HEADER_NAME)
49 )
50
51 if not header or len(header) > self._MAX_HEADER_LENGTH:
52 return context
53
54 baggage_entries = header.split(",")
55 total_baggage_entries = self._MAX_PAIRS
56 for entry in baggage_entries:
57 if total_baggage_entries <= 0:
58 return context
59 total_baggage_entries -= 1
60 if len(entry) > self._MAX_PAIR_LENGTH:
61 continue
62 try:
63 name, value = entry.split("=", 1)
64 except Exception: # pylint: disable=broad-except
65 continue
66 context = baggage.set_baggage(
67 urllib.parse.unquote(name).strip(),
68 urllib.parse.unquote(value).strip(),
69 context=context,
70 )
71
72 return context
73
74 def inject(
75 self,
76 carrier: textmap.CarrierT,
77 context: typing.Optional[Context] = None,
78 setter: textmap.Setter = textmap.default_setter,
79 ) -> None:
80 """Injects Baggage into the carrier.
81
82 See
83 `opentelemetry.propagators.textmap.TextMapPropagator.inject`
84 """
85 baggage_entries = baggage.get_all(context=context)
86 if not baggage_entries:
87 return
88
89 baggage_string = _format_baggage(baggage_entries)
90 setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)
91
92 @property
93 def fields(self) -> typing.Set[str]:
94 """Returns a set with the fields set in `inject`."""
95 return {self._BAGGAGE_HEADER_NAME}
96
97
98 def _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:
99 return ",".join(
100 key + "=" + urllib.parse.quote_plus(str(value))
101 for key, value in baggage_entries.items()
102 )
103
104
105 def _extract_first_element(
106 items: typing.Optional[typing.Iterable[textmap.CarrierT]],
107 ) -> typing.Optional[textmap.CarrierT]:
108 if items is None:
109 return None
110 return next(iter(items), None)
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
--- a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
@@ -13,9 +13,9 @@
# limitations under the License.
#
import typing
-import urllib.parse
+from urllib.parse import quote_plus, unquote_plus
-from opentelemetry import baggage
+from opentelemetry.baggage import get_all, set_baggage
from opentelemetry.context import get_current
from opentelemetry.context.context import Context
from opentelemetry.propagators import textmap
@@ -63,9 +63,9 @@
name, value = entry.split("=", 1)
except Exception: # pylint: disable=broad-except
continue
- context = baggage.set_baggage(
- urllib.parse.unquote(name).strip(),
- urllib.parse.unquote(value).strip(),
+ context = set_baggage(
+ unquote_plus(name).strip(),
+ unquote_plus(value).strip(),
context=context,
)
@@ -82,7 +82,7 @@
See
`opentelemetry.propagators.textmap.TextMapPropagator.inject`
"""
- baggage_entries = baggage.get_all(context=context)
+ baggage_entries = get_all(context=context)
if not baggage_entries:
return
@@ -97,7 +97,7 @@
def _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:
return ",".join(
- key + "=" + urllib.parse.quote_plus(str(value))
+ quote_plus(str(key)) + "=" + quote_plus(str(value))
for key, value in baggage_entries.items()
)
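
For illustration (not part of the dataset record): the patched `_format_baggage`, re-declared standalone so the effect on a non-ASCII key is visible without installing the package.

```python
from urllib.parse import quote_plus


def format_baggage(entries):
    # Mirrors the patched function: key and value are both escaped.
    return ','.join(
        '{}={}'.format(quote_plus(str(k)), quote_plus(str(v)))
        for k, v in entries.items()
    )


print(format_baggage({'userId': 'alice', 'clé': 'välue'}))
# userId=alice,cl%C3%A9=v%C3%A4lue
```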
| {"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n@@ -13,9 +13,9 @@\n # limitations under the License.\n #\n import typing\n-import urllib.parse\n+from urllib.parse import quote_plus, unquote_plus\n \n-from opentelemetry import baggage\n+from opentelemetry.baggage import get_all, set_baggage\n from opentelemetry.context import get_current\n from opentelemetry.context.context import Context\n from opentelemetry.propagators import textmap\n@@ -63,9 +63,9 @@\n name, value = entry.split(\"=\", 1)\n except Exception: # pylint: disable=broad-except\n continue\n- context = baggage.set_baggage(\n- urllib.parse.unquote(name).strip(),\n- urllib.parse.unquote(value).strip(),\n+ context = set_baggage(\n+ unquote_plus(name).strip(),\n+ unquote_plus(value).strip(),\n context=context,\n )\n \n@@ -82,7 +82,7 @@\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.inject`\n \"\"\"\n- baggage_entries = baggage.get_all(context=context)\n+ baggage_entries = get_all(context=context)\n if not baggage_entries:\n return\n \n@@ -97,7 +97,7 @@\n \n def _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:\n return \",\".join(\n- key + \"=\" + urllib.parse.quote_plus(str(value))\n+ quote_plus(str(key)) + \"=\" + quote_plus(str(value))\n for key, value in baggage_entries.items()\n )\n", "issue": "format_baggage does not escape non-ascii in baggage keys\nhttps://github.com/open-telemetry/opentelemetry-python/blob/4250078e43ddb24c88e19270c7af01ae63336fb9/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py#L100\r\n\r\nThe cpp implementation does this, and it looks like in python test strings there is url encoding.\r\nhttps://github.com/open-telemetry/opentelemetry-cpp/blob/61d3c5e318830d10a0859befa046aa4847593764/api/include/opentelemetry/baggage/baggage.h#L174\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport typing\nimport urllib.parse\n\nfrom opentelemetry import baggage\nfrom opentelemetry.context import get_current\nfrom opentelemetry.context.context import Context\nfrom opentelemetry.propagators import textmap\n\n\nclass W3CBaggagePropagator(textmap.TextMapPropagator):\n \"\"\"Extracts and injects Baggage which is used to annotate telemetry.\"\"\"\n\n _MAX_HEADER_LENGTH = 8192\n _MAX_PAIR_LENGTH = 4096\n _MAX_PAIRS = 180\n _BAGGAGE_HEADER_NAME = \"baggage\"\n\n def extract(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n getter: textmap.Getter = textmap.default_getter,\n ) -> Context:\n \"\"\"Extract Baggage from the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.extract`\n \"\"\"\n\n if context is None:\n context = get_current()\n\n header = _extract_first_element(\n 
getter.get(carrier, self._BAGGAGE_HEADER_NAME)\n )\n\n if not header or len(header) > self._MAX_HEADER_LENGTH:\n return context\n\n baggage_entries = header.split(\",\")\n total_baggage_entries = self._MAX_PAIRS\n for entry in baggage_entries:\n if total_baggage_entries <= 0:\n return context\n total_baggage_entries -= 1\n if len(entry) > self._MAX_PAIR_LENGTH:\n continue\n try:\n name, value = entry.split(\"=\", 1)\n except Exception: # pylint: disable=broad-except\n continue\n context = baggage.set_baggage(\n urllib.parse.unquote(name).strip(),\n urllib.parse.unquote(value).strip(),\n context=context,\n )\n\n return context\n\n def inject(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n setter: textmap.Setter = textmap.default_setter,\n ) -> None:\n \"\"\"Injects Baggage into the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.inject`\n \"\"\"\n baggage_entries = baggage.get_all(context=context)\n if not baggage_entries:\n return\n\n baggage_string = _format_baggage(baggage_entries)\n setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)\n\n @property\n def fields(self) -> typing.Set[str]:\n \"\"\"Returns a set with the fields set in `inject`.\"\"\"\n return {self._BAGGAGE_HEADER_NAME}\n\n\ndef _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:\n return \",\".join(\n key + \"=\" + urllib.parse.quote_plus(str(value))\n for key, value in baggage_entries.items()\n )\n\n\ndef _extract_first_element(\n items: typing.Optional[typing.Iterable[textmap.CarrierT]],\n) -> typing.Optional[textmap.CarrierT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport typing\nfrom urllib.parse import quote_plus, unquote_plus\n\nfrom opentelemetry.baggage import get_all, set_baggage\nfrom opentelemetry.context import get_current\nfrom opentelemetry.context.context import Context\nfrom opentelemetry.propagators import textmap\n\n\nclass W3CBaggagePropagator(textmap.TextMapPropagator):\n \"\"\"Extracts and injects Baggage which is used to annotate telemetry.\"\"\"\n\n _MAX_HEADER_LENGTH = 8192\n _MAX_PAIR_LENGTH = 4096\n _MAX_PAIRS = 180\n _BAGGAGE_HEADER_NAME = \"baggage\"\n\n def extract(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n getter: textmap.Getter = textmap.default_getter,\n ) -> Context:\n \"\"\"Extract Baggage from the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.extract`\n \"\"\"\n\n if context is None:\n context = get_current()\n\n header = _extract_first_element(\n getter.get(carrier, self._BAGGAGE_HEADER_NAME)\n )\n\n if not header or len(header) > self._MAX_HEADER_LENGTH:\n return context\n\n baggage_entries = header.split(\",\")\n total_baggage_entries = self._MAX_PAIRS\n for entry in baggage_entries:\n if 
total_baggage_entries <= 0:\n return context\n total_baggage_entries -= 1\n if len(entry) > self._MAX_PAIR_LENGTH:\n continue\n try:\n name, value = entry.split(\"=\", 1)\n except Exception: # pylint: disable=broad-except\n continue\n context = set_baggage(\n unquote_plus(name).strip(),\n unquote_plus(value).strip(),\n context=context,\n )\n\n return context\n\n def inject(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n setter: textmap.Setter = textmap.default_setter,\n ) -> None:\n \"\"\"Injects Baggage into the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.inject`\n \"\"\"\n baggage_entries = get_all(context=context)\n if not baggage_entries:\n return\n\n baggage_string = _format_baggage(baggage_entries)\n setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)\n\n @property\n def fields(self) -> typing.Set[str]:\n \"\"\"Returns a set with the fields set in `inject`.\"\"\"\n return {self._BAGGAGE_HEADER_NAME}\n\n\ndef _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:\n return \",\".join(\n quote_plus(str(key)) + \"=\" + quote_plus(str(value))\n for key, value in baggage_entries.items()\n )\n\n\ndef _extract_first_element(\n items: typing.Optional[typing.Iterable[textmap.CarrierT]],\n) -> typing.Optional[textmap.CarrierT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py"}]} | 1,471 | 419 |
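For reference, the escaping change recorded above percent-encodes baggage keys as well as values. A minimal standalone round-trip sketch (plain `urllib.parse`, no opentelemetry dependency; the helper names are illustrative, not part of the opentelemetry-api surface):

```python
from urllib.parse import quote_plus, unquote_plus

def format_entry(key: str, value: str) -> str:
    # Mirrors the patched _format_baggage: encode both sides, not just the value.
    return quote_plus(key) + "=" + quote_plus(value)

def parse_entry(entry: str) -> tuple:
    # Mirrors the extract path: split once, then decode both sides.
    name, value = entry.split("=", 1)
    return unquote_plus(name), unquote_plus(value)

header = format_entry("clé", "weiß wert")
print(header)                               # cl%C3%A9=wei%C3%9F+wert
assert parse_entry(header) == ("clé", "weiß wert")
```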
gh_patches_debug_9536 | rasdani/github-patches | git_diff | ibis-project__ibis-1949 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
More informative IntegrityError on duplicate columns
```
~/Workspace/ibis/ibis/expr/schema.py in __init__(self, names, types)
32
33 if len(self._name_locs) < len(self.names):
---> 34 raise com.IntegrityError('Duplicate column names')
35
36 def __repr__(self):
IntegrityError: Duplicate column names
```
List the particular columns at least.
--- END ISSUE ---
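Before the code, a minimal reproduction of the complaint (assuming an ibis build from before any fix; `ibis.schema` routes through the `Schema` constructor shown in the file below):

```python
import ibis

# Two columns share the name "a"; the raised error names neither of them.
ibis.schema([("a", "int64"), ("a", "string")])
# ibis.common.exceptions.IntegrityError: Duplicate column names
```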
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ibis/expr/schema.py`
Content:
```
1 import collections
2
3 from multipledispatch import Dispatcher
4
5 import ibis.common.exceptions as com
6 import ibis.expr.datatypes as dt
7 import ibis.util as util
8
9
10 class Schema:
11
12 """An object for holding table schema information, i.e., column names and
13 types.
14
15 Parameters
16 ----------
17 names : Sequence[str]
18 A sequence of ``str`` indicating the name of each column.
19 types : Sequence[DataType]
20 A sequence of :class:`ibis.expr.datatypes.DataType` objects
21 representing type of each column.
22 """
23
24 __slots__ = 'names', 'types', '_name_locs'
25
26 def __init__(self, names, types):
27 if not isinstance(names, list):
28 names = list(names)
29
30 self.names = names
31 self.types = list(map(dt.dtype, types))
32
33 self._name_locs = dict((v, i) for i, v in enumerate(self.names))
34
35 if len(self._name_locs) < len(self.names):
36 raise com.IntegrityError('Duplicate column names')
37
38 def __repr__(self):
39 space = 2 + max(map(len, self.names), default=0)
40 return "ibis.Schema {{{}\n}}".format(
41 util.indent(
42 ''.join(
43 '\n{}{}'.format(name.ljust(space), str(type))
44 for name, type in zip(self.names, self.types)
45 ),
46 2,
47 )
48 )
49
50 def __hash__(self):
51 return hash((type(self), tuple(self.names), tuple(self.types)))
52
53 def __len__(self):
54 return len(self.names)
55
56 def __iter__(self):
57 return iter(self.names)
58
59 def __contains__(self, name):
60 return name in self._name_locs
61
62 def __getitem__(self, name):
63 return self.types[self._name_locs[name]]
64
65 def __getstate__(self):
66 return {slot: getattr(self, slot) for slot in self.__class__.__slots__}
67
68 def __setstate__(self, instance_dict):
69 for key, value in instance_dict.items():
70 setattr(self, key, value)
71
72 def delete(self, names_to_delete):
73 for name in names_to_delete:
74 if name not in self:
75 raise KeyError(name)
76
77 new_names, new_types = [], []
78 for name, type_ in zip(self.names, self.types):
79 if name in names_to_delete:
80 continue
81 new_names.append(name)
82 new_types.append(type_)
83
84 return Schema(new_names, new_types)
85
86 @classmethod
87 def from_tuples(cls, values):
88 if not isinstance(values, (list, tuple)):
89 values = list(values)
90
91 names, types = zip(*values) if values else ([], [])
92 return Schema(names, types)
93
94 @classmethod
95 def from_dict(cls, dictionary):
96 return Schema(*zip(*dictionary.items()))
97
98 def equals(self, other, cache=None):
99 return self.names == other.names and self.types == other.types
100
101 def __eq__(self, other):
102 return self.equals(other)
103
104 def __gt__(self, other):
105 return set(self.items()) > set(other.items())
106
107 def __ge__(self, other):
108 return set(self.items()) >= set(other.items())
109
110 def append(self, schema):
111 return Schema(self.names + schema.names, self.types + schema.types)
112
113 def items(self):
114 return zip(self.names, self.types)
115
116 def name_at_position(self, i):
117 """
118 """
119 upper = len(self.names) - 1
120 if not 0 <= i <= upper:
121 raise ValueError(
122 'Column index must be between 0 and {:d}, inclusive'.format(
123 upper
124 )
125 )
126 return self.names[i]
127
128
129 class HasSchema:
130
131 """
132 Base class representing a structured dataset with a well-defined
133 schema.
134
135 Base implementation is for tables that do not reference a particular
136 concrete dataset or database table.
137 """
138
139 def __repr__(self):
140 return '{}({})'.format(type(self).__name__, repr(self.schema))
141
142 def has_schema(self):
143 return True
144
145 def equals(self, other, cache=None):
146 return type(self) == type(other) and self.schema.equals(
147 other.schema, cache=cache
148 )
149
150 def root_tables(self):
151 return [self]
152
153 @property
154 def schema(self):
155 raise NotImplementedError
156
157
158 schema = Dispatcher('schema')
159 infer = Dispatcher('infer')
160
161
162 @schema.register(Schema)
163 def identity(s):
164 return s
165
166
167 @schema.register(collections.abc.Mapping)
168 def schema_from_mapping(d):
169 return Schema.from_dict(d)
170
171
172 @schema.register(collections.abc.Iterable)
173 def schema_from_pairs(lst):
174 return Schema.from_tuples(lst)
175
176
177 @schema.register(collections.abc.Iterable, collections.abc.Iterable)
178 def schema_from_names_types(names, types):
179 return Schema(names, types)
180
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ibis/expr/schema.py b/ibis/expr/schema.py
--- a/ibis/expr/schema.py
+++ b/ibis/expr/schema.py
@@ -33,7 +33,11 @@
self._name_locs = dict((v, i) for i, v in enumerate(self.names))
if len(self._name_locs) < len(self.names):
- raise com.IntegrityError('Duplicate column names')
+ duplicate_names = list(self.names)
+ for v in self._name_locs.keys():
+ duplicate_names.remove(v)
+ raise com.IntegrityError(
+ 'Duplicate column name(s): {}'.format(duplicate_names))
def __repr__(self):
space = 2 + max(map(len, self.names), default=0)
| {"golden_diff": "diff --git a/ibis/expr/schema.py b/ibis/expr/schema.py\n--- a/ibis/expr/schema.py\n+++ b/ibis/expr/schema.py\n@@ -33,7 +33,11 @@\n self._name_locs = dict((v, i) for i, v in enumerate(self.names))\n \n if len(self._name_locs) < len(self.names):\n- raise com.IntegrityError('Duplicate column names')\n+ duplicate_names = list(self.names)\n+ for v in self._name_locs.keys():\n+ duplicate_names.remove(v)\n+ raise com.IntegrityError(\n+ 'Duplicate column name(s): {}'.format(duplicate_names))\n \n def __repr__(self):\n space = 2 + max(map(len, self.names), default=0)\n", "issue": "More informative IntegrityError on duplicate columns\n```\r\n~/Workspace/ibis/ibis/expr/schema.py in __init__(self, names, types)\r\n 32 \r\n 33 if len(self._name_locs) < len(self.names):\r\n---> 34 raise com.IntegrityError('Duplicate column names')\r\n 35 \r\n 36 def __repr__(self):\r\n\r\nIntegrityError: Duplicate column names\r\n```\r\n\r\nList the particular columns at least.\n", "before_files": [{"content": "import collections\n\nfrom multipledispatch import Dispatcher\n\nimport ibis.common.exceptions as com\nimport ibis.expr.datatypes as dt\nimport ibis.util as util\n\n\nclass Schema:\n\n \"\"\"An object for holding table schema information, i.e., column names and\n types.\n\n Parameters\n ----------\n names : Sequence[str]\n A sequence of ``str`` indicating the name of each column.\n types : Sequence[DataType]\n A sequence of :class:`ibis.expr.datatypes.DataType` objects\n representing type of each column.\n \"\"\"\n\n __slots__ = 'names', 'types', '_name_locs'\n\n def __init__(self, names, types):\n if not isinstance(names, list):\n names = list(names)\n\n self.names = names\n self.types = list(map(dt.dtype, types))\n\n self._name_locs = dict((v, i) for i, v in enumerate(self.names))\n\n if len(self._name_locs) < len(self.names):\n raise com.IntegrityError('Duplicate column names')\n\n def __repr__(self):\n space = 2 + max(map(len, self.names), default=0)\n return \"ibis.Schema {{{}\\n}}\".format(\n util.indent(\n ''.join(\n '\\n{}{}'.format(name.ljust(space), str(type))\n for name, type in zip(self.names, self.types)\n ),\n 2,\n )\n )\n\n def __hash__(self):\n return hash((type(self), tuple(self.names), tuple(self.types)))\n\n def __len__(self):\n return len(self.names)\n\n def __iter__(self):\n return iter(self.names)\n\n def __contains__(self, name):\n return name in self._name_locs\n\n def __getitem__(self, name):\n return self.types[self._name_locs[name]]\n\n def __getstate__(self):\n return {slot: getattr(self, slot) for slot in self.__class__.__slots__}\n\n def __setstate__(self, instance_dict):\n for key, value in instance_dict.items():\n setattr(self, key, value)\n\n def delete(self, names_to_delete):\n for name in names_to_delete:\n if name not in self:\n raise KeyError(name)\n\n new_names, new_types = [], []\n for name, type_ in zip(self.names, self.types):\n if name in names_to_delete:\n continue\n new_names.append(name)\n new_types.append(type_)\n\n return Schema(new_names, new_types)\n\n @classmethod\n def from_tuples(cls, values):\n if not isinstance(values, (list, tuple)):\n values = list(values)\n\n names, types = zip(*values) if values else ([], [])\n return Schema(names, types)\n\n @classmethod\n def from_dict(cls, dictionary):\n return Schema(*zip(*dictionary.items()))\n\n def equals(self, other, cache=None):\n return self.names == other.names and self.types == other.types\n\n def __eq__(self, other):\n return self.equals(other)\n\n def __gt__(self, other):\n return 
set(self.items()) > set(other.items())\n\n def __ge__(self, other):\n return set(self.items()) >= set(other.items())\n\n def append(self, schema):\n return Schema(self.names + schema.names, self.types + schema.types)\n\n def items(self):\n return zip(self.names, self.types)\n\n def name_at_position(self, i):\n \"\"\"\n \"\"\"\n upper = len(self.names) - 1\n if not 0 <= i <= upper:\n raise ValueError(\n 'Column index must be between 0 and {:d}, inclusive'.format(\n upper\n )\n )\n return self.names[i]\n\n\nclass HasSchema:\n\n \"\"\"\n Base class representing a structured dataset with a well-defined\n schema.\n\n Base implementation is for tables that do not reference a particular\n concrete dataset or database table.\n \"\"\"\n\n def __repr__(self):\n return '{}({})'.format(type(self).__name__, repr(self.schema))\n\n def has_schema(self):\n return True\n\n def equals(self, other, cache=None):\n return type(self) == type(other) and self.schema.equals(\n other.schema, cache=cache\n )\n\n def root_tables(self):\n return [self]\n\n @property\n def schema(self):\n raise NotImplementedError\n\n\nschema = Dispatcher('schema')\ninfer = Dispatcher('infer')\n\n\[email protected](Schema)\ndef identity(s):\n return s\n\n\[email protected](collections.abc.Mapping)\ndef schema_from_mapping(d):\n return Schema.from_dict(d)\n\n\[email protected](collections.abc.Iterable)\ndef schema_from_pairs(lst):\n return Schema.from_tuples(lst)\n\n\[email protected](collections.abc.Iterable, collections.abc.Iterable)\ndef schema_from_names_types(names, types):\n return Schema(names, types)\n", "path": "ibis/expr/schema.py"}], "after_files": [{"content": "import collections\n\nfrom multipledispatch import Dispatcher\n\nimport ibis.common.exceptions as com\nimport ibis.expr.datatypes as dt\nimport ibis.util as util\n\n\nclass Schema:\n\n \"\"\"An object for holding table schema information, i.e., column names and\n types.\n\n Parameters\n ----------\n names : Sequence[str]\n A sequence of ``str`` indicating the name of each column.\n types : Sequence[DataType]\n A sequence of :class:`ibis.expr.datatypes.DataType` objects\n representing type of each column.\n \"\"\"\n\n __slots__ = 'names', 'types', '_name_locs'\n\n def __init__(self, names, types):\n if not isinstance(names, list):\n names = list(names)\n\n self.names = names\n self.types = list(map(dt.dtype, types))\n\n self._name_locs = dict((v, i) for i, v in enumerate(self.names))\n\n if len(self._name_locs) < len(self.names):\n duplicate_names = list(self.names)\n for v in self._name_locs.keys():\n duplicate_names.remove(v)\n raise com.IntegrityError(\n 'Duplicate column name(s): {}'.format(duplicate_names))\n\n def __repr__(self):\n space = 2 + max(map(len, self.names), default=0)\n return \"ibis.Schema {{{}\\n}}\".format(\n util.indent(\n ''.join(\n '\\n{}{}'.format(name.ljust(space), str(type))\n for name, type in zip(self.names, self.types)\n ),\n 2,\n )\n )\n\n def __hash__(self):\n return hash((type(self), tuple(self.names), tuple(self.types)))\n\n def __len__(self):\n return len(self.names)\n\n def __iter__(self):\n return iter(self.names)\n\n def __contains__(self, name):\n return name in self._name_locs\n\n def __getitem__(self, name):\n return self.types[self._name_locs[name]]\n\n def __getstate__(self):\n return {slot: getattr(self, slot) for slot in self.__class__.__slots__}\n\n def __setstate__(self, instance_dict):\n for key, value in instance_dict.items():\n setattr(self, key, value)\n\n def delete(self, names_to_delete):\n for name in 
names_to_delete:\n if name not in self:\n raise KeyError(name)\n\n new_names, new_types = [], []\n for name, type_ in zip(self.names, self.types):\n if name in names_to_delete:\n continue\n new_names.append(name)\n new_types.append(type_)\n\n return Schema(new_names, new_types)\n\n @classmethod\n def from_tuples(cls, values):\n if not isinstance(values, (list, tuple)):\n values = list(values)\n\n names, types = zip(*values) if values else ([], [])\n return Schema(names, types)\n\n @classmethod\n def from_dict(cls, dictionary):\n return Schema(*zip(*dictionary.items()))\n\n def equals(self, other, cache=None):\n return self.names == other.names and self.types == other.types\n\n def __eq__(self, other):\n return self.equals(other)\n\n def __gt__(self, other):\n return set(self.items()) > set(other.items())\n\n def __ge__(self, other):\n return set(self.items()) >= set(other.items())\n\n def append(self, schema):\n return Schema(self.names + schema.names, self.types + schema.types)\n\n def items(self):\n return zip(self.names, self.types)\n\n def name_at_position(self, i):\n \"\"\"\n \"\"\"\n upper = len(self.names) - 1\n if not 0 <= i <= upper:\n raise ValueError(\n 'Column index must be between 0 and {:d}, inclusive'.format(\n upper\n )\n )\n return self.names[i]\n\n\nclass HasSchema:\n\n \"\"\"\n Base class representing a structured dataset with a well-defined\n schema.\n\n Base implementation is for tables that do not reference a particular\n concrete dataset or database table.\n \"\"\"\n\n def __repr__(self):\n return '{}({})'.format(type(self).__name__, repr(self.schema))\n\n def has_schema(self):\n return True\n\n def equals(self, other, cache=None):\n return type(self) == type(other) and self.schema.equals(\n other.schema, cache=cache\n )\n\n def root_tables(self):\n return [self]\n\n @property\n def schema(self):\n raise NotImplementedError\n\n\nschema = Dispatcher('schema')\ninfer = Dispatcher('infer')\n\n\[email protected](Schema)\ndef identity(s):\n return s\n\n\[email protected](collections.abc.Mapping)\ndef schema_from_mapping(d):\n return Schema.from_dict(d)\n\n\[email protected](collections.abc.Iterable)\ndef schema_from_pairs(lst):\n return Schema.from_tuples(lst)\n\n\[email protected](collections.abc.Iterable, collections.abc.Iterable)\ndef schema_from_names_types(names, types):\n return Schema(names, types)\n", "path": "ibis/expr/schema.py"}]} | 1,863 | 174 |
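The duplicate-detection logic this patch adds is self-contained, so it can be sanity-checked outside ibis. A standalone sketch of the same algorithm:

```python
names = ["a", "b", "a", "c", "b"]
name_locs = {v: i for i, v in enumerate(names)}    # one slot per distinct name

if len(name_locs) < len(names):
    duplicate_names = list(names)
    for v in name_locs.keys():
        duplicate_names.remove(v)                  # drops the first copy of each name
    print("Duplicate column name(s): {}".format(duplicate_names))
    # -> Duplicate column name(s): ['a', 'b']
```

Note that `list.remove` deletes only the first occurrence, so exactly the surplus copies survive into the error message.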
gh_patches_debug_66455 | rasdani/github-patches | git_diff | pyca__cryptography-8319 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect docstrings in x25519 and x448 `.public_key()` methods
See:
https://github.com/pyca/cryptography/blob/127a2860740c77f45362e68e0ed7d2d108a39033/src/cryptography/hazmat/primitives/asymmetric/x25519.py#L60-L64
https://github.com/pyca/cryptography/blob/127a2860740c77f45362e68e0ed7d2d108a39033/src/cryptography/hazmat/primitives/asymmetric/x448.py#L60-L64
In both instances, the method does not return serialised bytes, but a public key object. The full [generated documentation](https://cryptography.io/en/latest/hazmat/primitives/asymmetric/x25519/#cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.public_key) is correct, as are the Ed* docstrings.
--- END ISSUE ---
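A quick check confirms the report for the X25519 class (this assumes the `cryptography` package is installed against an OpenSSL build with X25519 support):

```python
from cryptography.hazmat.primitives.asymmetric import x25519

private_key = x25519.X25519PrivateKey.generate()
result = private_key.public_key()

# A public key object comes back, not serialized bytes.
assert isinstance(result, x25519.X25519PublicKey)
assert not isinstance(result, bytes)
```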
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/primitives/asymmetric/x448.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5
6 import abc
7
8 from cryptography.exceptions import UnsupportedAlgorithm, _Reasons
9 from cryptography.hazmat.primitives import _serialization
10
11
12 class X448PublicKey(metaclass=abc.ABCMeta):
13 @classmethod
14 def from_public_bytes(cls, data: bytes) -> "X448PublicKey":
15 from cryptography.hazmat.backends.openssl.backend import backend
16
17 if not backend.x448_supported():
18 raise UnsupportedAlgorithm(
19 "X448 is not supported by this version of OpenSSL.",
20 _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,
21 )
22
23 return backend.x448_load_public_bytes(data)
24
25 @abc.abstractmethod
26 def public_bytes(
27 self,
28 encoding: _serialization.Encoding,
29 format: _serialization.PublicFormat,
30 ) -> bytes:
31 """
32 The serialized bytes of the public key.
33 """
34
35
36 class X448PrivateKey(metaclass=abc.ABCMeta):
37 @classmethod
38 def generate(cls) -> "X448PrivateKey":
39 from cryptography.hazmat.backends.openssl.backend import backend
40
41 if not backend.x448_supported():
42 raise UnsupportedAlgorithm(
43 "X448 is not supported by this version of OpenSSL.",
44 _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,
45 )
46 return backend.x448_generate_key()
47
48 @classmethod
49 def from_private_bytes(cls, data: bytes) -> "X448PrivateKey":
50 from cryptography.hazmat.backends.openssl.backend import backend
51
52 if not backend.x448_supported():
53 raise UnsupportedAlgorithm(
54 "X448 is not supported by this version of OpenSSL.",
55 _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,
56 )
57
58 return backend.x448_load_private_bytes(data)
59
60 @abc.abstractmethod
61 def public_key(self) -> X448PublicKey:
62 """
63 The serialized bytes of the public key.
64 """
65
66 @abc.abstractmethod
67 def private_bytes(
68 self,
69 encoding: _serialization.Encoding,
70 format: _serialization.PrivateFormat,
71 encryption_algorithm: _serialization.KeySerializationEncryption,
72 ) -> bytes:
73 """
74 The serialized bytes of the private key.
75 """
76
77 @abc.abstractmethod
78 def exchange(self, peer_public_key: X448PublicKey) -> bytes:
79 """
80 Performs a key exchange operation using the provided peer's public key.
81 """
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cryptography/hazmat/primitives/asymmetric/x448.py b/src/cryptography/hazmat/primitives/asymmetric/x448.py
--- a/src/cryptography/hazmat/primitives/asymmetric/x448.py
+++ b/src/cryptography/hazmat/primitives/asymmetric/x448.py
@@ -60,7 +60,7 @@
@abc.abstractmethod
def public_key(self) -> X448PublicKey:
"""
- The serialized bytes of the public key.
+ Returns the public key associated with this private key
"""
@abc.abstractmethod
| {"golden_diff": "diff --git a/src/cryptography/hazmat/primitives/asymmetric/x448.py b/src/cryptography/hazmat/primitives/asymmetric/x448.py\n--- a/src/cryptography/hazmat/primitives/asymmetric/x448.py\n+++ b/src/cryptography/hazmat/primitives/asymmetric/x448.py\n@@ -60,7 +60,7 @@\n @abc.abstractmethod\n def public_key(self) -> X448PublicKey:\n \"\"\"\n- The serialized bytes of the public key.\n+ Returns the public key associated with this private key\n \"\"\"\n \n @abc.abstractmethod\n", "issue": "Incorrect docstrings in x25519 and x448 `.public_key()` methods\nSee:\r\n\r\nhttps://github.com/pyca/cryptography/blob/127a2860740c77f45362e68e0ed7d2d108a39033/src/cryptography/hazmat/primitives/asymmetric/x25519.py#L60-L64\r\n\r\nhttps://github.com/pyca/cryptography/blob/127a2860740c77f45362e68e0ed7d2d108a39033/src/cryptography/hazmat/primitives/asymmetric/x448.py#L60-L64\r\n\r\nIn both instances, the method does not return serialised bytes, but a public key object. The full [generated documentation](https://cryptography.io/en/latest/hazmat/primitives/asymmetric/x25519/#cryptography.hazmat.primitives.asymmetric.x25519.X25519PrivateKey.public_key) is correct, as are the Ed* docstrings.\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\n\nimport abc\n\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import _serialization\n\n\nclass X448PublicKey(metaclass=abc.ABCMeta):\n @classmethod\n def from_public_bytes(cls, data: bytes) -> \"X448PublicKey\":\n from cryptography.hazmat.backends.openssl.backend import backend\n\n if not backend.x448_supported():\n raise UnsupportedAlgorithm(\n \"X448 is not supported by this version of OpenSSL.\",\n _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,\n )\n\n return backend.x448_load_public_bytes(data)\n\n @abc.abstractmethod\n def public_bytes(\n self,\n encoding: _serialization.Encoding,\n format: _serialization.PublicFormat,\n ) -> bytes:\n \"\"\"\n The serialized bytes of the public key.\n \"\"\"\n\n\nclass X448PrivateKey(metaclass=abc.ABCMeta):\n @classmethod\n def generate(cls) -> \"X448PrivateKey\":\n from cryptography.hazmat.backends.openssl.backend import backend\n\n if not backend.x448_supported():\n raise UnsupportedAlgorithm(\n \"X448 is not supported by this version of OpenSSL.\",\n _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,\n )\n return backend.x448_generate_key()\n\n @classmethod\n def from_private_bytes(cls, data: bytes) -> \"X448PrivateKey\":\n from cryptography.hazmat.backends.openssl.backend import backend\n\n if not backend.x448_supported():\n raise UnsupportedAlgorithm(\n \"X448 is not supported by this version of OpenSSL.\",\n _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,\n )\n\n return backend.x448_load_private_bytes(data)\n\n @abc.abstractmethod\n def public_key(self) -> X448PublicKey:\n \"\"\"\n The serialized bytes of the public key.\n \"\"\"\n\n @abc.abstractmethod\n def private_bytes(\n self,\n encoding: _serialization.Encoding,\n format: _serialization.PrivateFormat,\n encryption_algorithm: _serialization.KeySerializationEncryption,\n ) -> bytes:\n \"\"\"\n The serialized bytes of the private key.\n \"\"\"\n\n @abc.abstractmethod\n def exchange(self, peer_public_key: X448PublicKey) -> bytes:\n \"\"\"\n Performs a key exchange operation using the provided peer's public key.\n \"\"\"\n", "path": 
"src/cryptography/hazmat/primitives/asymmetric/x448.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\n\nimport abc\n\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import _serialization\n\n\nclass X448PublicKey(metaclass=abc.ABCMeta):\n @classmethod\n def from_public_bytes(cls, data: bytes) -> \"X448PublicKey\":\n from cryptography.hazmat.backends.openssl.backend import backend\n\n if not backend.x448_supported():\n raise UnsupportedAlgorithm(\n \"X448 is not supported by this version of OpenSSL.\",\n _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,\n )\n\n return backend.x448_load_public_bytes(data)\n\n @abc.abstractmethod\n def public_bytes(\n self,\n encoding: _serialization.Encoding,\n format: _serialization.PublicFormat,\n ) -> bytes:\n \"\"\"\n The serialized bytes of the public key.\n \"\"\"\n\n\nclass X448PrivateKey(metaclass=abc.ABCMeta):\n @classmethod\n def generate(cls) -> \"X448PrivateKey\":\n from cryptography.hazmat.backends.openssl.backend import backend\n\n if not backend.x448_supported():\n raise UnsupportedAlgorithm(\n \"X448 is not supported by this version of OpenSSL.\",\n _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,\n )\n return backend.x448_generate_key()\n\n @classmethod\n def from_private_bytes(cls, data: bytes) -> \"X448PrivateKey\":\n from cryptography.hazmat.backends.openssl.backend import backend\n\n if not backend.x448_supported():\n raise UnsupportedAlgorithm(\n \"X448 is not supported by this version of OpenSSL.\",\n _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,\n )\n\n return backend.x448_load_private_bytes(data)\n\n @abc.abstractmethod\n def public_key(self) -> X448PublicKey:\n \"\"\"\n Returns the public key associated with this private key\n \"\"\"\n\n @abc.abstractmethod\n def private_bytes(\n self,\n encoding: _serialization.Encoding,\n format: _serialization.PrivateFormat,\n encryption_algorithm: _serialization.KeySerializationEncryption,\n ) -> bytes:\n \"\"\"\n The serialized bytes of the private key.\n \"\"\"\n\n @abc.abstractmethod\n def exchange(self, peer_public_key: X448PublicKey) -> bytes:\n \"\"\"\n Performs a key exchange operation using the provided peer's public key.\n \"\"\"\n", "path": "src/cryptography/hazmat/primitives/asymmetric/x448.py"}]} | 1,236 | 136 |
gh_patches_debug_38411 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-5275 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support for EFS Lifecycles
According to this, it's not an available action: https://cloudcustodian.io/docs/aws/resources/efs.html
Console and CLI instructions from AWS are here: https://docs.aws.amazon.com/efs/latest/ug/enable-lifecycle-management.html
--- END ISSUE ---
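For context, lifecycle management is already exposed by the underlying AWS API, so the request reduces to wrapping a single call. A minimal boto3 sketch (the filesystem id is a placeholder and valid AWS credentials are assumed):

```python
import boto3

efs = boto3.client("efs")
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",       # placeholder id
    LifecyclePolicies=[{"TransitionToIA": "AFTER_7_DAYS"}],
)
```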
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/resources/efs.py`
Content:
```
1 # Copyright 2015-2017 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 from c7n.actions import Action
17 from c7n.filters.kms import KmsRelatedFilter
18 from c7n.manager import resources
19 from c7n.filters.vpc import SecurityGroupFilter, SubnetFilter
20 from c7n.query import QueryResourceManager, ChildResourceManager, TypeInfo
21 from c7n.tags import universal_augment
22 from c7n.utils import local_session, type_schema, get_retry
23
24
25 @resources.register('efs')
26 class ElasticFileSystem(QueryResourceManager):
27
28 class resource_type(TypeInfo):
29 service = 'efs'
30 enum_spec = ('describe_file_systems', 'FileSystems', None)
31 id = 'FileSystemId'
32 name = 'Name'
33 date = 'CreationTime'
34 dimension = 'FileSystemId'
35 arn_type = 'file-system'
36 permission_prefix = arn_service = 'elasticfilesystem'
37 filter_name = 'FileSystemId'
38 filter_type = 'scalar'
39 universal_taggable = True
40
41 augment = universal_augment
42
43
44 @resources.register('efs-mount-target')
45 class ElasticFileSystemMountTarget(ChildResourceManager):
46
47 class resource_type(TypeInfo):
48 service = 'efs'
49 parent_spec = ('efs', 'FileSystemId', None)
50 enum_spec = ('describe_mount_targets', 'MountTargets', None)
51 permission_prefix = 'elasticfilesystem'
52 name = id = 'MountTargetId'
53 filter_name = 'MountTargetId'
54 filter_type = 'scalar'
55 arn = False
56
57
58 @ElasticFileSystemMountTarget.filter_registry.register('subnet')
59 class Subnet(SubnetFilter):
60
61 RelatedIdsExpression = "SubnetId"
62
63
64 @ElasticFileSystemMountTarget.filter_registry.register('security-group')
65 class SecurityGroup(SecurityGroupFilter):
66
67 efs_group_cache = None
68
69 RelatedIdsExpression = ""
70
71 def get_related_ids(self, resources):
72
73 if self.efs_group_cache:
74 group_ids = set()
75 for r in resources:
76 group_ids.update(
77 self.efs_group_cache.get(r['MountTargetId'], ()))
78 return list(group_ids)
79
80 client = local_session(self.manager.session_factory).client('efs')
81 groups = {}
82 group_ids = set()
83 retry = get_retry(('Throttled',), 12)
84
85 for r in resources:
86 groups[r['MountTargetId']] = retry(
87 client.describe_mount_target_security_groups,
88 MountTargetId=r['MountTargetId'])['SecurityGroups']
89 group_ids.update(groups[r['MountTargetId']])
90
91 self.efs_group_cache = groups
92 return list(group_ids)
93
94
95 @ElasticFileSystem.filter_registry.register('kms-key')
96 class KmsFilter(KmsRelatedFilter):
97 """
98     Filter a resource by its associated kms key and optionally the alias name
99     of the kms key by using 'c7n:AliasName'
100
101 :example:
102
103 .. code-block:: yaml
104
105 policies:
106 - name: efs-kms-key-filters
107 resource: efs
108 filters:
109 - type: kms-key
110 key: c7n:AliasName
111 value: "^(alias/aws/)"
112 op: regex
113 """
114 RelatedIdsExpression = 'KmsKeyId'
115
116
117 @ElasticFileSystem.action_registry.register('delete')
118 class Delete(Action):
119
120 schema = type_schema('delete')
121 permissions = ('elasticfilesystem:DescribeMountTargets',
122 'elasticfilesystem:DeleteMountTarget',
123 'elasticfilesystem:DeleteFileSystem')
124
125 def process(self, resources):
126 client = local_session(self.manager.session_factory).client('efs')
127 self.unmount_filesystems(resources)
128 retry = get_retry(('FileSystemInUse',), 12)
129 for r in resources:
130 retry(client.delete_file_system, FileSystemId=r['FileSystemId'])
131
132 def unmount_filesystems(self, resources):
133 client = local_session(self.manager.session_factory).client('efs')
134 for r in resources:
135 if not r['NumberOfMountTargets']:
136 continue
137 for t in client.describe_mount_targets(
138 FileSystemId=r['FileSystemId'])['MountTargets']:
139 client.delete_mount_target(MountTargetId=t['MountTargetId'])
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/c7n/resources/efs.py b/c7n/resources/efs.py
--- a/c7n/resources/efs.py
+++ b/c7n/resources/efs.py
@@ -13,13 +13,15 @@
# limitations under the License.
from __future__ import absolute_import, division, print_function, unicode_literals
-from c7n.actions import Action
+from c7n.actions import Action, BaseAction
+from c7n.exceptions import PolicyValidationError
from c7n.filters.kms import KmsRelatedFilter
from c7n.manager import resources
from c7n.filters.vpc import SecurityGroupFilter, SubnetFilter
from c7n.query import QueryResourceManager, ChildResourceManager, TypeInfo
from c7n.tags import universal_augment
from c7n.utils import local_session, type_schema, get_retry
+from .aws import shape_validate
@resources.register('efs')
@@ -137,3 +139,57 @@
for t in client.describe_mount_targets(
FileSystemId=r['FileSystemId'])['MountTargets']:
client.delete_mount_target(MountTargetId=t['MountTargetId'])
+
+
[email protected]_registry.register('configure-lifecycle-policy')
+class ConfigureLifecycle(BaseAction):
+ """Enable/disable lifecycle policy for efs.
+
+ :example:
+
+ .. code-block:: yaml
+
+ policies:
+ - name: efs-apply-lifecycle
+ resource: efs
+ actions:
+ - type: configure-lifecycle-policy
+ state: enable
+ rules:
+ - 'TransitionToIA': 'AFTER_7_DAYS'
+
+ """
+ schema = type_schema(
+ 'configure-lifecycle-policy',
+ state={'enum': ['enable', 'disable']},
+ rules={
+ 'type': 'array',
+ 'items': {'type': 'object'}},
+ required=['state'])
+
+ permissions = ('elasticfilesystem:PutLifecycleConfiguration',)
+ shape = 'PutLifecycleConfigurationRequest'
+
+ def validate(self):
+ if self.data.get('state') == 'enable' and 'rules' not in self.data:
+ raise PolicyValidationError(
+ 'rules are required to enable lifecycle configuration %s' % (self.manager.data))
+ if self.data.get('state') == 'disable' and 'rules' in self.data:
+ raise PolicyValidationError(
+ 'rules not required to disable lifecycle configuration %s' % (self.manager.data))
+ if self.data.get('rules'):
+ attrs = {}
+ attrs['LifecyclePolicies'] = self.data['rules']
+ attrs['FileSystemId'] = 'PolicyValidator'
+ return shape_validate(attrs, self.shape, 'efs')
+
+ def process(self, resources):
+ client = local_session(self.manager.session_factory).client('efs')
+ op_map = {'enable': self.data.get('rules'), 'disable': []}
+ for r in resources:
+ try:
+ client.put_lifecycle_configuration(
+ FileSystemId=r['FileSystemId'],
+ LifecyclePolicies=op_map.get(self.data.get('state')))
+ except client.exceptions.FileSystemNotFound:
+ continue
| {"golden_diff": "diff --git a/c7n/resources/efs.py b/c7n/resources/efs.py\n--- a/c7n/resources/efs.py\n+++ b/c7n/resources/efs.py\n@@ -13,13 +13,15 @@\n # limitations under the License.\n from __future__ import absolute_import, division, print_function, unicode_literals\n \n-from c7n.actions import Action\n+from c7n.actions import Action, BaseAction\n+from c7n.exceptions import PolicyValidationError\n from c7n.filters.kms import KmsRelatedFilter\n from c7n.manager import resources\n from c7n.filters.vpc import SecurityGroupFilter, SubnetFilter\n from c7n.query import QueryResourceManager, ChildResourceManager, TypeInfo\n from c7n.tags import universal_augment\n from c7n.utils import local_session, type_schema, get_retry\n+from .aws import shape_validate\n \n \n @resources.register('efs')\n@@ -137,3 +139,57 @@\n for t in client.describe_mount_targets(\n FileSystemId=r['FileSystemId'])['MountTargets']:\n client.delete_mount_target(MountTargetId=t['MountTargetId'])\n+\n+\[email protected]_registry.register('configure-lifecycle-policy')\n+class ConfigureLifecycle(BaseAction):\n+ \"\"\"Enable/disable lifecycle policy for efs.\n+\n+ :example:\n+\n+ .. code-block:: yaml\n+\n+ policies:\n+ - name: efs-apply-lifecycle\n+ resource: efs\n+ actions:\n+ - type: configure-lifecycle-policy\n+ state: enable\n+ rules:\n+ - 'TransitionToIA': 'AFTER_7_DAYS'\n+\n+ \"\"\"\n+ schema = type_schema(\n+ 'configure-lifecycle-policy',\n+ state={'enum': ['enable', 'disable']},\n+ rules={\n+ 'type': 'array',\n+ 'items': {'type': 'object'}},\n+ required=['state'])\n+\n+ permissions = ('elasticfilesystem:PutLifecycleConfiguration',)\n+ shape = 'PutLifecycleConfigurationRequest'\n+\n+ def validate(self):\n+ if self.data.get('state') == 'enable' and 'rules' not in self.data:\n+ raise PolicyValidationError(\n+ 'rules are required to enable lifecycle configuration %s' % (self.manager.data))\n+ if self.data.get('state') == 'disable' and 'rules' in self.data:\n+ raise PolicyValidationError(\n+ 'rules not required to disable lifecycle configuration %s' % (self.manager.data))\n+ if self.data.get('rules'):\n+ attrs = {}\n+ attrs['LifecyclePolicies'] = self.data['rules']\n+ attrs['FileSystemId'] = 'PolicyValidator'\n+ return shape_validate(attrs, self.shape, 'efs')\n+\n+ def process(self, resources):\n+ client = local_session(self.manager.session_factory).client('efs')\n+ op_map = {'enable': self.data.get('rules'), 'disable': []}\n+ for r in resources:\n+ try:\n+ client.put_lifecycle_configuration(\n+ FileSystemId=r['FileSystemId'],\n+ LifecyclePolicies=op_map.get(self.data.get('state')))\n+ except client.exceptions.FileSystemNotFound:\n+ continue\n", "issue": "Support for EFS Lifecycles\nAccording to this, it's not an available action: https://cloudcustodian.io/docs/aws/resources/efs.html\r\n\r\nConsole and CLI instructions from AWS are here: https://docs.aws.amazon.com/efs/latest/ug/enable-lifecycle-management.html\n", "before_files": [{"content": "# Copyright 2015-2017 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations 
under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom c7n.actions import Action\nfrom c7n.filters.kms import KmsRelatedFilter\nfrom c7n.manager import resources\nfrom c7n.filters.vpc import SecurityGroupFilter, SubnetFilter\nfrom c7n.query import QueryResourceManager, ChildResourceManager, TypeInfo\nfrom c7n.tags import universal_augment\nfrom c7n.utils import local_session, type_schema, get_retry\n\n\[email protected]('efs')\nclass ElasticFileSystem(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = 'efs'\n enum_spec = ('describe_file_systems', 'FileSystems', None)\n id = 'FileSystemId'\n name = 'Name'\n date = 'CreationTime'\n dimension = 'FileSystemId'\n arn_type = 'file-system'\n permission_prefix = arn_service = 'elasticfilesystem'\n filter_name = 'FileSystemId'\n filter_type = 'scalar'\n universal_taggable = True\n\n augment = universal_augment\n\n\[email protected]('efs-mount-target')\nclass ElasticFileSystemMountTarget(ChildResourceManager):\n\n class resource_type(TypeInfo):\n service = 'efs'\n parent_spec = ('efs', 'FileSystemId', None)\n enum_spec = ('describe_mount_targets', 'MountTargets', None)\n permission_prefix = 'elasticfilesystem'\n name = id = 'MountTargetId'\n filter_name = 'MountTargetId'\n filter_type = 'scalar'\n arn = False\n\n\[email protected]_registry.register('subnet')\nclass Subnet(SubnetFilter):\n\n RelatedIdsExpression = \"SubnetId\"\n\n\[email protected]_registry.register('security-group')\nclass SecurityGroup(SecurityGroupFilter):\n\n efs_group_cache = None\n\n RelatedIdsExpression = \"\"\n\n def get_related_ids(self, resources):\n\n if self.efs_group_cache:\n group_ids = set()\n for r in resources:\n group_ids.update(\n self.efs_group_cache.get(r['MountTargetId'], ()))\n return list(group_ids)\n\n client = local_session(self.manager.session_factory).client('efs')\n groups = {}\n group_ids = set()\n retry = get_retry(('Throttled',), 12)\n\n for r in resources:\n groups[r['MountTargetId']] = retry(\n client.describe_mount_target_security_groups,\n MountTargetId=r['MountTargetId'])['SecurityGroups']\n group_ids.update(groups[r['MountTargetId']])\n\n self.efs_group_cache = groups\n return list(group_ids)\n\n\[email protected]_registry.register('kms-key')\nclass KmsFilter(KmsRelatedFilter):\n \"\"\"\n Filter a resource by its associcated kms key and optionally the aliasname\n of the kms key by using 'c7n:AliasName'\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: efs-kms-key-filters\n resource: efs\n filters:\n - type: kms-key\n key: c7n:AliasName\n value: \"^(alias/aws/)\"\n op: regex\n \"\"\"\n RelatedIdsExpression = 'KmsKeyId'\n\n\[email protected]_registry.register('delete')\nclass Delete(Action):\n\n schema = type_schema('delete')\n permissions = ('elasticfilesystem:DescribeMountTargets',\n 'elasticfilesystem:DeleteMountTarget',\n 'elasticfilesystem:DeleteFileSystem')\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('efs')\n self.unmount_filesystems(resources)\n retry = get_retry(('FileSystemInUse',), 12)\n for r in resources:\n retry(client.delete_file_system, FileSystemId=r['FileSystemId'])\n\n def unmount_filesystems(self, resources):\n client = local_session(self.manager.session_factory).client('efs')\n for r in resources:\n if not r['NumberOfMountTargets']:\n continue\n for t in client.describe_mount_targets(\n FileSystemId=r['FileSystemId'])['MountTargets']:\n client.delete_mount_target(MountTargetId=t['MountTargetId'])\n", "path": "c7n/resources/efs.py"}], "after_files": [{"content": "# Copyright 2015-2017 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom c7n.actions import Action, BaseAction\nfrom c7n.exceptions import PolicyValidationError\nfrom c7n.filters.kms import KmsRelatedFilter\nfrom c7n.manager import resources\nfrom c7n.filters.vpc import SecurityGroupFilter, SubnetFilter\nfrom c7n.query import QueryResourceManager, ChildResourceManager, TypeInfo\nfrom c7n.tags import universal_augment\nfrom c7n.utils import local_session, type_schema, get_retry\nfrom .aws import shape_validate\n\n\[email protected]('efs')\nclass ElasticFileSystem(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = 'efs'\n enum_spec = ('describe_file_systems', 'FileSystems', None)\n id = 'FileSystemId'\n name = 'Name'\n date = 'CreationTime'\n dimension = 'FileSystemId'\n arn_type = 'file-system'\n permission_prefix = arn_service = 'elasticfilesystem'\n filter_name = 'FileSystemId'\n filter_type = 'scalar'\n universal_taggable = True\n\n augment = universal_augment\n\n\[email protected]('efs-mount-target')\nclass ElasticFileSystemMountTarget(ChildResourceManager):\n\n class resource_type(TypeInfo):\n service = 'efs'\n parent_spec = ('efs', 'FileSystemId', None)\n enum_spec = ('describe_mount_targets', 'MountTargets', None)\n permission_prefix = 'elasticfilesystem'\n name = id = 'MountTargetId'\n filter_name = 'MountTargetId'\n filter_type = 'scalar'\n arn = False\n\n\[email protected]_registry.register('subnet')\nclass Subnet(SubnetFilter):\n\n RelatedIdsExpression = \"SubnetId\"\n\n\[email protected]_registry.register('security-group')\nclass SecurityGroup(SecurityGroupFilter):\n\n efs_group_cache = None\n\n RelatedIdsExpression = \"\"\n\n def get_related_ids(self, resources):\n\n if self.efs_group_cache:\n group_ids = set()\n for r in 
resources:\n group_ids.update(\n self.efs_group_cache.get(r['MountTargetId'], ()))\n return list(group_ids)\n\n client = local_session(self.manager.session_factory).client('efs')\n groups = {}\n group_ids = set()\n retry = get_retry(('Throttled',), 12)\n\n for r in resources:\n groups[r['MountTargetId']] = retry(\n client.describe_mount_target_security_groups,\n MountTargetId=r['MountTargetId'])['SecurityGroups']\n group_ids.update(groups[r['MountTargetId']])\n\n self.efs_group_cache = groups\n return list(group_ids)\n\n\[email protected]_registry.register('kms-key')\nclass KmsFilter(KmsRelatedFilter):\n \"\"\"\n Filter a resource by its associcated kms key and optionally the aliasname\n of the kms key by using 'c7n:AliasName'\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: efs-kms-key-filters\n resource: efs\n filters:\n - type: kms-key\n key: c7n:AliasName\n value: \"^(alias/aws/)\"\n op: regex\n \"\"\"\n RelatedIdsExpression = 'KmsKeyId'\n\n\[email protected]_registry.register('delete')\nclass Delete(Action):\n\n schema = type_schema('delete')\n permissions = ('elasticfilesystem:DescribeMountTargets',\n 'elasticfilesystem:DeleteMountTarget',\n 'elasticfilesystem:DeleteFileSystem')\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('efs')\n self.unmount_filesystems(resources)\n retry = get_retry(('FileSystemInUse',), 12)\n for r in resources:\n retry(client.delete_file_system, FileSystemId=r['FileSystemId'])\n\n def unmount_filesystems(self, resources):\n client = local_session(self.manager.session_factory).client('efs')\n for r in resources:\n if not r['NumberOfMountTargets']:\n continue\n for t in client.describe_mount_targets(\n FileSystemId=r['FileSystemId'])['MountTargets']:\n client.delete_mount_target(MountTargetId=t['MountTargetId'])\n\n\[email protected]_registry.register('configure-lifecycle-policy')\nclass ConfigureLifecycle(BaseAction):\n \"\"\"Enable/disable lifecycle policy for efs.\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: efs-apply-lifecycle\n resource: efs\n actions:\n - type: configure-lifecycle-policy\n state: enable\n rules:\n - 'TransitionToIA': 'AFTER_7_DAYS'\n\n \"\"\"\n schema = type_schema(\n 'configure-lifecycle-policy',\n state={'enum': ['enable', 'disable']},\n rules={\n 'type': 'array',\n 'items': {'type': 'object'}},\n required=['state'])\n\n permissions = ('elasticfilesystem:PutLifecycleConfiguration',)\n shape = 'PutLifecycleConfigurationRequest'\n\n def validate(self):\n if self.data.get('state') == 'enable' and 'rules' not in self.data:\n raise PolicyValidationError(\n 'rules are required to enable lifecycle configuration %s' % (self.manager.data))\n if self.data.get('state') == 'disable' and 'rules' in self.data:\n raise PolicyValidationError(\n 'rules not required to disable lifecycle configuration %s' % (self.manager.data))\n if self.data.get('rules'):\n attrs = {}\n attrs['LifecyclePolicies'] = self.data['rules']\n attrs['FileSystemId'] = 'PolicyValidator'\n return shape_validate(attrs, self.shape, 'efs')\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('efs')\n op_map = {'enable': self.data.get('rules'), 'disable': []}\n for r in resources:\n try:\n client.put_lifecycle_configuration(\n FileSystemId=r['FileSystemId'],\n LifecyclePolicies=op_map.get(self.data.get('state')))\n except client.exceptions.FileSystemNotFound:\n continue\n", "path": "c7n/resources/efs.py"}]} | 1,662 | 695 |
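The `validate()` contract in this patch is worth noting: `rules` must accompany `state: enable` and must be absent for `state: disable`. A standalone mirror of that check, using a plain exception instead of `PolicyValidationError`:

```python
def validate(state, rules=None):
    # Same invariants as ConfigureLifecycle.validate in the patch above.
    if state == "enable" and not rules:
        raise ValueError("rules are required to enable lifecycle configuration")
    if state == "disable" and rules:
        raise ValueError("rules not required to disable lifecycle configuration")
    return {"enable": rules, "disable": []}[state]

print(validate("enable", [{"TransitionToIA": "AFTER_7_DAYS"}]))
print(validate("disable"))    # -> [] (an empty policy list disables lifecycle)
```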
gh_patches_debug_28943 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1020 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Replace links in the challenge copying command
People might use full URL links in the page HTML; use a regex to replace those links using the new challenge short name.
--- END ISSUE ---
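Assuming a standard Django deployment of grand-challenge, the command in question is driven through the management interface, for example:

```python
from django.core.management import call_command

# Copies "old-name" into a new challenge "new-name"; once this issue is
# fixed, absolute links in the copied pages should point at "new-name".
call_command("copy_challenge", "old-name", "new-name")
```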
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/challenges/management/commands/copy_challenge.py`
Content:
```
1 from django.core.management import BaseCommand, CommandError
2
3 from grandchallenge.challenges.models import Challenge
4 from grandchallenge.pages.models import Page
5
6
7 class Command(BaseCommand):
8 help = "Creates a copy of a challenge"
9
10 challenge_fields = [
11 "creator",
12 "description",
13 "educational",
14 "disclaimer",
15 "require_participant_review",
16 "use_registration_page",
17 "registration_page_text",
18 "use_evaluation",
19 "logo",
20 "banner",
21 ]
22
23 challenge_m2m_fields = [
24 "task_types",
25 "modalities",
26 "structures",
27 ]
28
29 config_fields = [
30 "use_teams",
31 "score_title",
32 "score_jsonpath",
33 "score_error_jsonpath",
34 "score_default_sort",
35 "score_decimal_places",
36 "extra_results_columns",
37 "scoring_method_choice",
38 "result_display_choice",
39 "allow_submission_comments",
40 "display_submission_comments",
41 "supplementary_file_choice",
42 "supplementary_file_label",
43 "supplementary_file_help_text",
44 "show_supplementary_file_link",
45 "publication_url_choice",
46 "show_publication_url",
47 "daily_submission_limit",
48 "submission_page_html",
49 "auto_publish_new_results",
50 "display_all_metrics",
51 "submission_join_key",
52 ]
53
54 page_fields = [
55 "title",
56 "permission_lvl",
57 "order",
58 "display_title",
59 "hidden",
60 "html",
61 ]
62
63 def add_arguments(self, parser):
64 parser.add_argument("source", type=str)
65 parser.add_argument("dest", type=str)
66
67 def handle(self, *args, **options):
68 src_name = options.pop("source")
69 dest_name = options.pop("dest")
70
71 if src_name.lower() == dest_name.lower():
72 raise CommandError("Source and dest names must be different")
73
74 src_challenge = Challenge.objects.get(short_name__iexact=src_name)
75 dest_challenge = self._create_new_challenge(
76 src_challenge=src_challenge, dest_name=dest_name
77 )
78
79 self._copy_m2m_fields(
80 src_challenge=src_challenge, dest_challenge=dest_challenge
81 )
82 self._copy_evaluation_config(
83 src_challenge=src_challenge, dest_challenge=dest_challenge
84 )
85 self._copy_pages(
86 src_challenge=src_challenge, dest_challenge=dest_challenge
87 )
88 self._copy_admins(
89 src_challenge=src_challenge, dest_challenge=dest_challenge
90 )
91
92 def _create_new_challenge(self, *, src_challenge, dest_name):
93 new_challenge = Challenge(
94 short_name=dest_name,
95 **{f: getattr(src_challenge, f) for f in self.challenge_fields},
96 )
97 new_challenge.full_clean()
98 new_challenge.save()
99 return new_challenge
100
101 def _copy_m2m_fields(self, *, src_challenge, dest_challenge):
102 for f in self.challenge_m2m_fields:
103 src_m2m = getattr(src_challenge, f)
104 dest_m2m = getattr(dest_challenge, f)
105 dest_m2m.set(src_m2m.all())
106
107 def _copy_evaluation_config(self, *, src_challenge, dest_challenge):
108 src_config = src_challenge.evaluation_config
109 dest_config = dest_challenge.evaluation_config
110
111 for attr in self.config_fields:
112 setattr(dest_config, attr, getattr(src_config, attr))
113
114 dest_config.save()
115
116 def _copy_pages(self, *, src_challenge, dest_challenge):
117 src_pages = src_challenge.page_set.all()
118
119 for src_page in src_pages:
120 Page.objects.create(
121 challenge=dest_challenge,
122 **{f: getattr(src_page, f) for f in self.page_fields},
123 )
124
125 def _copy_admins(self, *, src_challenge, dest_challenge):
126 for u in src_challenge.get_admins():
127 dest_challenge.add_admin(u)
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/challenges/management/commands/copy_challenge.py b/app/grandchallenge/challenges/management/commands/copy_challenge.py
--- a/app/grandchallenge/challenges/management/commands/copy_challenge.py
+++ b/app/grandchallenge/challenges/management/commands/copy_challenge.py
@@ -1,3 +1,6 @@
+import re
+
+from django.contrib.sites.models import Site
from django.core.management import BaseCommand, CommandError
from grandchallenge.challenges.models import Challenge
@@ -57,7 +60,6 @@
"order",
"display_title",
"hidden",
- "html",
]
def add_arguments(self, parser):
@@ -113,12 +115,24 @@
dest_config.save()
+ def _substitute_urls(self, html, domain, old, new):
+ quote_replace = r"href='([^']*)'"
+ regex = fr'href="[^/]*//{old}.{domain}([^""]*)"'
+ html = re.sub(quote_replace, r'href="\1"', html)
+ return re.sub(regex, fr'href="https://{new}.{domain}\1"', html,)
+
def _copy_pages(self, *, src_challenge, dest_challenge):
src_pages = src_challenge.page_set.all()
+ site = Site.objects.get_current()
+ domain = site.domain
+ old = src_challenge.short_name
+ new = dest_challenge.short_name
+
for src_page in src_pages:
Page.objects.create(
challenge=dest_challenge,
+ html=self._substitute_urls(src_page.html, domain, old, new),
**{f: getattr(src_page, f) for f in self.page_fields},
)
| {"golden_diff": "diff --git a/app/grandchallenge/challenges/management/commands/copy_challenge.py b/app/grandchallenge/challenges/management/commands/copy_challenge.py\n--- a/app/grandchallenge/challenges/management/commands/copy_challenge.py\n+++ b/app/grandchallenge/challenges/management/commands/copy_challenge.py\n@@ -1,3 +1,6 @@\n+import re\n+\n+from django.contrib.sites.models import Site\n from django.core.management import BaseCommand, CommandError\n \n from grandchallenge.challenges.models import Challenge\n@@ -57,7 +60,6 @@\n \"order\",\n \"display_title\",\n \"hidden\",\n- \"html\",\n ]\n \n def add_arguments(self, parser):\n@@ -113,12 +115,24 @@\n \n dest_config.save()\n \n+ def _substitute_urls(self, html, domain, old, new):\n+ quote_replace = r\"href='([^']*)'\"\n+ regex = fr'href=\"[^/]*//{old}.{domain}([^\"\"]*)\"'\n+ html = re.sub(quote_replace, r'href=\"\\1\"', html)\n+ return re.sub(regex, fr'href=\"https://{new}.{domain}\\1\"', html,)\n+\n def _copy_pages(self, *, src_challenge, dest_challenge):\n src_pages = src_challenge.page_set.all()\n \n+ site = Site.objects.get_current()\n+ domain = site.domain\n+ old = src_challenge.short_name\n+ new = dest_challenge.short_name\n+\n for src_page in src_pages:\n Page.objects.create(\n challenge=dest_challenge,\n+ html=self._substitute_urls(src_page.html, domain, old, new),\n **{f: getattr(src_page, f) for f in self.page_fields},\n )\n", "issue": "Replace links in the challenge copying command\nPeople might use full url links in the page HTML, use a regex to replace the links using the new challenge short name.\n", "before_files": [{"content": "from django.core.management import BaseCommand, CommandError\n\nfrom grandchallenge.challenges.models import Challenge\nfrom grandchallenge.pages.models import Page\n\n\nclass Command(BaseCommand):\n help = \"Creates a copy of a challenge\"\n\n challenge_fields = [\n \"creator\",\n \"description\",\n \"educational\",\n \"disclaimer\",\n \"require_participant_review\",\n \"use_registration_page\",\n \"registration_page_text\",\n \"use_evaluation\",\n \"logo\",\n \"banner\",\n ]\n\n challenge_m2m_fields = [\n \"task_types\",\n \"modalities\",\n \"structures\",\n ]\n\n config_fields = [\n \"use_teams\",\n \"score_title\",\n \"score_jsonpath\",\n \"score_error_jsonpath\",\n \"score_default_sort\",\n \"score_decimal_places\",\n \"extra_results_columns\",\n \"scoring_method_choice\",\n \"result_display_choice\",\n \"allow_submission_comments\",\n \"display_submission_comments\",\n \"supplementary_file_choice\",\n \"supplementary_file_label\",\n \"supplementary_file_help_text\",\n \"show_supplementary_file_link\",\n \"publication_url_choice\",\n \"show_publication_url\",\n \"daily_submission_limit\",\n \"submission_page_html\",\n \"auto_publish_new_results\",\n \"display_all_metrics\",\n \"submission_join_key\",\n ]\n\n page_fields = [\n \"title\",\n \"permission_lvl\",\n \"order\",\n \"display_title\",\n \"hidden\",\n \"html\",\n ]\n\n def add_arguments(self, parser):\n parser.add_argument(\"source\", type=str)\n parser.add_argument(\"dest\", type=str)\n\n def handle(self, *args, **options):\n src_name = options.pop(\"source\")\n dest_name = options.pop(\"dest\")\n\n if src_name.lower() == dest_name.lower():\n raise CommandError(\"Source and dest names must be different\")\n\n src_challenge = Challenge.objects.get(short_name__iexact=src_name)\n dest_challenge = self._create_new_challenge(\n src_challenge=src_challenge, dest_name=dest_name\n )\n\n self._copy_m2m_fields(\n 
src_challenge=src_challenge, dest_challenge=dest_challenge\n )\n self._copy_evaluation_config(\n src_challenge=src_challenge, dest_challenge=dest_challenge\n )\n self._copy_pages(\n src_challenge=src_challenge, dest_challenge=dest_challenge\n )\n self._copy_admins(\n src_challenge=src_challenge, dest_challenge=dest_challenge\n )\n\n def _create_new_challenge(self, *, src_challenge, dest_name):\n new_challenge = Challenge(\n short_name=dest_name,\n **{f: getattr(src_challenge, f) for f in self.challenge_fields},\n )\n new_challenge.full_clean()\n new_challenge.save()\n return new_challenge\n\n def _copy_m2m_fields(self, *, src_challenge, dest_challenge):\n for f in self.challenge_m2m_fields:\n src_m2m = getattr(src_challenge, f)\n dest_m2m = getattr(dest_challenge, f)\n dest_m2m.set(src_m2m.all())\n\n def _copy_evaluation_config(self, *, src_challenge, dest_challenge):\n src_config = src_challenge.evaluation_config\n dest_config = dest_challenge.evaluation_config\n\n for attr in self.config_fields:\n setattr(dest_config, attr, getattr(src_config, attr))\n\n dest_config.save()\n\n def _copy_pages(self, *, src_challenge, dest_challenge):\n src_pages = src_challenge.page_set.all()\n\n for src_page in src_pages:\n Page.objects.create(\n challenge=dest_challenge,\n **{f: getattr(src_page, f) for f in self.page_fields},\n )\n\n def _copy_admins(self, *, src_challenge, dest_challenge):\n for u in src_challenge.get_admins():\n dest_challenge.add_admin(u)\n", "path": "app/grandchallenge/challenges/management/commands/copy_challenge.py"}], "after_files": [{"content": "import re\n\nfrom django.contrib.sites.models import Site\nfrom django.core.management import BaseCommand, CommandError\n\nfrom grandchallenge.challenges.models import Challenge\nfrom grandchallenge.pages.models import Page\n\n\nclass Command(BaseCommand):\n help = \"Creates a copy of a challenge\"\n\n challenge_fields = [\n \"creator\",\n \"description\",\n \"educational\",\n \"disclaimer\",\n \"require_participant_review\",\n \"use_registration_page\",\n \"registration_page_text\",\n \"use_evaluation\",\n \"logo\",\n \"banner\",\n ]\n\n challenge_m2m_fields = [\n \"task_types\",\n \"modalities\",\n \"structures\",\n ]\n\n config_fields = [\n \"use_teams\",\n \"score_title\",\n \"score_jsonpath\",\n \"score_error_jsonpath\",\n \"score_default_sort\",\n \"score_decimal_places\",\n \"extra_results_columns\",\n \"scoring_method_choice\",\n \"result_display_choice\",\n \"allow_submission_comments\",\n \"display_submission_comments\",\n \"supplementary_file_choice\",\n \"supplementary_file_label\",\n \"supplementary_file_help_text\",\n \"show_supplementary_file_link\",\n \"publication_url_choice\",\n \"show_publication_url\",\n \"daily_submission_limit\",\n \"submission_page_html\",\n \"auto_publish_new_results\",\n \"display_all_metrics\",\n \"submission_join_key\",\n ]\n\n page_fields = [\n \"title\",\n \"permission_lvl\",\n \"order\",\n \"display_title\",\n \"hidden\",\n ]\n\n def add_arguments(self, parser):\n parser.add_argument(\"source\", type=str)\n parser.add_argument(\"dest\", type=str)\n\n def handle(self, *args, **options):\n src_name = options.pop(\"source\")\n dest_name = options.pop(\"dest\")\n\n if src_name.lower() == dest_name.lower():\n raise CommandError(\"Source and dest names must be different\")\n\n src_challenge = Challenge.objects.get(short_name__iexact=src_name)\n dest_challenge = self._create_new_challenge(\n src_challenge=src_challenge, dest_name=dest_name\n )\n\n self._copy_m2m_fields(\n 
src_challenge=src_challenge, dest_challenge=dest_challenge\n )\n self._copy_evaluation_config(\n src_challenge=src_challenge, dest_challenge=dest_challenge\n )\n self._copy_pages(\n src_challenge=src_challenge, dest_challenge=dest_challenge\n )\n self._copy_admins(\n src_challenge=src_challenge, dest_challenge=dest_challenge\n )\n\n def _create_new_challenge(self, *, src_challenge, dest_name):\n new_challenge = Challenge(\n short_name=dest_name,\n **{f: getattr(src_challenge, f) for f in self.challenge_fields},\n )\n new_challenge.full_clean()\n new_challenge.save()\n return new_challenge\n\n def _copy_m2m_fields(self, *, src_challenge, dest_challenge):\n for f in self.challenge_m2m_fields:\n src_m2m = getattr(src_challenge, f)\n dest_m2m = getattr(dest_challenge, f)\n dest_m2m.set(src_m2m.all())\n\n def _copy_evaluation_config(self, *, src_challenge, dest_challenge):\n src_config = src_challenge.evaluation_config\n dest_config = dest_challenge.evaluation_config\n\n for attr in self.config_fields:\n setattr(dest_config, attr, getattr(src_config, attr))\n\n dest_config.save()\n\n def _substitute_urls(self, html, domain, old, new):\n quote_replace = r\"href='([^']*)'\"\n regex = fr'href=\"[^/]*//{old}.{domain}([^\"\"]*)\"'\n html = re.sub(quote_replace, r'href=\"\\1\"', html)\n return re.sub(regex, fr'href=\"https://{new}.{domain}\\1\"', html,)\n\n def _copy_pages(self, *, src_challenge, dest_challenge):\n src_pages = src_challenge.page_set.all()\n\n site = Site.objects.get_current()\n domain = site.domain\n old = src_challenge.short_name\n new = dest_challenge.short_name\n\n for src_page in src_pages:\n Page.objects.create(\n challenge=dest_challenge,\n html=self._substitute_urls(src_page.html, domain, old, new),\n **{f: getattr(src_page, f) for f in self.page_fields},\n )\n\n def _copy_admins(self, *, src_challenge, dest_challenge):\n for u in src_challenge.get_admins():\n dest_challenge.add_admin(u)\n", "path": "app/grandchallenge/challenges/management/commands/copy_challenge.py"}]} | 1,399 | 386 |
gh_patches_debug_58008 | rasdani/github-patches | git_diff | marshmallow-code__webargs-385 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix compatibility with Falcon 2.0
Tests are currently failing when Falcon 2.0.0 is installed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import sys
3 import re
4 from setuptools import setup, find_packages
5
6 INSTALL_REQUIRES = ["marshmallow>=2.15.2"]
7 if sys.version_info[0] < 3:
8 INSTALL_REQUIRES.append("simplejson>=2.1.0")
9
10 FRAMEWORKS = [
11 "Flask>=0.12.2",
12 "Django>=1.11.16",
13 "bottle>=0.12.13",
14 "tornado>=4.5.2",
15 "pyramid>=1.9.1",
16 "webapp2>=3.0.0b1",
17 "falcon>=1.4.0",
18 'aiohttp>=3.0.0; python_version >= "3.5"',
19 ]
20 EXTRAS_REQUIRE = {
21 "frameworks": FRAMEWORKS,
22 "tests": [
23 "pytest",
24 "mock",
25 "webtest==2.0.32",
26 'webtest-aiohttp==2.0.0; python_version >= "3.5"',
27 'pytest-aiohttp>=0.3.0; python_version >= "3.5"',
28 ]
29 + FRAMEWORKS,
30 "lint": [
31 'mypy==0.650; python_version >= "3.5"',
32 "flake8==3.6.0",
33 'flake8-bugbear==18.8.0; python_version >= "3.5"',
34 "pre-commit==1.13.0",
35 ],
36 }
37 EXTRAS_REQUIRE["dev"] = EXTRAS_REQUIRE["tests"] + EXTRAS_REQUIRE["lint"] + ["tox"]
38
39
40 def find_version(fname):
41 """Attempts to find the version number in the file names fname.
42 Raises RuntimeError if not found.
43 """
44 version = ""
45 with open(fname, "r") as fp:
46 reg = re.compile(r'__version__ = [\'"]([^\'"]*)[\'"]')
47 for line in fp:
48 m = reg.match(line)
49 if m:
50 version = m.group(1)
51 break
52 if not version:
53 raise RuntimeError("Cannot find version information")
54 return version
55
56
57 def read(fname):
58 with open(fname) as fp:
59 content = fp.read()
60 return content
61
62
63 setup(
64 name="webargs",
65 version=find_version("src/webargs/__init__.py"),
66 description=(
67 "Declarative parsing and validation of HTTP request objects, "
68 "with built-in support for popular web frameworks, including "
69 "Flask, Django, Bottle, Tornado, Pyramid, webapp2, Falcon, and aiohttp."
70 ),
71 long_description=read("README.rst"),
72 author="Steven Loria",
73 author_email="[email protected]",
74 url="https://github.com/marshmallow-code/webargs",
75 packages=find_packages("src"),
76 package_dir={"": "src"},
77 install_requires=INSTALL_REQUIRES,
78 extras_require=EXTRAS_REQUIRE,
79 license="MIT",
80 zip_safe=False,
81 keywords=(
82 "webargs",
83 "http",
84 "flask",
85 "django",
86 "bottle",
87 "tornado",
88 "aiohttp",
89 "webapp2",
90 "request",
91 "arguments",
92 "validation",
93 "parameters",
94 "rest",
95 "api",
96 "marshmallow",
97 ),
98 classifiers=[
99 "Development Status :: 5 - Production/Stable",
100 "Intended Audience :: Developers",
101 "License :: OSI Approved :: MIT License",
102 "Natural Language :: English",
103 "Programming Language :: Python :: 2",
104 "Programming Language :: Python :: 2.7",
105 "Programming Language :: Python :: 3",
106 "Programming Language :: Python :: 3.5",
107 "Programming Language :: Python :: 3.6",
108 "Programming Language :: Python :: 3.7",
109 "Topic :: Internet :: WWW/HTTP :: Dynamic Content",
110 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
111 ],
112 test_suite="tests",
113 project_urls={
114 "Changelog": "https://webargs.readthedocs.io/en/latest/changelog.html",
115 "Issues": "https://github.com/marshmallow-code/webargs/issues",
116 "Funding": "https://opencollective.com/marshmallow",
117 "Tidelift": "https://tidelift.com/subscription/pkg/pypi-webargs?utm_source=pypi-marshmallow&utm_medium=pypi", # noqa
118 },
119 )
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -14,7 +14,7 @@
"tornado>=4.5.2",
"pyramid>=1.9.1",
"webapp2>=3.0.0b1",
- "falcon>=1.4.0",
+ "falcon>=1.4.0,<2.0",
'aiohttp>=3.0.0; python_version >= "3.5"',
]
EXTRAS_REQUIRE = {
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -14,7 +14,7 @@\n \"tornado>=4.5.2\",\n \"pyramid>=1.9.1\",\n \"webapp2>=3.0.0b1\",\n- \"falcon>=1.4.0\",\n+ \"falcon>=1.4.0,<2.0\",\n 'aiohttp>=3.0.0; python_version >= \"3.5\"',\n ]\n EXTRAS_REQUIRE = {\n", "issue": "Fix compatibility with Falcon 2.0\nTests are currently failing when Falcon 2.0.0 is installed.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport sys\nimport re\nfrom setuptools import setup, find_packages\n\nINSTALL_REQUIRES = [\"marshmallow>=2.15.2\"]\nif sys.version_info[0] < 3:\n INSTALL_REQUIRES.append(\"simplejson>=2.1.0\")\n\nFRAMEWORKS = [\n \"Flask>=0.12.2\",\n \"Django>=1.11.16\",\n \"bottle>=0.12.13\",\n \"tornado>=4.5.2\",\n \"pyramid>=1.9.1\",\n \"webapp2>=3.0.0b1\",\n \"falcon>=1.4.0\",\n 'aiohttp>=3.0.0; python_version >= \"3.5\"',\n]\nEXTRAS_REQUIRE = {\n \"frameworks\": FRAMEWORKS,\n \"tests\": [\n \"pytest\",\n \"mock\",\n \"webtest==2.0.32\",\n 'webtest-aiohttp==2.0.0; python_version >= \"3.5\"',\n 'pytest-aiohttp>=0.3.0; python_version >= \"3.5\"',\n ]\n + FRAMEWORKS,\n \"lint\": [\n 'mypy==0.650; python_version >= \"3.5\"',\n \"flake8==3.6.0\",\n 'flake8-bugbear==18.8.0; python_version >= \"3.5\"',\n \"pre-commit==1.13.0\",\n ],\n}\nEXTRAS_REQUIRE[\"dev\"] = EXTRAS_REQUIRE[\"tests\"] + EXTRAS_REQUIRE[\"lint\"] + [\"tox\"]\n\n\ndef find_version(fname):\n \"\"\"Attempts to find the version number in the file names fname.\n Raises RuntimeError if not found.\n \"\"\"\n version = \"\"\n with open(fname, \"r\") as fp:\n reg = re.compile(r'__version__ = [\\'\"]([^\\'\"]*)[\\'\"]')\n for line in fp:\n m = reg.match(line)\n if m:\n version = m.group(1)\n break\n if not version:\n raise RuntimeError(\"Cannot find version information\")\n return version\n\n\ndef read(fname):\n with open(fname) as fp:\n content = fp.read()\n return content\n\n\nsetup(\n name=\"webargs\",\n version=find_version(\"src/webargs/__init__.py\"),\n description=(\n \"Declarative parsing and validation of HTTP request objects, \"\n \"with built-in support for popular web frameworks, including \"\n \"Flask, Django, Bottle, Tornado, Pyramid, webapp2, Falcon, and aiohttp.\"\n ),\n long_description=read(\"README.rst\"),\n author=\"Steven Loria\",\n author_email=\"[email protected]\",\n url=\"https://github.com/marshmallow-code/webargs\",\n packages=find_packages(\"src\"),\n package_dir={\"\": \"src\"},\n install_requires=INSTALL_REQUIRES,\n extras_require=EXTRAS_REQUIRE,\n license=\"MIT\",\n zip_safe=False,\n keywords=(\n \"webargs\",\n \"http\",\n \"flask\",\n \"django\",\n \"bottle\",\n \"tornado\",\n \"aiohttp\",\n \"webapp2\",\n \"request\",\n \"arguments\",\n \"validation\",\n \"parameters\",\n \"rest\",\n \"api\",\n \"marshmallow\",\n ),\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Internet :: WWW/HTTP :: Dynamic Content\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n ],\n test_suite=\"tests\",\n project_urls={\n \"Changelog\": \"https://webargs.readthedocs.io/en/latest/changelog.html\",\n \"Issues\": \"https://github.com/marshmallow-code/webargs/issues\",\n 
\"Funding\": \"https://opencollective.com/marshmallow\",\n \"Tidelift\": \"https://tidelift.com/subscription/pkg/pypi-webargs?utm_source=pypi-marshmallow&utm_medium=pypi\", # noqa\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport sys\nimport re\nfrom setuptools import setup, find_packages\n\nINSTALL_REQUIRES = [\"marshmallow>=2.15.2\"]\nif sys.version_info[0] < 3:\n INSTALL_REQUIRES.append(\"simplejson>=2.1.0\")\n\nFRAMEWORKS = [\n \"Flask>=0.12.2\",\n \"Django>=1.11.16\",\n \"bottle>=0.12.13\",\n \"tornado>=4.5.2\",\n \"pyramid>=1.9.1\",\n \"webapp2>=3.0.0b1\",\n \"falcon>=1.4.0,<2.0\",\n 'aiohttp>=3.0.0; python_version >= \"3.5\"',\n]\nEXTRAS_REQUIRE = {\n \"frameworks\": FRAMEWORKS,\n \"tests\": [\n \"pytest\",\n \"mock\",\n \"webtest==2.0.32\",\n 'webtest-aiohttp==2.0.0; python_version >= \"3.5\"',\n 'pytest-aiohttp>=0.3.0; python_version >= \"3.5\"',\n ]\n + FRAMEWORKS,\n \"lint\": [\n 'mypy==0.650; python_version >= \"3.5\"',\n \"flake8==3.6.0\",\n 'flake8-bugbear==18.8.0; python_version >= \"3.5\"',\n \"pre-commit==1.13.0\",\n ],\n}\nEXTRAS_REQUIRE[\"dev\"] = EXTRAS_REQUIRE[\"tests\"] + EXTRAS_REQUIRE[\"lint\"] + [\"tox\"]\n\n\ndef find_version(fname):\n \"\"\"Attempts to find the version number in the file names fname.\n Raises RuntimeError if not found.\n \"\"\"\n version = \"\"\n with open(fname, \"r\") as fp:\n reg = re.compile(r'__version__ = [\\'\"]([^\\'\"]*)[\\'\"]')\n for line in fp:\n m = reg.match(line)\n if m:\n version = m.group(1)\n break\n if not version:\n raise RuntimeError(\"Cannot find version information\")\n return version\n\n\ndef read(fname):\n with open(fname) as fp:\n content = fp.read()\n return content\n\n\nsetup(\n name=\"webargs\",\n version=find_version(\"src/webargs/__init__.py\"),\n description=(\n \"Declarative parsing and validation of HTTP request objects, \"\n \"with built-in support for popular web frameworks, including \"\n \"Flask, Django, Bottle, Tornado, Pyramid, webapp2, Falcon, and aiohttp.\"\n ),\n long_description=read(\"README.rst\"),\n author=\"Steven Loria\",\n author_email=\"[email protected]\",\n url=\"https://github.com/marshmallow-code/webargs\",\n packages=find_packages(\"src\"),\n package_dir={\"\": \"src\"},\n install_requires=INSTALL_REQUIRES,\n extras_require=EXTRAS_REQUIRE,\n license=\"MIT\",\n zip_safe=False,\n keywords=(\n \"webargs\",\n \"http\",\n \"flask\",\n \"django\",\n \"bottle\",\n \"tornado\",\n \"aiohttp\",\n \"webapp2\",\n \"request\",\n \"arguments\",\n \"validation\",\n \"parameters\",\n \"rest\",\n \"api\",\n \"marshmallow\",\n ),\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Internet :: WWW/HTTP :: Dynamic Content\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n ],\n test_suite=\"tests\",\n project_urls={\n \"Changelog\": \"https://webargs.readthedocs.io/en/latest/changelog.html\",\n \"Issues\": \"https://github.com/marshmallow-code/webargs/issues\",\n \"Funding\": \"https://opencollective.com/marshmallow\",\n \"Tidelift\": \"https://tidelift.com/subscription/pkg/pypi-webargs?utm_source=pypi-marshmallow&utm_medium=pypi\", # noqa\n 
},\n)\n", "path": "setup.py"}]} | 1,518 | 123 |
gh_patches_debug_19737 | rasdani/github-patches | git_diff | biopython__biopython-3247 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Should we remove/replace Bio.Alphabet?
Filing this meta-issue, since we don't seem to have a single place discussing this on GitHub.
**My (likely biased) summary from those discussions is no one likes the current alphabet system, and most people ignore it.**
Biopython has a complicated hard to use legacy alphabet system in ``Bio.Alphabet`` which is used as a typing system (e.g. can't do reverse-complement on a protein), can store some useful information like the expected letters, if gapped and the gap character (although only one), and a stop codon symbol (although only one).
The objects in ``Bio.Alphabet`` cover three-letter alphabets as well as the more commonly used one-letter alphabets, although the ``Seq`` object effectively assumes the later only. Three-letter alphabets can be used with the array-based ``MutableSeq`` object, but it is very fragile and many things break - thus #1681.
Note we do not (currently) validate the expected letters when making a sequence object with an alphabet with an explicit set of expected letters - #1040.
Discussion on #1674 (hiding alphabets) has meant we now hide the alphabet in the ``Seq`` object representation if it is the default alphabet. Discussion there, and on #1681, and on the mailing list suggests going further and *removing* ``Bio.Alphabet`` entirely.
Note that removal is not as simple as it might sound - it will require some delicate modifications, for example several of the parsers in ``Bio.SeqIO`` use the alphabet to store the sequence type (important metadata in more than one file format).
I personally do like the typing system aspect of the alphabet system, but think we only need a much simpler DNA/RNA/nucleotide/protein/other system (more like an enum). I started looking at that on this branch which aimed to break as little existing code as possible: https://github.com/peterjc/biopython/tree/alpha_lite
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Bio/Alphabet/__init__.py`
Content:
```
1 # Copyright 2000-2002 by Andrew Dalke.
2 # Revisions copyright 2007-2010 by Peter Cock.
3 # All rights reserved.
4 #
5 # This file is part of the Biopython distribution and governed by your
6 # choice of the "Biopython License Agreement" or the "BSD 3-Clause License".
7 # Please see the LICENSE file that should have been included as part of this
8 # package.
9 """Alphabets were previously used to declare sequence type and letters (OBSOLETE).
10
11 The design of Bio.Aphabet included a number of historic design choices
12 which, with the benefit of hindsight, were regretable. Bio.Alphabet was
13 therefore removed from Biopython in release 1.78. Instead, the molecule type is
14 included as an annotation on SeqRecords where appropriate.
15
16 Please see
17 https://github.com/biopython/biopython/issues/3156
18 for examples showing how to transition from Bio.Alphabet to molecule type
19 annotations.
20 """
21
22 raise ImportError(
23 "Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://github.com/biopython/biopython/issues/3156 for more information."
24 )
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Bio/Alphabet/__init__.py b/Bio/Alphabet/__init__.py
--- a/Bio/Alphabet/__init__.py
+++ b/Bio/Alphabet/__init__.py
@@ -13,12 +13,10 @@
therefore removed from Biopython in release 1.78. Instead, the molecule type is
included as an annotation on SeqRecords where appropriate.
-Please see
-https://github.com/biopython/biopython/issues/3156
-for examples showing how to transition from Bio.Alphabet to molecule type
-annotations.
+Please see https://biopython.org/wiki/Alphabet for examples showing how to
+transition from Bio.Alphabet to molecule type annotations.
"""
raise ImportError(
- "Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://github.com/biopython/biopython/issues/3156 for more information."
+ "Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://biopython.org/wiki/Alphabet for more information."
)
| {"golden_diff": "diff --git a/Bio/Alphabet/__init__.py b/Bio/Alphabet/__init__.py\n--- a/Bio/Alphabet/__init__.py\n+++ b/Bio/Alphabet/__init__.py\n@@ -13,12 +13,10 @@\n therefore removed from Biopython in release 1.78. Instead, the molecule type is\n included as an annotation on SeqRecords where appropriate.\n \n-Please see\n-https://github.com/biopython/biopython/issues/3156\n-for examples showing how to transition from Bio.Alphabet to molecule type\n-annotations.\n+Please see https://biopython.org/wiki/Alphabet for examples showing how to\n+transition from Bio.Alphabet to molecule type annotations.\n \"\"\"\n \n raise ImportError(\n- \"Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://github.com/biopython/biopython/issues/3156 for more information.\"\n+ \"Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://biopython.org/wiki/Alphabet for more information.\"\n )\n", "issue": "Should we remove/replace Bio.Alphabet?\nFiling this meta-issue, since we don't seem to have a single place discussing this on GitHub.\r\n\r\n**My (likely biased) summary from those discussions is no one likes the current alphabet system, and most people ignore it.**\r\n\r\nBiopython has a complicated hard to use legacy alphabet system in ``Bio.Alphabet`` which is used as a typing system (e.g. can't do reverse-complement on a protein), can store some useful information like the expected letters, if gapped and the gap character (although only one), and a stop codon symbol (although only one).\r\n\r\nThe objects in ``Bio.Alphabet`` cover three-letter alphabets as well as the more commonly used one-letter alphabets, although the ``Seq`` object effectively assumes the later only. Three-letter alphabets can be used with the array-based ``MutableSeq`` object, but it is very fragile and many things break - thus #1681.\r\n\r\nNote we do not (currently) validate the expected letters when making a sequence object with an alphabet with an explicit set of expected letters - #1040.\r\n\r\nDiscussion on #1674 (hiding alphabets) has meant we now hide the alphabet in the ``Seq`` object representation if it is the default alphabet. Discussion there, and on #1681, and on the mailing list suggests going further and *removing* ``Bio.Alphabet`` entirely.\r\n\r\nNote that removal is not as simple as it might sound - it will require some delicate modifications, for example several of the parsers in ``Bio.SeqIO`` use the alphabet to store the sequence type (important metadata in more than one file format).\r\n\r\nI personally do like the typing system aspect of the alphabet system, but think we only need a much simpler DNA/RNA/nucleotide/protein/other system (more like an enum). 
I started looking at that on this branch which aimed to break as little existing code as possible: https://github.com/peterjc/biopython/tree/alpha_lite\r\n\n", "before_files": [{"content": "# Copyright 2000-2002 by Andrew Dalke.\n# Revisions copyright 2007-2010 by Peter Cock.\n# All rights reserved.\n#\n# This file is part of the Biopython distribution and governed by your\n# choice of the \"Biopython License Agreement\" or the \"BSD 3-Clause License\".\n# Please see the LICENSE file that should have been included as part of this\n# package.\n\"\"\"Alphabets were previously used to declare sequence type and letters (OBSOLETE).\n\nThe design of Bio.Aphabet included a number of historic design choices\nwhich, with the benefit of hindsight, were regretable. Bio.Alphabet was\ntherefore removed from Biopython in release 1.78. Instead, the molecule type is\nincluded as an annotation on SeqRecords where appropriate.\n\nPlease see\nhttps://github.com/biopython/biopython/issues/3156\nfor examples showing how to transition from Bio.Alphabet to molecule type\nannotations.\n\"\"\"\n\nraise ImportError(\n \"Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://github.com/biopython/biopython/issues/3156 for more information.\"\n)\n", "path": "Bio/Alphabet/__init__.py"}], "after_files": [{"content": "# Copyright 2000-2002 by Andrew Dalke.\n# Revisions copyright 2007-2010 by Peter Cock.\n# All rights reserved.\n#\n# This file is part of the Biopython distribution and governed by your\n# choice of the \"Biopython License Agreement\" or the \"BSD 3-Clause License\".\n# Please see the LICENSE file that should have been included as part of this\n# package.\n\"\"\"Alphabets were previously used to declare sequence type and letters (OBSOLETE).\n\nThe design of Bio.Aphabet included a number of historic design choices\nwhich, with the benefit of hindsight, were regretable. Bio.Alphabet was\ntherefore removed from Biopython in release 1.78. Instead, the molecule type is\nincluded as an annotation on SeqRecords where appropriate.\n\nPlease see https://biopython.org/wiki/Alphabet for examples showing how to\ntransition from Bio.Alphabet to molecule type annotations.\n\"\"\"\n\nraise ImportError(\n \"Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://biopython.org/wiki/Alphabet for more information.\"\n)\n", "path": "Bio/Alphabet/__init__.py"}]} | 1,039 | 329 |
gh_patches_debug_33949 | rasdani/github-patches | git_diff | xonsh__xonsh-522 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Feature-request: Static configuration file as a command line option
I suggest that an option to the commandline --config_file is added that allows the user to specify a file location for a static configuration file. (http://xonsh.org/xonshconfig.html)
This would allow portable usage of xonsh, e.g. install python and xonsh on a usb and bring the shell on a stick.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xonsh/main.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """The main xonsh script."""
3 import os
4 import sys
5 import shlex
6 import signal
7 import builtins
8 import subprocess
9 from argparse import ArgumentParser, Namespace
10 from contextlib import contextmanager
11
12 from xonsh import __version__
13 from xonsh.shell import Shell
14 from xonsh.pretty import pprint
15 from xonsh.jobs import ignore_sigtstp
16
17 parser = ArgumentParser(description='xonsh')
18 parser.add_argument('-V', '--version',
19 action='version',
20 version='/'.join(('xonsh', __version__)),
21 help='show version information and exit')
22 parser.add_argument('-c',
23 help="Run a single command and exit",
24 dest='command',
25 required=False,
26 default=None)
27 parser.add_argument('-i',
28 help='force running in interactive mode',
29 dest='force_interactive',
30 action='store_true',
31 default=False)
32 parser.add_argument('-l',
33 help='run as a login shell',
34 dest='login',
35 action='store_true',
36 default=False)
37 parser.add_argument('--no-rc',
38 help="Do not load the .xonshrc file",
39 dest='norc',
40 action='store_true',
41 default=False)
42 parser.add_argument('-D',
43 dest='defines',
44 help='define an environment variable, in the form of '
45 '-DNAME=VAL. May be used many times.',
46 metavar='ITEM',
47 nargs='*',
48 default=None)
49 parser.add_argument('--shell-type',
50 help='What kind of shell should be used. '
51 'Possible options: readline, prompt_toolkit. '
52 'Warning! If set this overrides $SHELL_TYPE variable.',
53 dest='shell_type',
54 choices=('readline', 'prompt_toolkit'),
55 default=None)
56 parser.add_argument('file',
57 metavar='script-file',
58 help='If present, execute the script in script-file'
59 ' and exit',
60 nargs='?',
61 default=None)
62 parser.add_argument('args',
63 metavar='args',
64 help='Additional arguments to the script specified'
65 ' by script-file',
66 nargs='*',
67 default=[])
68
69
70 def _pprint_displayhook(value):
71 if value is not None:
72 builtins._ = value
73 pprint(value)
74
75
76 def premain(argv=None):
77 """Setup for main xonsh entry point, returns parsed arguments."""
78 args = parser.parse_args(argv)
79 shell_kwargs = {'shell_type': args.shell_type}
80 if args.norc:
81 shell_kwargs['ctx'] = {}
82 setattr(sys, 'displayhook', _pprint_displayhook)
83 shell = builtins.__xonsh_shell__ = Shell(**shell_kwargs)
84 from xonsh import imphooks
85 env = builtins.__xonsh_env__
86 if args.defines is not None:
87 env.update([x.split('=', 1) for x in args.defines])
88 if args.login:
89 env['XONSH_LOGIN'] = True
90 env['XONSH_INTERACTIVE'] = False
91 return args
92
93
94 def main(argv=None):
95 """Main entry point for xonsh cli."""
96 args = premain(argv)
97 env = builtins.__xonsh_env__
98 shell = builtins.__xonsh_shell__
99 if args.command is not None:
100 # run a single command and exit
101 shell.default(args.command)
102 elif args.file is not None:
103 # run a script contained in a file
104 if os.path.isfile(args.file):
105 with open(args.file) as f:
106 code = f.read()
107 code = code if code.endswith('\n') else code + '\n'
108 env['ARGS'] = [args.file] + args.args
109 code = shell.execer.compile(code, mode='exec', glbs=shell.ctx)
110 shell.execer.exec(code, mode='exec', glbs=shell.ctx)
111 else:
112 print('xonsh: {0}: No such file or directory.'.format(args.file))
113 elif not sys.stdin.isatty() and not args.force_interactive:
114 # run a script given on stdin
115 code = sys.stdin.read()
116 code = code if code.endswith('\n') else code + '\n'
117 code = shell.execer.compile(code, mode='exec', glbs=shell.ctx)
118 shell.execer.exec(code, mode='exec', glbs=shell.ctx)
119 else:
120 # otherwise, enter the shell
121 env['XONSH_INTERACTIVE'] = True
122 ignore_sigtstp()
123 shell.cmdloop()
124 postmain(args)
125
126
127 def postmain(args=None):
128 """Teardown for main xonsh entry point, accepts parsed arguments."""
129 del builtins.__xonsh_shell__
130
131
132 @contextmanager
133 def main_context(argv=None):
134 """Generator that runs pre- and post-main() functions. This has two iterations.
135 The first yields the shell. The second returns None but cleans
136 up the shell.
137 """
138 args = premain(argv)
139 yield builtins.__xonsh_shell__
140 postmain(args)
141
142
143
144 if __name__ == '__main__':
145 main()
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/xonsh/main.py b/xonsh/main.py
--- a/xonsh/main.py
+++ b/xonsh/main.py
@@ -2,11 +2,8 @@
"""The main xonsh script."""
import os
import sys
-import shlex
-import signal
import builtins
-import subprocess
-from argparse import ArgumentParser, Namespace
+from argparse import ArgumentParser, ArgumentTypeError
from contextlib import contextmanager
from xonsh import __version__
@@ -14,6 +11,18 @@
from xonsh.pretty import pprint
from xonsh.jobs import ignore_sigtstp
+def path_argument(s):
+ """Return a path only if the path is actually legal
+
+ This is very similar to argparse.FileType, except that it doesn't return
+ an open file handle, but rather simply validates the path."""
+
+ s = os.path.abspath(os.path.expanduser(s))
+ if not os.path.isfile(s):
+ raise ArgumentTypeError('"%s" must be a valid path to a file' % s)
+ return s
+
+
parser = ArgumentParser(description='xonsh')
parser.add_argument('-V', '--version',
action='version',
@@ -34,6 +43,10 @@
dest='login',
action='store_true',
default=False)
+parser.add_argument('--config-path',
+ help='specify a custom static configuration file',
+ dest='config_path',
+ type=path_argument)
parser.add_argument('--no-rc',
help="Do not load the .xonshrc file",
dest='norc',
@@ -79,6 +92,8 @@
shell_kwargs = {'shell_type': args.shell_type}
if args.norc:
shell_kwargs['ctx'] = {}
+ if args.config_path:
+ shell_kwargs['ctx']= {'XONSHCONFIG': args.config_path}
setattr(sys, 'displayhook', _pprint_displayhook)
shell = builtins.__xonsh_shell__ = Shell(**shell_kwargs)
from xonsh import imphooks
| {"golden_diff": "diff --git a/xonsh/main.py b/xonsh/main.py\n--- a/xonsh/main.py\n+++ b/xonsh/main.py\n@@ -2,11 +2,8 @@\n \"\"\"The main xonsh script.\"\"\"\n import os\n import sys\n-import shlex\n-import signal\n import builtins\n-import subprocess\n-from argparse import ArgumentParser, Namespace\n+from argparse import ArgumentParser, ArgumentTypeError\n from contextlib import contextmanager\n \n from xonsh import __version__\n@@ -14,6 +11,18 @@\n from xonsh.pretty import pprint\n from xonsh.jobs import ignore_sigtstp\n \n+def path_argument(s):\n+ \"\"\"Return a path only if the path is actually legal\n+\n+ This is very similar to argparse.FileType, except that it doesn't return\n+ an open file handle, but rather simply validates the path.\"\"\"\n+\n+ s = os.path.abspath(os.path.expanduser(s))\n+ if not os.path.isfile(s):\n+ raise ArgumentTypeError('\"%s\" must be a valid path to a file' % s)\n+ return s\n+\n+\n parser = ArgumentParser(description='xonsh')\n parser.add_argument('-V', '--version',\n action='version',\n@@ -34,6 +43,10 @@\n dest='login',\n action='store_true',\n default=False)\n+parser.add_argument('--config-path',\n+ help='specify a custom static configuration file',\n+ dest='config_path',\n+ type=path_argument)\n parser.add_argument('--no-rc',\n help=\"Do not load the .xonshrc file\",\n dest='norc',\n@@ -79,6 +92,8 @@\n shell_kwargs = {'shell_type': args.shell_type}\n if args.norc:\n shell_kwargs['ctx'] = {}\n+ if args.config_path:\n+ shell_kwargs['ctx']= {'XONSHCONFIG': args.config_path}\n setattr(sys, 'displayhook', _pprint_displayhook)\n shell = builtins.__xonsh_shell__ = Shell(**shell_kwargs)\n from xonsh import imphooks\n", "issue": "Feature-request: Static configuration file as a command line option\nI suggest that an option to the commandline --config_file is added that allows the user to specify a file location for a static configuration file. (http://xonsh.org/xonshconfig.html)\n\nThis would allow portable usage of xonsh, e.g. install python and xonsh on a usb and bring the shell on a stick. \n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"The main xonsh script.\"\"\"\nimport os\nimport sys\nimport shlex\nimport signal\nimport builtins\nimport subprocess\nfrom argparse import ArgumentParser, Namespace\nfrom contextlib import contextmanager\n\nfrom xonsh import __version__\nfrom xonsh.shell import Shell\nfrom xonsh.pretty import pprint\nfrom xonsh.jobs import ignore_sigtstp\n\nparser = ArgumentParser(description='xonsh')\nparser.add_argument('-V', '--version',\n action='version',\n version='/'.join(('xonsh', __version__)),\n help='show version information and exit')\nparser.add_argument('-c',\n help=\"Run a single command and exit\",\n dest='command',\n required=False,\n default=None)\nparser.add_argument('-i',\n help='force running in interactive mode',\n dest='force_interactive',\n action='store_true',\n default=False)\nparser.add_argument('-l',\n help='run as a login shell',\n dest='login',\n action='store_true',\n default=False)\nparser.add_argument('--no-rc',\n help=\"Do not load the .xonshrc file\",\n dest='norc',\n action='store_true',\n default=False)\nparser.add_argument('-D',\n dest='defines',\n help='define an environment variable, in the form of '\n '-DNAME=VAL. May be used many times.',\n metavar='ITEM',\n nargs='*',\n default=None)\nparser.add_argument('--shell-type',\n help='What kind of shell should be used. '\n 'Possible options: readline, prompt_toolkit. '\n 'Warning! 
If set this overrides $SHELL_TYPE variable.',\n dest='shell_type',\n choices=('readline', 'prompt_toolkit'),\n default=None)\nparser.add_argument('file',\n metavar='script-file',\n help='If present, execute the script in script-file'\n ' and exit',\n nargs='?',\n default=None)\nparser.add_argument('args',\n metavar='args',\n help='Additional arguments to the script specified'\n ' by script-file',\n nargs='*',\n default=[])\n\n\ndef _pprint_displayhook(value):\n if value is not None:\n builtins._ = value\n pprint(value)\n\n\ndef premain(argv=None):\n \"\"\"Setup for main xonsh entry point, returns parsed arguments.\"\"\"\n args = parser.parse_args(argv)\n shell_kwargs = {'shell_type': args.shell_type}\n if args.norc:\n shell_kwargs['ctx'] = {}\n setattr(sys, 'displayhook', _pprint_displayhook)\n shell = builtins.__xonsh_shell__ = Shell(**shell_kwargs)\n from xonsh import imphooks\n env = builtins.__xonsh_env__\n if args.defines is not None:\n env.update([x.split('=', 1) for x in args.defines])\n if args.login:\n env['XONSH_LOGIN'] = True\n env['XONSH_INTERACTIVE'] = False\n return args\n\n\ndef main(argv=None):\n \"\"\"Main entry point for xonsh cli.\"\"\"\n args = premain(argv)\n env = builtins.__xonsh_env__\n shell = builtins.__xonsh_shell__\n if args.command is not None:\n # run a single command and exit\n shell.default(args.command)\n elif args.file is not None:\n # run a script contained in a file\n if os.path.isfile(args.file):\n with open(args.file) as f:\n code = f.read()\n code = code if code.endswith('\\n') else code + '\\n'\n env['ARGS'] = [args.file] + args.args\n code = shell.execer.compile(code, mode='exec', glbs=shell.ctx)\n shell.execer.exec(code, mode='exec', glbs=shell.ctx)\n else:\n print('xonsh: {0}: No such file or directory.'.format(args.file))\n elif not sys.stdin.isatty() and not args.force_interactive:\n # run a script given on stdin\n code = sys.stdin.read()\n code = code if code.endswith('\\n') else code + '\\n'\n code = shell.execer.compile(code, mode='exec', glbs=shell.ctx)\n shell.execer.exec(code, mode='exec', glbs=shell.ctx)\n else:\n # otherwise, enter the shell\n env['XONSH_INTERACTIVE'] = True\n ignore_sigtstp()\n shell.cmdloop()\n postmain(args)\n\n\ndef postmain(args=None):\n \"\"\"Teardown for main xonsh entry point, accepts parsed arguments.\"\"\"\n del builtins.__xonsh_shell__\n\n\n@contextmanager\ndef main_context(argv=None):\n \"\"\"Generator that runs pre- and post-main() functions. This has two iterations.\n The first yields the shell. 
The second returns None but cleans\n up the shell.\n \"\"\"\n args = premain(argv)\n yield builtins.__xonsh_shell__\n postmain(args)\n\n\n\nif __name__ == '__main__':\n main()\n", "path": "xonsh/main.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"The main xonsh script.\"\"\"\nimport os\nimport sys\nimport builtins\nfrom argparse import ArgumentParser, ArgumentTypeError\nfrom contextlib import contextmanager\n\nfrom xonsh import __version__\nfrom xonsh.shell import Shell\nfrom xonsh.pretty import pprint\nfrom xonsh.jobs import ignore_sigtstp\n\ndef path_argument(s):\n \"\"\"Return a path only if the path is actually legal\n\n This is very similar to argparse.FileType, except that it doesn't return\n an open file handle, but rather simply validates the path.\"\"\"\n\n s = os.path.abspath(os.path.expanduser(s))\n if not os.path.isfile(s):\n raise ArgumentTypeError('\"%s\" must be a valid path to a file' % s)\n return s\n\n\nparser = ArgumentParser(description='xonsh')\nparser.add_argument('-V', '--version',\n action='version',\n version='/'.join(('xonsh', __version__)),\n help='show version information and exit')\nparser.add_argument('-c',\n help=\"Run a single command and exit\",\n dest='command',\n required=False,\n default=None)\nparser.add_argument('-i',\n help='force running in interactive mode',\n dest='force_interactive',\n action='store_true',\n default=False)\nparser.add_argument('-l',\n help='run as a login shell',\n dest='login',\n action='store_true',\n default=False)\nparser.add_argument('--config-path',\n help='specify a custom static configuration file',\n dest='config_path',\n type=path_argument)\nparser.add_argument('--no-rc',\n help=\"Do not load the .xonshrc file\",\n dest='norc',\n action='store_true',\n default=False)\nparser.add_argument('-D',\n dest='defines',\n help='define an environment variable, in the form of '\n '-DNAME=VAL. May be used many times.',\n metavar='ITEM',\n nargs='*',\n default=None)\nparser.add_argument('--shell-type',\n help='What kind of shell should be used. '\n 'Possible options: readline, prompt_toolkit. '\n 'Warning! 
If set this overrides $SHELL_TYPE variable.',\n dest='shell_type',\n choices=('readline', 'prompt_toolkit'),\n default=None)\nparser.add_argument('file',\n metavar='script-file',\n help='If present, execute the script in script-file'\n ' and exit',\n nargs='?',\n default=None)\nparser.add_argument('args',\n metavar='args',\n help='Additional arguments to the script specified'\n ' by script-file',\n nargs='*',\n default=[])\n\n\ndef _pprint_displayhook(value):\n if value is not None:\n builtins._ = value\n pprint(value)\n\n\ndef premain(argv=None):\n \"\"\"Setup for main xonsh entry point, returns parsed arguments.\"\"\"\n args = parser.parse_args(argv)\n shell_kwargs = {'shell_type': args.shell_type}\n if args.norc:\n shell_kwargs['ctx'] = {}\n if args.config_path:\n shell_kwargs['ctx']= {'XONSHCONFIG': args.config_path}\n setattr(sys, 'displayhook', _pprint_displayhook)\n shell = builtins.__xonsh_shell__ = Shell(**shell_kwargs)\n from xonsh import imphooks\n env = builtins.__xonsh_env__\n if args.defines is not None:\n env.update([x.split('=', 1) for x in args.defines])\n if args.login:\n env['XONSH_LOGIN'] = True\n env['XONSH_INTERACTIVE'] = False\n return args\n\n\ndef main(argv=None):\n \"\"\"Main entry point for xonsh cli.\"\"\"\n args = premain(argv)\n env = builtins.__xonsh_env__\n shell = builtins.__xonsh_shell__\n if args.command is not None:\n # run a single command and exit\n shell.default(args.command)\n elif args.file is not None:\n # run a script contained in a file\n if os.path.isfile(args.file):\n with open(args.file) as f:\n code = f.read()\n code = code if code.endswith('\\n') else code + '\\n'\n env['ARGS'] = [args.file] + args.args\n code = shell.execer.compile(code, mode='exec', glbs=shell.ctx)\n shell.execer.exec(code, mode='exec', glbs=shell.ctx)\n else:\n print('xonsh: {0}: No such file or directory.'.format(args.file))\n elif not sys.stdin.isatty() and not args.force_interactive:\n # run a script given on stdin\n code = sys.stdin.read()\n code = code if code.endswith('\\n') else code + '\\n'\n code = shell.execer.compile(code, mode='exec', glbs=shell.ctx)\n shell.execer.exec(code, mode='exec', glbs=shell.ctx)\n else:\n # otherwise, enter the shell\n env['XONSH_INTERACTIVE'] = True\n ignore_sigtstp()\n shell.cmdloop()\n postmain(args)\n\n\ndef postmain(args=None):\n \"\"\"Teardown for main xonsh entry point, accepts parsed arguments.\"\"\"\n del builtins.__xonsh_shell__\n\n\n@contextmanager\ndef main_context(argv=None):\n \"\"\"Generator that runs pre- and post-main() functions. This has two iterations.\n The first yields the shell. The second returns None but cleans\n up the shell.\n \"\"\"\n args = premain(argv)\n yield builtins.__xonsh_shell__\n postmain(args)\n\n\n\nif __name__ == '__main__':\n main()\n", "path": "xonsh/main.py"}]} | 1,756 | 453 |
gh_patches_debug_23517 | rasdani/github-patches | git_diff | hydroshare__hydroshare-2550 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't add user via admin panel
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hs_core/admin.py`
Content:
```
1 from mezzanine.pages.admin import PageAdmin
2 from django.contrib.gis import admin
3 from django.contrib.contenttypes.admin import GenericTabularInline
4 from .models import *
5
6 class InlineResourceFiles(GenericTabularInline):
7 model = ResourceFile
8
9 class GenericResourceAdmin(PageAdmin):
10 inlines = PageAdmin.inlines + [InlineResourceFiles]
11
12 admin.site.register(GenericResource, GenericResourceAdmin)
13
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hs_core/admin.py b/hs_core/admin.py
--- a/hs_core/admin.py
+++ b/hs_core/admin.py
@@ -1,12 +1,35 @@
-from mezzanine.pages.admin import PageAdmin
+from django import forms
+from django.contrib.auth.admin import UserAdmin
+from django.contrib.auth.forms import UserCreationForm
+from django.contrib.auth.models import User
from django.contrib.gis import admin
from django.contrib.contenttypes.admin import GenericTabularInline
+from django.utils.translation import ugettext_lazy as _
+
+from mezzanine.pages.admin import PageAdmin
+
from .models import *
+
+class UserCreationFormExtended(UserCreationForm):
+ def __init__(self, *args, **kwargs):
+ super(UserCreationFormExtended, self).__init__(*args, **kwargs)
+ self.fields['email'] = forms.EmailField(label=_("E-mail"), max_length=75)
+
+UserAdmin.add_form = UserCreationFormExtended
+UserAdmin.add_fieldsets = (
+ (None, {
+ 'classes': ('wide',),
+ 'fields': ('email', 'username', 'password1', 'password2',)
+ }),
+)
+
class InlineResourceFiles(GenericTabularInline):
model = ResourceFile
class GenericResourceAdmin(PageAdmin):
inlines = PageAdmin.inlines + [InlineResourceFiles]
+admin.site.unregister(User)
+admin.site.register(User, UserAdmin)
admin.site.register(GenericResource, GenericResourceAdmin)
| {"golden_diff": "diff --git a/hs_core/admin.py b/hs_core/admin.py\n--- a/hs_core/admin.py\n+++ b/hs_core/admin.py\n@@ -1,12 +1,35 @@\n-from mezzanine.pages.admin import PageAdmin\n+from django import forms\n+from django.contrib.auth.admin import UserAdmin\n+from django.contrib.auth.forms import UserCreationForm\n+from django.contrib.auth.models import User\n from django.contrib.gis import admin\n from django.contrib.contenttypes.admin import GenericTabularInline\n+from django.utils.translation import ugettext_lazy as _\n+\n+from mezzanine.pages.admin import PageAdmin\n+\n from .models import *\n \n+\n+class UserCreationFormExtended(UserCreationForm):\n+ def __init__(self, *args, **kwargs):\n+ super(UserCreationFormExtended, self).__init__(*args, **kwargs)\n+ self.fields['email'] = forms.EmailField(label=_(\"E-mail\"), max_length=75)\n+\n+UserAdmin.add_form = UserCreationFormExtended\n+UserAdmin.add_fieldsets = (\n+ (None, {\n+ 'classes': ('wide',),\n+ 'fields': ('email', 'username', 'password1', 'password2',)\n+ }),\n+)\n+\n class InlineResourceFiles(GenericTabularInline):\n model = ResourceFile\n \n class GenericResourceAdmin(PageAdmin):\n inlines = PageAdmin.inlines + [InlineResourceFiles]\n \n+admin.site.unregister(User)\n+admin.site.register(User, UserAdmin)\n admin.site.register(GenericResource, GenericResourceAdmin)\n", "issue": "Can't add user via admin panel\n\n", "before_files": [{"content": "from mezzanine.pages.admin import PageAdmin\nfrom django.contrib.gis import admin\nfrom django.contrib.contenttypes.admin import GenericTabularInline\nfrom .models import *\n\nclass InlineResourceFiles(GenericTabularInline):\n model = ResourceFile\n\nclass GenericResourceAdmin(PageAdmin):\n inlines = PageAdmin.inlines + [InlineResourceFiles]\n\nadmin.site.register(GenericResource, GenericResourceAdmin)\n", "path": "hs_core/admin.py"}], "after_files": [{"content": "from django import forms\nfrom django.contrib.auth.admin import UserAdmin\nfrom django.contrib.auth.forms import UserCreationForm\nfrom django.contrib.auth.models import User\nfrom django.contrib.gis import admin\nfrom django.contrib.contenttypes.admin import GenericTabularInline\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom mezzanine.pages.admin import PageAdmin\n\nfrom .models import *\n\n\nclass UserCreationFormExtended(UserCreationForm):\n def __init__(self, *args, **kwargs):\n super(UserCreationFormExtended, self).__init__(*args, **kwargs)\n self.fields['email'] = forms.EmailField(label=_(\"E-mail\"), max_length=75)\n\nUserAdmin.add_form = UserCreationFormExtended\nUserAdmin.add_fieldsets = (\n (None, {\n 'classes': ('wide',),\n 'fields': ('email', 'username', 'password1', 'password2',)\n }),\n)\n\nclass InlineResourceFiles(GenericTabularInline):\n model = ResourceFile\n\nclass GenericResourceAdmin(PageAdmin):\n inlines = PageAdmin.inlines + [InlineResourceFiles]\n\nadmin.site.unregister(User)\nadmin.site.register(User, UserAdmin)\nadmin.site.register(GenericResource, GenericResourceAdmin)\n", "path": "hs_core/admin.py"}]} | 369 | 328 |
gh_patches_debug_14239 | rasdani/github-patches | git_diff | RUCAIBox__RecBole-692 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[🐛BUG] case_study.py 中, 输入的用户id只有一个时, full_sort_topk 报错
代码
```python3
import torch
import pandas as pd
from recbole.model.general_recommender.bpr import BPR
from recbole.config import Config
from recbole.data import create_dataset, data_preparation
from recbole.utils.case_study import full_sort_topk
param_dict = {
'use_gpu': False
}
# 加载 BPR 模型
bpr_model_path = "D:\\source\\recbole-0.2.0\\app\\ex\\saved\\BPR-Jan-18-2021_14-03-52.pth"
bpr_config = Config(model='BPR',
dataset='ml-100k',
config_dict=param_dict)
dataset = create_dataset(bpr_config)
train_data, valid_data, test_data = data_preparation(bpr_config, dataset)
bpr_model = BPR(bpr_config, train_data)
checkpoint = torch.load(bpr_model_path)
bpr_model.load_state_dict(checkpoint['state_dict'])
bpr_model.eval()
uid_series = dataset.token2id(dataset.uid_field, ['200']) # 原始数据集中的用户id,变换为训练内部使用的索引id
full_sort_topk(uid_series, bpr_model, test_data, 10)
```
报错信息
Traceback (most recent call last):
File "D:/source/recbole-0.2.0/app/ex/bpr_predict_ml100k.py", line 33, in <module>
full_sort_topk(uid_series, bpr_model, test_data, 10)
File "D:\source\recbole-0.2.0\recbole\utils\case_study.py", line 87, in full_sort_topk
scores = full_sort_scores(uid_series, model, test_data)
File "D:\Anaconda3\envs\pytorch\lib\site-packages\torch\autograd\grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "D:\source\recbole-0.2.0\recbole\utils\case_study.py", line 45, in full_sort_scores
history_row = torch.cat([torch.full_like(hist_iid, i) for i, hist_iid in enumerate(history_item)])
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recbole/utils/case_study.py`
Content:
```
1 # @Time : 2020/12/25
2 # @Author : Yushuo Chen
3 # @Email : [email protected]
4
5 # UPDATE
6 # @Time : 2020/12/25
7 # @Author : Yushuo Chen
8 # @email : [email protected]
9
10 """
11 recbole.utils.case_study
12 #####################################
13 """
14
15 import numpy as np
16 import torch
17
18 from recbole.data.dataloader.general_dataloader import GeneralFullDataLoader
19 from recbole.data.dataloader.sequential_dataloader import SequentialFullDataLoader
20
21
22 @torch.no_grad()
23 def full_sort_scores(uid_series, model, test_data):
24 """Calculate the scores of all items for each user in uid_series.
25
26 Note:
27 The score of [pad] and history items will be set into -inf.
28
29 Args:
30 uid_series (numpy.ndarray): User id series
31 model (AbstractRecommender): Model to predict
32 test_data (AbstractDataLoader): The test_data of model
33
34 Returns:
35 torch.Tensor: the scores of all items for each user in uid_series.
36 """
37 uid_field = test_data.dataset.uid_field
38 dataset = test_data.dataset
39 model.eval()
40
41 if isinstance(test_data, GeneralFullDataLoader):
42 index = np.isin(test_data.user_df[uid_field].numpy(), uid_series)
43 input_interaction = test_data.user_df[index]
44 history_item = test_data.uid2history_item[input_interaction[uid_field]]
45 history_row = torch.cat([torch.full_like(hist_iid, i) for i, hist_iid in enumerate(history_item)])
46 history_col = torch.cat(list(history_item))
47 history_index = history_row, history_col
48 elif isinstance(test_data, SequentialFullDataLoader):
49 index = np.isin(test_data.uid_list, uid_series)
50 input_interaction = test_data.augmentation(
51 test_data.item_list_index[index], test_data.target_index[index], test_data.item_list_length[index]
52 )
53 history_index = None
54 else:
55 raise NotImplementedError
56
57 # Get scores of all items
58 try:
59 scores = model.full_sort_predict(input_interaction)
60 except NotImplementedError:
61 input_interaction = input_interaction.repeat(dataset.item_num)
62 input_interaction.update(test_data.get_item_feature().repeat(len(uid_series)))
63 scores = model.predict(input_interaction)
64
65 scores = scores.view(-1, dataset.item_num)
66 scores[:, 0] = -np.inf # set scores of [pad] to -inf
67 if history_index is not None:
68 scores[history_index] = -np.inf # set scores of history items to -inf
69
70 return scores
71
72
73 def full_sort_topk(uid_series, model, test_data, k):
74 """Calculate the top-k items' scores and ids for each user in uid_series.
75
76 Args:
77 uid_series (numpy.ndarray): User id series
78 model (AbstractRecommender): Model to predict
79 test_data (AbstractDataLoader): The test_data of model
80 k (int): The top-k items.
81
82 Returns:
83 tuple:
84 - topk_scores (torch.Tensor): The scores of topk items.
85 - topk_index (torch.Tensor): The index of topk items, which is also the internal ids of items.
86 """
87 scores = full_sort_scores(uid_series, model, test_data)
88 return torch.topk(scores, k)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recbole/utils/case_study.py b/recbole/utils/case_study.py
--- a/recbole/utils/case_study.py
+++ b/recbole/utils/case_study.py
@@ -41,7 +41,7 @@
if isinstance(test_data, GeneralFullDataLoader):
index = np.isin(test_data.user_df[uid_field].numpy(), uid_series)
input_interaction = test_data.user_df[index]
- history_item = test_data.uid2history_item[input_interaction[uid_field]]
+ history_item = test_data.uid2history_item[input_interaction[uid_field].numpy()]
history_row = torch.cat([torch.full_like(hist_iid, i) for i, hist_iid in enumerate(history_item)])
history_col = torch.cat(list(history_item))
history_index = history_row, history_col
| {"golden_diff": "diff --git a/recbole/utils/case_study.py b/recbole/utils/case_study.py\n--- a/recbole/utils/case_study.py\n+++ b/recbole/utils/case_study.py\n@@ -41,7 +41,7 @@\n if isinstance(test_data, GeneralFullDataLoader):\n index = np.isin(test_data.user_df[uid_field].numpy(), uid_series)\n input_interaction = test_data.user_df[index]\n- history_item = test_data.uid2history_item[input_interaction[uid_field]]\n+ history_item = test_data.uid2history_item[input_interaction[uid_field].numpy()]\n history_row = torch.cat([torch.full_like(hist_iid, i) for i, hist_iid in enumerate(history_item)])\n history_col = torch.cat(list(history_item))\n history_index = history_row, history_col\n", "issue": "[\ud83d\udc1bBUG] case_study.py \u4e2d\uff0c \u8f93\u5165\u7684\u7528\u6237id\u53ea\u6709\u4e00\u4e2a\u65f6\uff0c full_sort_topk \u62a5\u9519\n\u4ee3\u7801\r\n```python3\r\nimport torch\r\nimport pandas as pd\r\n\r\nfrom recbole.model.general_recommender.bpr import BPR\r\nfrom recbole.config import Config\r\nfrom recbole.data import create_dataset, data_preparation\r\nfrom recbole.utils.case_study import full_sort_topk\r\n\r\nparam_dict = {\r\n 'use_gpu': False\r\n}\r\n\r\n# \u52a0\u8f7d BPR \u6a21\u578b\r\nbpr_model_path = \"D:\\\\source\\\\recbole-0.2.0\\\\app\\\\ex\\\\saved\\\\BPR-Jan-18-2021_14-03-52.pth\"\r\nbpr_config = Config(model='BPR',\r\n dataset='ml-100k',\r\n config_dict=param_dict)\r\ndataset = create_dataset(bpr_config)\r\ntrain_data, valid_data, test_data = data_preparation(bpr_config, dataset)\r\n\r\nbpr_model = BPR(bpr_config, train_data)\r\ncheckpoint = torch.load(bpr_model_path)\r\nbpr_model.load_state_dict(checkpoint['state_dict'])\r\nbpr_model.eval()\r\n\r\nuid_series = dataset.token2id(dataset.uid_field, ['200']) # \u539f\u59cb\u6570\u636e\u96c6\u4e2d\u7684\u7528\u6237id\uff0c\u53d8\u6362\u4e3a\u8bad\u7ec3\u5185\u90e8\u4f7f\u7528\u7684\u7d22\u5f15id\r\n\r\nfull_sort_topk(uid_series, bpr_model, test_data, 10)\r\n```\r\n\r\n\u62a5\u9519\u4fe1\u606f\r\nTraceback (most recent call last):\r\n File \"D:/source/recbole-0.2.0/app/ex/bpr_predict_ml100k.py\", line 33, in <module>\r\n full_sort_topk(uid_series, bpr_model, test_data, 10)\r\n File \"D:\\source\\recbole-0.2.0\\recbole\\utils\\case_study.py\", line 87, in full_sort_topk\r\n scores = full_sort_scores(uid_series, model, test_data)\r\n File \"D:\\Anaconda3\\envs\\pytorch\\lib\\site-packages\\torch\\autograd\\grad_mode.py\", line 26, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"D:\\source\\recbole-0.2.0\\recbole\\utils\\case_study.py\", line 45, in full_sort_scores\r\n history_row = torch.cat([torch.full_like(hist_iid, i) for i, hist_iid in enumerate(history_item)])\r\nRuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated\n", "before_files": [{"content": "# @Time : 2020/12/25\n# @Author : Yushuo Chen\n# @Email : [email protected]\n\n# UPDATE\n# @Time : 2020/12/25\n# @Author : Yushuo Chen\n# @email : [email protected]\n\n\"\"\"\nrecbole.utils.case_study\n#####################################\n\"\"\"\n\nimport numpy as np\nimport torch\n\nfrom recbole.data.dataloader.general_dataloader import GeneralFullDataLoader\nfrom recbole.data.dataloader.sequential_dataloader import SequentialFullDataLoader\n\n\[email protected]_grad()\ndef full_sort_scores(uid_series, model, test_data):\n \"\"\"Calculate the scores of all items for each user in uid_series.\n\n Note:\n The score of [pad] and history items will be set into -inf.\n\n Args:\n uid_series (numpy.ndarray): User id series\n model 
(AbstractRecommender): Model to predict\n test_data (AbstractDataLoader): The test_data of model\n\n Returns:\n torch.Tensor: the scores of all items for each user in uid_series.\n \"\"\"\n uid_field = test_data.dataset.uid_field\n dataset = test_data.dataset\n model.eval()\n\n if isinstance(test_data, GeneralFullDataLoader):\n index = np.isin(test_data.user_df[uid_field].numpy(), uid_series)\n input_interaction = test_data.user_df[index]\n history_item = test_data.uid2history_item[input_interaction[uid_field]]\n history_row = torch.cat([torch.full_like(hist_iid, i) for i, hist_iid in enumerate(history_item)])\n history_col = torch.cat(list(history_item))\n history_index = history_row, history_col\n elif isinstance(test_data, SequentialFullDataLoader):\n index = np.isin(test_data.uid_list, uid_series)\n input_interaction = test_data.augmentation(\n test_data.item_list_index[index], test_data.target_index[index], test_data.item_list_length[index]\n )\n history_index = None\n else:\n raise NotImplementedError\n\n # Get scores of all items\n try:\n scores = model.full_sort_predict(input_interaction)\n except NotImplementedError:\n input_interaction = input_interaction.repeat(dataset.item_num)\n input_interaction.update(test_data.get_item_feature().repeat(len(uid_series)))\n scores = model.predict(input_interaction)\n\n scores = scores.view(-1, dataset.item_num)\n scores[:, 0] = -np.inf # set scores of [pad] to -inf\n if history_index is not None:\n scores[history_index] = -np.inf # set scores of history items to -inf\n\n return scores\n\n\ndef full_sort_topk(uid_series, model, test_data, k):\n \"\"\"Calculate the top-k items' scores and ids for each user in uid_series.\n\n Args:\n uid_series (numpy.ndarray): User id series\n model (AbstractRecommender): Model to predict\n test_data (AbstractDataLoader): The test_data of model\n k (int): The top-k items.\n\n Returns:\n tuple:\n - topk_scores (torch.Tensor): The scores of topk items.\n - topk_index (torch.Tensor): The index of topk items, which is also the internal ids of items.\n \"\"\"\n scores = full_sort_scores(uid_series, model, test_data)\n return torch.topk(scores, k)\n", "path": "recbole/utils/case_study.py"}], "after_files": [{"content": "# @Time : 2020/12/25\n# @Author : Yushuo Chen\n# @Email : [email protected]\n\n# UPDATE\n# @Time : 2020/12/25\n# @Author : Yushuo Chen\n# @email : [email protected]\n\n\"\"\"\nrecbole.utils.case_study\n#####################################\n\"\"\"\n\nimport numpy as np\nimport torch\n\nfrom recbole.data.dataloader.general_dataloader import GeneralFullDataLoader\nfrom recbole.data.dataloader.sequential_dataloader import SequentialFullDataLoader\n\n\[email protected]_grad()\ndef full_sort_scores(uid_series, model, test_data):\n \"\"\"Calculate the scores of all items for each user in uid_series.\n\n Note:\n The score of [pad] and history items will be set into -inf.\n\n Args:\n uid_series (numpy.ndarray): User id series\n model (AbstractRecommender): Model to predict\n test_data (AbstractDataLoader): The test_data of model\n\n Returns:\n torch.Tensor: the scores of all items for each user in uid_series.\n \"\"\"\n uid_field = test_data.dataset.uid_field\n dataset = test_data.dataset\n model.eval()\n\n if isinstance(test_data, GeneralFullDataLoader):\n index = np.isin(test_data.user_df[uid_field].numpy(), uid_series)\n input_interaction = test_data.user_df[index]\n history_item = test_data.uid2history_item[input_interaction[uid_field].numpy()]\n history_row = torch.cat([torch.full_like(hist_iid, i) for 
i, hist_iid in enumerate(history_item)])\n history_col = torch.cat(list(history_item))\n history_index = history_row, history_col\n elif isinstance(test_data, SequentialFullDataLoader):\n index = np.isin(test_data.uid_list, uid_series)\n input_interaction = test_data.augmentation(\n test_data.item_list_index[index], test_data.target_index[index], test_data.item_list_length[index]\n )\n history_index = None\n else:\n raise NotImplementedError\n\n # Get scores of all items\n try:\n scores = model.full_sort_predict(input_interaction)\n except NotImplementedError:\n input_interaction = input_interaction.repeat(dataset.item_num)\n input_interaction.update(test_data.get_item_feature().repeat(len(uid_series)))\n scores = model.predict(input_interaction)\n\n scores = scores.view(-1, dataset.item_num)\n scores[:, 0] = -np.inf # set scores of [pad] to -inf\n if history_index is not None:\n scores[history_index] = -np.inf # set scores of history items to -inf\n\n return scores\n\n\ndef full_sort_topk(uid_series, model, test_data, k):\n \"\"\"Calculate the top-k items' scores and ids for each user in uid_series.\n\n Args:\n uid_series (numpy.ndarray): User id series\n model (AbstractRecommender): Model to predict\n test_data (AbstractDataLoader): The test_data of model\n k (int): The top-k items.\n\n Returns:\n tuple:\n - topk_scores (torch.Tensor): The scores of topk items.\n - topk_index (torch.Tensor): The index of topk items, which is also the internal ids of items.\n \"\"\"\n scores = full_sort_scores(uid_series, model, test_data)\n return torch.topk(scores, k)\n", "path": "recbole/utils/case_study.py"}]} | 1,710 | 175 |
gh_patches_debug_57934 | rasdani/github-patches | git_diff | scrapy__scrapy-1905 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
response.body is duplicate
Access the [text page (not mine)](http://files.qidian.com/Author4/3615059/88542882.txt) with a browser or wget and you will find the response content is not duplicated, but scrapy's `response.body` is duplicated. I tried setting scrapy's headers to match a real browser's, but the body is still duplicated.
Just run the following sample code and you will see the issue.
```
scrapy shell "http://files.qidian.com/Author4/3615059/88542882.txt"
```
Sorry for my bad English.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/utils/gz.py`
Content:
```
1 import struct
2
3 try:
4 from cStringIO import StringIO as BytesIO
5 except ImportError:
6 from io import BytesIO
7 from gzip import GzipFile
8
9 import six
10 import re
11
12 # - Python>=3.5 GzipFile's read() has issues returning leftover
13 # uncompressed data when input is corrupted
14 # (regression or bug-fix compared to Python 3.4)
15 # - read1(), which fetches data before raising EOFError on next call
16 # works here but is only available from Python>=3.3
17 # - scrapy does not support Python 3.2
18 # - Python 2.7 GzipFile works fine with standard read() + extrabuf
19 if six.PY2:
20 def read1(gzf, size=-1):
21 return gzf.read(size)
22 else:
23 def read1(gzf, size=-1):
24 return gzf.read1(size)
25
26
27 def gunzip(data):
28 """Gunzip the given data and return as much data as possible.
29
30 This is resilient to CRC checksum errors.
31 """
32 f = GzipFile(fileobj=BytesIO(data))
33 output = b''
34 chunk = b'.'
35 while chunk:
36 try:
37 chunk = read1(f, 8196)
38 output += chunk
39 except (IOError, EOFError, struct.error):
40 # complete only if there is some data, otherwise re-raise
41 # see issue 87 about catching struct.error
42 # some pages are quite small so output is '' and f.extrabuf
43 # contains the whole page content
44 if output or getattr(f, 'extrabuf', None):
45 try:
46 output += f.extrabuf
47 finally:
48 break
49 else:
50 raise
51 return output
52
53 _is_gzipped = re.compile(br'^application/(x-)?gzip\b', re.I).search
54 _is_octetstream = re.compile(br'^(application|binary)/octet-stream\b', re.I).search
55
56 def is_gzipped(response):
57 """Return True if the response is gzipped, or False otherwise"""
58 ctype = response.headers.get('Content-Type', b'')
59 cenc = response.headers.get('Content-Encoding', b'').lower()
60 return (_is_gzipped(ctype) or
61 (_is_octetstream(ctype) and cenc in (b'gzip', b'x-gzip')))
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/utils/gz.py b/scrapy/utils/gz.py
--- a/scrapy/utils/gz.py
+++ b/scrapy/utils/gz.py
@@ -43,7 +43,7 @@
# contains the whole page content
if output or getattr(f, 'extrabuf', None):
try:
- output += f.extrabuf
+ output += f.extrabuf[-f.extrasize:]
finally:
break
else:
| {"golden_diff": "diff --git a/scrapy/utils/gz.py b/scrapy/utils/gz.py\n--- a/scrapy/utils/gz.py\n+++ b/scrapy/utils/gz.py\n@@ -43,7 +43,7 @@\n # contains the whole page content\n if output or getattr(f, 'extrabuf', None):\n try:\n- output += f.extrabuf\n+ output += f.extrabuf[-f.extrasize:]\n finally:\n break\n else:\n", "issue": "response.body is duplicate\nAccess the [text page(not mine)](http://files.qidian.com/Author4/3615059/88542882.txt) by browsers or wget and you will find the response content is not duplicate, but scrapy's `response.body` is duplicate. I had tried set the scrapy's headers same as a real brower's, but it is still duplicate.\n\nJust use the follow sample code, and you will find the issue.\n\n```\nscrapy shell \"http://files.qidian.com/Author4/3615059/88542882.txt\"\n```\n\nSorry for my bad english. \n\n", "before_files": [{"content": "import struct\n\ntry:\n from cStringIO import StringIO as BytesIO\nexcept ImportError:\n from io import BytesIO\nfrom gzip import GzipFile\n\nimport six\nimport re\n\n# - Python>=3.5 GzipFile's read() has issues returning leftover\n# uncompressed data when input is corrupted\n# (regression or bug-fix compared to Python 3.4)\n# - read1(), which fetches data before raising EOFError on next call\n# works here but is only available from Python>=3.3\n# - scrapy does not support Python 3.2\n# - Python 2.7 GzipFile works fine with standard read() + extrabuf\nif six.PY2:\n def read1(gzf, size=-1):\n return gzf.read(size)\nelse:\n def read1(gzf, size=-1):\n return gzf.read1(size)\n\n\ndef gunzip(data):\n \"\"\"Gunzip the given data and return as much data as possible.\n\n This is resilient to CRC checksum errors.\n \"\"\"\n f = GzipFile(fileobj=BytesIO(data))\n output = b''\n chunk = b'.'\n while chunk:\n try:\n chunk = read1(f, 8196)\n output += chunk\n except (IOError, EOFError, struct.error):\n # complete only if there is some data, otherwise re-raise\n # see issue 87 about catching struct.error\n # some pages are quite small so output is '' and f.extrabuf\n # contains the whole page content\n if output or getattr(f, 'extrabuf', None):\n try:\n output += f.extrabuf\n finally:\n break\n else:\n raise\n return output\n\n_is_gzipped = re.compile(br'^application/(x-)?gzip\\b', re.I).search\n_is_octetstream = re.compile(br'^(application|binary)/octet-stream\\b', re.I).search\n\ndef is_gzipped(response):\n \"\"\"Return True if the response is gzipped, or False otherwise\"\"\"\n ctype = response.headers.get('Content-Type', b'')\n cenc = response.headers.get('Content-Encoding', b'').lower()\n return (_is_gzipped(ctype) or\n (_is_octetstream(ctype) and cenc in (b'gzip', b'x-gzip')))\n", "path": "scrapy/utils/gz.py"}], "after_files": [{"content": "import struct\n\ntry:\n from cStringIO import StringIO as BytesIO\nexcept ImportError:\n from io import BytesIO\nfrom gzip import GzipFile\n\nimport six\nimport re\n\n# - Python>=3.5 GzipFile's read() has issues returning leftover\n# uncompressed data when input is corrupted\n# (regression or bug-fix compared to Python 3.4)\n# - read1(), which fetches data before raising EOFError on next call\n# works here but is only available from Python>=3.3\n# - scrapy does not support Python 3.2\n# - Python 2.7 GzipFile works fine with standard read() + extrabuf\nif six.PY2:\n def read1(gzf, size=-1):\n return gzf.read(size)\nelse:\n def read1(gzf, size=-1):\n return gzf.read1(size)\n\n\ndef gunzip(data):\n \"\"\"Gunzip the given data and return as much data as possible.\n\n This is resilient to CRC checksum errors.\n 
\"\"\"\n f = GzipFile(fileobj=BytesIO(data))\n output = b''\n chunk = b'.'\n while chunk:\n try:\n chunk = read1(f, 8196)\n output += chunk\n except (IOError, EOFError, struct.error):\n # complete only if there is some data, otherwise re-raise\n # see issue 87 about catching struct.error\n # some pages are quite small so output is '' and f.extrabuf\n # contains the whole page content\n if output or getattr(f, 'extrabuf', None):\n try:\n output += f.extrabuf[-f.extrasize:]\n finally:\n break\n else:\n raise\n return output\n\n_is_gzipped = re.compile(br'^application/(x-)?gzip\\b', re.I).search\n_is_octetstream = re.compile(br'^(application|binary)/octet-stream\\b', re.I).search\n\ndef is_gzipped(response):\n \"\"\"Return True if the response is gzipped, or False otherwise\"\"\"\n ctype = response.headers.get('Content-Type', b'')\n cenc = response.headers.get('Content-Encoding', b'').lower()\n return (_is_gzipped(ctype) or\n (_is_octetstream(ctype) and cenc in (b'gzip', b'x-gzip')))\n", "path": "scrapy/utils/gz.py"}]} | 1,044 | 107 |
gh_patches_debug_26561 | rasdani/github-patches | git_diff | OpenEnergyPlatform__oeplatform-611 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix OEO page and add OEO pages to navigation bar
- [x] Fix linebreak in link (Dipl.-Ing.)
- [x] Add Daniel to list
- [x] Add pages to navigation bar
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ontology/views.py`
Content:
```
1 from django.shortcuts import render, HttpResponse, redirect, Http404
2 from django.views import View
3 from rdflib import Graph, RDFS
4 from oeplatform.settings import ONTOLOGY_FOLDER
5 from collections import OrderedDict
6
7 import os
8 import re
9
10
11 def collect_modules(path):
12 modules = dict()
13 for file in os.listdir(path):
14 if not os.path.isdir(os.path.join(path,file)):
15 match = re.match("^(?P<filename>.*)\.(?P<extension>\w+)$", file)
16 filename, extension = match.groups()
17 if filename not in modules:
18 modules[filename] = dict(extensions=[], comment="No description found")
19 if extension == "owl":
20 g = Graph()
21 g.parse(os.path.join(path, file))
22 root = dict(g.namespaces())['']
23 comments = g.objects(root, RDFS.comment)
24 try:
25 modules[filename]["comment"] = next(comments)
26 except StopIteration:
27 modules[filename]["comment"] = "No description found"
28 modules[filename]["extensions"].append(extension)
29 return modules
30
31 class OntologyOverview(View):
32 def get(self, request, ontology, module_or_id=None, version=None, imports=False):
33 versions = os.listdir(f"{ONTOLOGY_FOLDER}/{ontology}")
34 if not version:
35 version = max((d for d in versions), key=lambda d:[int(x) for x in d.split(".")])
36
37 if "text/html" in request.headers.get("accept","").split(","):
38 main_module = collect_modules(f"{ONTOLOGY_FOLDER}/{ontology}/{version}")
39 main_module_name = list(main_module.keys())[0]
40 main_module = main_module[main_module_name]
41 main_module["name"] = main_module_name
42 submodules = collect_modules(f"{ONTOLOGY_FOLDER}/{ontology}/{version}/modules")
43 # Collect all file names
44
45 imports = collect_modules(f"{ONTOLOGY_FOLDER}/{ontology}/{version}/imports")
46
47 return render(request, "ontology/oeo.html", dict(
48 ontology=ontology,
49 version=version,
50 main_module=main_module,
51 submodules=submodules.items(),
52 imports=imports.items()
53 ))
54 else:
55 module_name = None
56 if module_or_id:
57 if imports:
58 submodules = collect_modules(f"{ONTOLOGY_FOLDER}/{ontology}/{version}/imports")
59 else:
60 submodules = collect_modules(f"{ONTOLOGY_FOLDER}/{ontology}/{version}/modules")
61 # If module_or_id is the name of a valid submodule, use this module
62 if module_or_id in submodules:
63 module_name = module_or_id
64 if imports:
65 return redirect(f"/ontology/{ontology}/releases/{version}/imports/{module_name}.owl")
66 else:
67 return redirect(f"/ontology/{ontology}/releases/{version}/{module_name}.owl")
68 # If no module was requested or the requested id was not a module, serve main ontology
69 if module_name is None:
70 main_module = collect_modules(f"{ONTOLOGY_FOLDER}/{ontology}/{version}")
71 module_name = list(main_module.keys())[0]
72 return redirect(f"/ontology/{ontology}/releases/{version}/{module_name}.owl")
73
74
75 class OntologyStatics(View):
76 def get(self, request, ontology, file, version=None, extension=None, imports=False):
77 """
78 Returns the requested file `{file}.{extension}` of version `version`
79 of ontology `ontology`
80
81 :param version: default: highest version in folder
82 :param extension: default: `.owl`
83 :return:
84 """
85
86 if not extension:
87 extension = "owl"
88 if not version:
89 version = max((d for d in os.listdir(f"{ONTOLOGY_FOLDER}/{ontology}")), key=lambda d:[int(x) for x in d.split(".")])
90 if imports:
91 file_path = f"{ONTOLOGY_FOLDER}/{ontology}/{version}/imports/{file}.{extension}"
92 else:
93 file_path = f"{ONTOLOGY_FOLDER}/{ontology}/{version}/{file}.{extension}"
94 if os.path.exists(file_path):
95 with open(file_path, "br") as f:
96 response = HttpResponse(f, content_type="application/rdf+xml; charset=utf-8")
97 response["Content-Disposition"] = f'attachment; filename="{file}.{extension}"'
98 return response
99 else:
100 file_path = f"{ONTOLOGY_FOLDER}/{ontology}/{version}/modules/{file}.{extension}"
101 if not os.path.exists(file_path):
102 raise Http404
103 with open(file_path, "br") as f:
104 response = HttpResponse(f, content_type="application/rdf+xml; charset=utf-8")
105 response["Content-Disposition"] = f'attachment; filename="{file}.{extension}"'
106 return response
```
Path: `ontology/urls.py`
Content:
```
1 from django.conf.urls import url
2 from django.conf.urls.static import static
3 from django.views.generic import TemplateView, RedirectView
4 from os import walk
5 from ontology import views
6 from oeplatform import settings
7
8 urlpatterns = [
9 url(r"^$", TemplateView.as_view(template_name="ontology/about.html")),
10 url(r"^ontology/oeo-steering-committee$",
11 TemplateView.as_view(template_name="ontology/oeo-steering-committee.html"),
12 name="oeo-s-c"),
13 url(r"^(?P<ontology>[\w_-]+)\/releases(\/v?(?P<version>[\d\.]+))?\/imports\/(?P<file>[\w_-]+)(.(?P<extension>[\w_-]+))?$",
14 views.OntologyStatics.as_view(), {"imports": True}),
15
16 url(r"^(?P<ontology>[\w_-]+)\/releases(\/v?(?P<version>[\d\.]+))?\/(?P<file>[\w_-]+)(.(?P<extension>[\w_-]+))?$",
17 views.OntologyStatics.as_view()),
18
19 url(r"^(?P<ontology>[\w_-]+)\/imports\/(?P<module_or_id>[\w\d_-]+)",
20 views.OntologyOverview.as_view(), {"imports": True}),
21
22 url(r"^(?P<ontology>[\w_-]+)(/(?P<module_or_id>[\w\d_-]+))?",
23 views.OntologyOverview.as_view()),
24
25 ]
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ontology/urls.py b/ontology/urls.py
--- a/ontology/urls.py
+++ b/ontology/urls.py
@@ -7,9 +7,11 @@
urlpatterns = [
url(r"^$", TemplateView.as_view(template_name="ontology/about.html")),
- url(r"^ontology/oeo-steering-committee$",
+ url(r"^ontology/$", TemplateView.as_view(template_name="ontology/about.html")),
+ url(r"^oeo-steering-committee/$",
TemplateView.as_view(template_name="ontology/oeo-steering-committee.html"),
name="oeo-s-c"),
+ url(r"^ontology/oeo-steering-committee/$", TemplateView.as_view(template_name="ontology/oeo-steering-committee.html")),
url(r"^(?P<ontology>[\w_-]+)\/releases(\/v?(?P<version>[\d\.]+))?\/imports\/(?P<file>[\w_-]+)(.(?P<extension>[\w_-]+))?$",
views.OntologyStatics.as_view(), {"imports": True}),
diff --git a/ontology/views.py b/ontology/views.py
--- a/ontology/views.py
+++ b/ontology/views.py
@@ -30,6 +30,8 @@
class OntologyOverview(View):
def get(self, request, ontology, module_or_id=None, version=None, imports=False):
+ if not os.path.exists(f"{ONTOLOGY_FOLDER}/{ontology}"):
+ raise Http404
versions = os.listdir(f"{ONTOLOGY_FOLDER}/{ontology}")
if not version:
version = max((d for d in versions), key=lambda d:[int(x) for x in d.split(".")])
| {"golden_diff": "diff --git a/ontology/urls.py b/ontology/urls.py\n--- a/ontology/urls.py\n+++ b/ontology/urls.py\n@@ -7,9 +7,11 @@\n \n urlpatterns = [\n url(r\"^$\", TemplateView.as_view(template_name=\"ontology/about.html\")),\n- url(r\"^ontology/oeo-steering-committee$\",\n+ url(r\"^ontology/$\", TemplateView.as_view(template_name=\"ontology/about.html\")),\n+ url(r\"^oeo-steering-committee/$\",\n TemplateView.as_view(template_name=\"ontology/oeo-steering-committee.html\"),\n name=\"oeo-s-c\"),\n+ url(r\"^ontology/oeo-steering-committee/$\", TemplateView.as_view(template_name=\"ontology/oeo-steering-committee.html\")),\n url(r\"^(?P<ontology>[\\w_-]+)\\/releases(\\/v?(?P<version>[\\d\\.]+))?\\/imports\\/(?P<file>[\\w_-]+)(.(?P<extension>[\\w_-]+))?$\",\n views.OntologyStatics.as_view(), {\"imports\": True}),\n \ndiff --git a/ontology/views.py b/ontology/views.py\n--- a/ontology/views.py\n+++ b/ontology/views.py\n@@ -30,6 +30,8 @@\n \n class OntologyOverview(View):\n def get(self, request, ontology, module_or_id=None, version=None, imports=False):\n+ if not os.path.exists(f\"{ONTOLOGY_FOLDER}/{ontology}\"):\n+ raise Http404\n versions = os.listdir(f\"{ONTOLOGY_FOLDER}/{ontology}\")\n if not version:\n version = max((d for d in versions), key=lambda d:[int(x) for x in d.split(\".\")])\n", "issue": "Fix OEO page and add OEO pages to navigation bar\n- [x] Fix linebreak in link (Dipl.-Ing.)\r\n- [x] Add Daniel to list\r\n- [x] Add pages to navigation bar\n", "before_files": [{"content": "from django.shortcuts import render, HttpResponse, redirect, Http404\nfrom django.views import View\nfrom rdflib import Graph, RDFS\nfrom oeplatform.settings import ONTOLOGY_FOLDER\nfrom collections import OrderedDict\n\nimport os\nimport re\n\n\ndef collect_modules(path):\n modules = dict()\n for file in os.listdir(path):\n if not os.path.isdir(os.path.join(path,file)):\n match = re.match(\"^(?P<filename>.*)\\.(?P<extension>\\w+)$\", file)\n filename, extension = match.groups()\n if filename not in modules:\n modules[filename] = dict(extensions=[], comment=\"No description found\")\n if extension == \"owl\":\n g = Graph()\n g.parse(os.path.join(path, file))\n root = dict(g.namespaces())['']\n comments = g.objects(root, RDFS.comment)\n try:\n modules[filename][\"comment\"] = next(comments)\n except StopIteration:\n modules[filename][\"comment\"] = \"No description found\"\n modules[filename][\"extensions\"].append(extension)\n return modules\n\nclass OntologyOverview(View):\n def get(self, request, ontology, module_or_id=None, version=None, imports=False):\n versions = os.listdir(f\"{ONTOLOGY_FOLDER}/{ontology}\")\n if not version:\n version = max((d for d in versions), key=lambda d:[int(x) for x in d.split(\".\")])\n\n if \"text/html\" in request.headers.get(\"accept\",\"\").split(\",\"):\n main_module = collect_modules(f\"{ONTOLOGY_FOLDER}/{ontology}/{version}\")\n main_module_name = list(main_module.keys())[0]\n main_module = main_module[main_module_name]\n main_module[\"name\"] = main_module_name\n submodules = collect_modules(f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/modules\")\n # Collect all file names\n\n imports = collect_modules(f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/imports\")\n\n return render(request, \"ontology/oeo.html\", dict(\n ontology=ontology,\n version=version,\n main_module=main_module,\n submodules=submodules.items(),\n imports=imports.items()\n ))\n else:\n module_name = None\n if module_or_id:\n if imports:\n submodules = 
collect_modules(f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/imports\")\n else:\n submodules = collect_modules(f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/modules\")\n # If module_or_id is the name of a valid submodule, use this module\n if module_or_id in submodules:\n module_name = module_or_id\n if imports:\n return redirect(f\"/ontology/{ontology}/releases/{version}/imports/{module_name}.owl\")\n else:\n return redirect(f\"/ontology/{ontology}/releases/{version}/{module_name}.owl\")\n # If no module was requested or the requested id was not a module, serve main ontology\n if module_name is None:\n main_module = collect_modules(f\"{ONTOLOGY_FOLDER}/{ontology}/{version}\")\n module_name = list(main_module.keys())[0]\n return redirect(f\"/ontology/{ontology}/releases/{version}/{module_name}.owl\")\n\n\nclass OntologyStatics(View):\n def get(self, request, ontology, file, version=None, extension=None, imports=False):\n \"\"\"\n Returns the requested file `{file}.{extension}` of version `version`\n of ontology `ontology`\n\n :param version: default: highest version in folder\n :param extension: default: `.owl`\n :return:\n \"\"\"\n\n if not extension:\n extension = \"owl\"\n if not version:\n version = max((d for d in os.listdir(f\"{ONTOLOGY_FOLDER}/{ontology}\")), key=lambda d:[int(x) for x in d.split(\".\")])\n if imports:\n file_path = f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/imports/{file}.{extension}\"\n else:\n file_path = f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/{file}.{extension}\"\n if os.path.exists(file_path):\n with open(file_path, \"br\") as f:\n response = HttpResponse(f, content_type=\"application/rdf+xml; charset=utf-8\")\n response[\"Content-Disposition\"] = f'attachment; filename=\"{file}.{extension}\"'\n return response\n else:\n file_path = f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/modules/{file}.{extension}\"\n if not os.path.exists(file_path):\n raise Http404\n with open(file_path, \"br\") as f:\n response = HttpResponse(f, content_type=\"application/rdf+xml; charset=utf-8\")\n response[\"Content-Disposition\"] = f'attachment; filename=\"{file}.{extension}\"'\n return response", "path": "ontology/views.py"}, {"content": "from django.conf.urls import url\nfrom django.conf.urls.static import static\nfrom django.views.generic import TemplateView, RedirectView\nfrom os import walk\nfrom ontology import views\nfrom oeplatform import settings\n\nurlpatterns = [\n url(r\"^$\", TemplateView.as_view(template_name=\"ontology/about.html\")),\n url(r\"^ontology/oeo-steering-committee$\",\n TemplateView.as_view(template_name=\"ontology/oeo-steering-committee.html\"),\n name=\"oeo-s-c\"),\n url(r\"^(?P<ontology>[\\w_-]+)\\/releases(\\/v?(?P<version>[\\d\\.]+))?\\/imports\\/(?P<file>[\\w_-]+)(.(?P<extension>[\\w_-]+))?$\",\n views.OntologyStatics.as_view(), {\"imports\": True}),\n\n url(r\"^(?P<ontology>[\\w_-]+)\\/releases(\\/v?(?P<version>[\\d\\.]+))?\\/(?P<file>[\\w_-]+)(.(?P<extension>[\\w_-]+))?$\",\n views.OntologyStatics.as_view()),\n\n url(r\"^(?P<ontology>[\\w_-]+)\\/imports\\/(?P<module_or_id>[\\w\\d_-]+)\",\n views.OntologyOverview.as_view(), {\"imports\": True}),\n\n url(r\"^(?P<ontology>[\\w_-]+)(/(?P<module_or_id>[\\w\\d_-]+))?\",\n views.OntologyOverview.as_view()),\n\n]\n", "path": "ontology/urls.py"}], "after_files": [{"content": "from django.shortcuts import render, HttpResponse, redirect, Http404\nfrom django.views import View\nfrom rdflib import Graph, RDFS\nfrom oeplatform.settings import ONTOLOGY_FOLDER\nfrom collections import OrderedDict\n\nimport os\nimport 
re\n\n\ndef collect_modules(path):\n modules = dict()\n for file in os.listdir(path):\n if not os.path.isdir(os.path.join(path,file)):\n match = re.match(\"^(?P<filename>.*)\\.(?P<extension>\\w+)$\", file)\n filename, extension = match.groups()\n if filename not in modules:\n modules[filename] = dict(extensions=[], comment=\"No description found\")\n if extension == \"owl\":\n g = Graph()\n g.parse(os.path.join(path, file))\n root = dict(g.namespaces())['']\n comments = g.objects(root, RDFS.comment)\n try:\n modules[filename][\"comment\"] = next(comments)\n except StopIteration:\n modules[filename][\"comment\"] = \"No description found\"\n modules[filename][\"extensions\"].append(extension)\n return modules\n\nclass OntologyOverview(View):\n def get(self, request, ontology, module_or_id=None, version=None, imports=False):\n if not os.path.exists(f\"{ONTOLOGY_FOLDER}/{ontology}\"):\n raise Http404\n versions = os.listdir(f\"{ONTOLOGY_FOLDER}/{ontology}\")\n if not version:\n version = max((d for d in versions), key=lambda d:[int(x) for x in d.split(\".\")])\n\n if \"text/html\" in request.headers.get(\"accept\",\"\").split(\",\"):\n main_module = collect_modules(f\"{ONTOLOGY_FOLDER}/{ontology}/{version}\")\n main_module_name = list(main_module.keys())[0]\n main_module = main_module[main_module_name]\n main_module[\"name\"] = main_module_name\n submodules = collect_modules(f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/modules\")\n # Collect all file names\n\n imports = collect_modules(f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/imports\")\n\n return render(request, \"ontology/oeo.html\", dict(\n ontology=ontology,\n version=version,\n main_module=main_module,\n submodules=submodules.items(),\n imports=imports.items()\n ))\n else:\n module_name = None\n if module_or_id:\n if imports:\n submodules = collect_modules(f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/imports\")\n else:\n submodules = collect_modules(f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/modules\")\n # If module_or_id is the name of a valid submodule, use this module\n if module_or_id in submodules:\n module_name = module_or_id\n if imports:\n return redirect(f\"/ontology/{ontology}/releases/{version}/imports/{module_name}.owl\")\n else:\n return redirect(f\"/ontology/{ontology}/releases/{version}/{module_name}.owl\")\n # If no module was requested or the requested id was not a module, serve main ontology\n if module_name is None:\n main_module = collect_modules(f\"{ONTOLOGY_FOLDER}/{ontology}/{version}\")\n module_name = list(main_module.keys())[0]\n return redirect(f\"/ontology/{ontology}/releases/{version}/{module_name}.owl\")\n\n\nclass OntologyStatics(View):\n def get(self, request, ontology, file, version=None, extension=None, imports=False):\n \"\"\"\n Returns the requested file `{file}.{extension}` of version `version`\n of ontology `ontology`\n\n :param version: default: highest version in folder\n :param extension: default: `.owl`\n :return:\n \"\"\"\n\n if not extension:\n extension = \"owl\"\n if not version:\n version = max((d for d in os.listdir(f\"{ONTOLOGY_FOLDER}/{ontology}\")), key=lambda d:[int(x) for x in d.split(\".\")])\n if imports:\n file_path = f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/imports/{file}.{extension}\"\n else:\n file_path = f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/{file}.{extension}\"\n if os.path.exists(file_path):\n with open(file_path, \"br\") as f:\n response = HttpResponse(f, content_type=\"application/rdf+xml; charset=utf-8\")\n response[\"Content-Disposition\"] = f'attachment; 
filename=\"{file}.{extension}\"'\n return response\n else:\n file_path = f\"{ONTOLOGY_FOLDER}/{ontology}/{version}/modules/{file}.{extension}\"\n if not os.path.exists(file_path):\n raise Http404\n with open(file_path, \"br\") as f:\n response = HttpResponse(f, content_type=\"application/rdf+xml; charset=utf-8\")\n response[\"Content-Disposition\"] = f'attachment; filename=\"{file}.{extension}\"'\n return response", "path": "ontology/views.py"}, {"content": "from django.conf.urls import url\nfrom django.conf.urls.static import static\nfrom django.views.generic import TemplateView, RedirectView\nfrom os import walk\nfrom ontology import views\nfrom oeplatform import settings\n\nurlpatterns = [\n url(r\"^$\", TemplateView.as_view(template_name=\"ontology/about.html\")),\n url(r\"^ontology/$\", TemplateView.as_view(template_name=\"ontology/about.html\")),\n url(r\"^oeo-steering-committee/$\",\n TemplateView.as_view(template_name=\"ontology/oeo-steering-committee.html\"),\n name=\"oeo-s-c\"),\n url(r\"^ontology/oeo-steering-committee/$\", TemplateView.as_view(template_name=\"ontology/oeo-steering-committee.html\")),\n url(r\"^(?P<ontology>[\\w_-]+)\\/releases(\\/v?(?P<version>[\\d\\.]+))?\\/imports\\/(?P<file>[\\w_-]+)(.(?P<extension>[\\w_-]+))?$\",\n views.OntologyStatics.as_view(), {\"imports\": True}),\n\n url(r\"^(?P<ontology>[\\w_-]+)\\/releases(\\/v?(?P<version>[\\d\\.]+))?\\/(?P<file>[\\w_-]+)(.(?P<extension>[\\w_-]+))?$\",\n views.OntologyStatics.as_view()),\n\n url(r\"^(?P<ontology>[\\w_-]+)\\/imports\\/(?P<module_or_id>[\\w\\d_-]+)\",\n views.OntologyOverview.as_view(), {\"imports\": True}),\n\n url(r\"^(?P<ontology>[\\w_-]+)(/(?P<module_or_id>[\\w\\d_-]+))?\",\n views.OntologyOverview.as_view()),\n\n]\n", "path": "ontology/urls.py"}]} | 1,912 | 377 |
gh_patches_debug_36747 | rasdani/github-patches | git_diff | CiviWiki__OpenCiviWiki-1037 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move forms from `api/forms.py` to the `accounts/forms.py`
## Idea summary
There are several account/profile forms defined in [`api/forms.py`](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/project/api/forms.py). Those forms should be moved to [`accounts/forms.py`](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/project/accounts/forms.py) or deleted if they are duplicate code.
**Update:** the code in `api/forms.py` is actually redundant, so may simply be deleted.
## Task
The steps to complete this task are:
- [x] [fork this repository](https://docs.github.com/en/get-started/quickstart/fork-a-repo) and clone it to your local computer
- [x] set up a local development environment as [outlined in our Contributing Guide](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/CONTRIBUTING.md#development)
- [x] delete the file `api/forms.py`
- [x] commit your changes
- [x] push your code to GitHub
- [x] [open a pull request](https://docs.github.com/en/github/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) against the `main` branch in this repository
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `project/api/forms.py`
Content:
```
1 from django import forms
2 from django.core.files.images import get_image_dimensions
3 from django.contrib.auth import get_user_model
4 from accounts.models import Profile
5
6
7 class UpdatePassword(forms.ModelForm):
8 """
9 Form for updating User Password
10 """
11
12 class Meta:
13 model = get_user_model()
14 fields = ["password", "verify"]
15
16 password = forms.CharField(
17 label="Password",
18 widget=forms.PasswordInput(
19 attrs={
20 "class": "form-control",
21 "placeholder": "Password",
22 "required": "True",
23 }
24 ),
25 )
26 verify = forms.CharField(
27 label="Password Verify",
28 widget=forms.PasswordInput(
29 attrs={
30 "class": "form-control",
31 "placeholder": "Password Verify",
32 "required": "True",
33 }
34 ),
35 help_text="Please retype your password.",
36 )
37
38 def clean(self):
39 """
40 Verifies that the passwords match
41 """
42 clean_data = super(UpdatePassword, self).clean()
43 if "password" in clean_data and "verify" in clean_data:
44 if clean_data["password"] != clean_data["verify"]:
45 raise forms.ValidationError("Passwords don't match.")
46 else:
47 raise forms.ValidationError("Both password fields need to be filled out.")
48 return clean_data
49
50
51 class UpdateProfileImage(forms.ModelForm):
52 """
53 Form for updating profile image
54 """
55
56 class Meta:
57 model = Profile
58 fields = ["profile_image"]
59
60 profile_image = forms.ImageField()
61
62 def clean_profile_image(self):
63 """
64 This function is used to make sure that profile images follow Civiwiki standards.
65
66 Requirements:
67 - Height cannot exceed 960px
68 - Width cannot exceed 1280px
69 - Image must be (jpg, jpeg, pjeg, png)
70 - File size cannot exceed 2MB
71 """
72 profile_image = self.cleaned_data["profile_image"]
73
74 try:
75 w, h = get_image_dimensions(profile_image)
76
77 # validate dimensions
78 max_height = 960
79 max_width = 1280
80 if w > max_width or h > max_height:
81 raise forms.ValidationError(
82 u"Please use an image that is {w} x {h} pixels or smaller.".format(
83 w=max_width, h=max_height
84 )
85 )
86
87 # validate content type
88 main, sub = profile_image.content_type.split("/")
89 if not (main == "image" and sub in ["jpg", "jpeg", "pjpeg", "png"]):
90 raise forms.ValidationError(u"Please use a JPEG or PNG image.")
91
92 # validate file size
93 if len(profile_image) > (2000 * 1024):
94 raise forms.ValidationError(
95 "Profile image file size may not exceed 2MB."
96 )
97
98 except AttributeError:
99 pass
100
101 return profile_image
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/project/api/forms.py b/project/api/forms.py
deleted file mode 100644
--- a/project/api/forms.py
+++ /dev/null
@@ -1,101 +0,0 @@
-from django import forms
-from django.core.files.images import get_image_dimensions
-from django.contrib.auth import get_user_model
-from accounts.models import Profile
-
-
-class UpdatePassword(forms.ModelForm):
- """
- Form for updating User Password
- """
-
- class Meta:
- model = get_user_model()
- fields = ["password", "verify"]
-
- password = forms.CharField(
- label="Password",
- widget=forms.PasswordInput(
- attrs={
- "class": "form-control",
- "placeholder": "Password",
- "required": "True",
- }
- ),
- )
- verify = forms.CharField(
- label="Password Verify",
- widget=forms.PasswordInput(
- attrs={
- "class": "form-control",
- "placeholder": "Password Verify",
- "required": "True",
- }
- ),
- help_text="Please retype your password.",
- )
-
- def clean(self):
- """
- Verifies that the passwords match
- """
- clean_data = super(UpdatePassword, self).clean()
- if "password" in clean_data and "verify" in clean_data:
- if clean_data["password"] != clean_data["verify"]:
- raise forms.ValidationError("Passwords don't match.")
- else:
- raise forms.ValidationError("Both password fields need to be filled out.")
- return clean_data
-
-
-class UpdateProfileImage(forms.ModelForm):
- """
- Form for updating profile image
- """
-
- class Meta:
- model = Profile
- fields = ["profile_image"]
-
- profile_image = forms.ImageField()
-
- def clean_profile_image(self):
- """
- This function is used to make sure that profile images follow Civiwiki standards.
-
- Requirements:
- - Height cannot exceed 960px
- - Width cannot exceed 1280px
- - Image must be (jpg, jpeg, pjeg, png)
- - File size cannot exceed 2MB
- """
- profile_image = self.cleaned_data["profile_image"]
-
- try:
- w, h = get_image_dimensions(profile_image)
-
- # validate dimensions
- max_height = 960
- max_width = 1280
- if w > max_width or h > max_height:
- raise forms.ValidationError(
- u"Please use an image that is {w} x {h} pixels or smaller.".format(
- w=max_width, h=max_height
- )
- )
-
- # validate content type
- main, sub = profile_image.content_type.split("/")
- if not (main == "image" and sub in ["jpg", "jpeg", "pjpeg", "png"]):
- raise forms.ValidationError(u"Please use a JPEG or PNG image.")
-
- # validate file size
- if len(profile_image) > (2000 * 1024):
- raise forms.ValidationError(
- "Profile image file size may not exceed 2MB."
- )
-
- except AttributeError:
- pass
-
- return profile_image
| {"golden_diff": "diff --git a/project/api/forms.py b/project/api/forms.py\ndeleted file mode 100644\n--- a/project/api/forms.py\n+++ /dev/null\n@@ -1,101 +0,0 @@\n-from django import forms\n-from django.core.files.images import get_image_dimensions\n-from django.contrib.auth import get_user_model\n-from accounts.models import Profile\n-\n-\n-class UpdatePassword(forms.ModelForm):\n- \"\"\"\n- Form for updating User Password\n- \"\"\"\n-\n- class Meta:\n- model = get_user_model()\n- fields = [\"password\", \"verify\"]\n-\n- password = forms.CharField(\n- label=\"Password\",\n- widget=forms.PasswordInput(\n- attrs={\n- \"class\": \"form-control\",\n- \"placeholder\": \"Password\",\n- \"required\": \"True\",\n- }\n- ),\n- )\n- verify = forms.CharField(\n- label=\"Password Verify\",\n- widget=forms.PasswordInput(\n- attrs={\n- \"class\": \"form-control\",\n- \"placeholder\": \"Password Verify\",\n- \"required\": \"True\",\n- }\n- ),\n- help_text=\"Please retype your password.\",\n- )\n-\n- def clean(self):\n- \"\"\"\n- Verifies that the passwords match\n- \"\"\"\n- clean_data = super(UpdatePassword, self).clean()\n- if \"password\" in clean_data and \"verify\" in clean_data:\n- if clean_data[\"password\"] != clean_data[\"verify\"]:\n- raise forms.ValidationError(\"Passwords don't match.\")\n- else:\n- raise forms.ValidationError(\"Both password fields need to be filled out.\")\n- return clean_data\n-\n-\n-class UpdateProfileImage(forms.ModelForm):\n- \"\"\"\n- Form for updating profile image\n- \"\"\"\n-\n- class Meta:\n- model = Profile\n- fields = [\"profile_image\"]\n-\n- profile_image = forms.ImageField()\n-\n- def clean_profile_image(self):\n- \"\"\"\n- This function is used to make sure that profile images follow Civiwiki standards.\n-\n- Requirements:\n- - Height cannot exceed 960px\n- - Width cannot exceed 1280px\n- - Image must be (jpg, jpeg, pjeg, png)\n- - File size cannot exceed 2MB\n- \"\"\"\n- profile_image = self.cleaned_data[\"profile_image\"]\n-\n- try:\n- w, h = get_image_dimensions(profile_image)\n-\n- # validate dimensions\n- max_height = 960\n- max_width = 1280\n- if w > max_width or h > max_height:\n- raise forms.ValidationError(\n- u\"Please use an image that is {w} x {h} pixels or smaller.\".format(\n- w=max_width, h=max_height\n- )\n- )\n-\n- # validate content type\n- main, sub = profile_image.content_type.split(\"/\")\n- if not (main == \"image\" and sub in [\"jpg\", \"jpeg\", \"pjpeg\", \"png\"]):\n- raise forms.ValidationError(u\"Please use a JPEG or PNG image.\")\n-\n- # validate file size\n- if len(profile_image) > (2000 * 1024):\n- raise forms.ValidationError(\n- \"Profile image file size may not exceed 2MB.\"\n- )\n-\n- except AttributeError:\n- pass\n-\n- return profile_image\n", "issue": "Move forms from `api/forms.py` to the `accounts/forms.py`\n## Idea summary\r\n\r\nThere are several account/profile forms defined in [`api/forms.py`](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/project/api/forms.py). 
Those forms should be moved to [`accounts/forms.py`](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/project/accounts/forms.py) or deleted if they are duplicate code.\r\n\r\n**Update:** the code in `api/forms.py` is actually redundant, so may simply be deleted.\r\n\r\n## Task\r\n\r\nThe steps to complete this task are:\r\n\r\n- [x] [fork this repository](https://docs.github.com/en/get-started/quickstart/fork-a-repo) and clone it to your local computer\r\n- [x] set up a local development environment as [outlined in our Contributing Guide](https://github.com/CiviWiki/OpenCiviWiki/blob/develop/CONTRIBUTING.md#development)\r\n- [x] delete the file `api/forms.py`\r\n- [x] commit your changes\r\n- [x] push your code to GitHub\r\n- [x] [open a pull request](https://docs.github.com/en/github/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) against the `main` branch in this repository\n", "before_files": [{"content": "from django import forms\nfrom django.core.files.images import get_image_dimensions\nfrom django.contrib.auth import get_user_model\nfrom accounts.models import Profile\n\n\nclass UpdatePassword(forms.ModelForm):\n \"\"\"\n Form for updating User Password\n \"\"\"\n\n class Meta:\n model = get_user_model()\n fields = [\"password\", \"verify\"]\n\n password = forms.CharField(\n label=\"Password\",\n widget=forms.PasswordInput(\n attrs={\n \"class\": \"form-control\",\n \"placeholder\": \"Password\",\n \"required\": \"True\",\n }\n ),\n )\n verify = forms.CharField(\n label=\"Password Verify\",\n widget=forms.PasswordInput(\n attrs={\n \"class\": \"form-control\",\n \"placeholder\": \"Password Verify\",\n \"required\": \"True\",\n }\n ),\n help_text=\"Please retype your password.\",\n )\n\n def clean(self):\n \"\"\"\n Verifies that the passwords match\n \"\"\"\n clean_data = super(UpdatePassword, self).clean()\n if \"password\" in clean_data and \"verify\" in clean_data:\n if clean_data[\"password\"] != clean_data[\"verify\"]:\n raise forms.ValidationError(\"Passwords don't match.\")\n else:\n raise forms.ValidationError(\"Both password fields need to be filled out.\")\n return clean_data\n\n\nclass UpdateProfileImage(forms.ModelForm):\n \"\"\"\n Form for updating profile image\n \"\"\"\n\n class Meta:\n model = Profile\n fields = [\"profile_image\"]\n\n profile_image = forms.ImageField()\n\n def clean_profile_image(self):\n \"\"\"\n This function is used to make sure that profile images follow Civiwiki standards.\n\n Requirements:\n - Height cannot exceed 960px\n - Width cannot exceed 1280px\n - Image must be (jpg, jpeg, pjeg, png)\n - File size cannot exceed 2MB\n \"\"\"\n profile_image = self.cleaned_data[\"profile_image\"]\n\n try:\n w, h = get_image_dimensions(profile_image)\n\n # validate dimensions\n max_height = 960\n max_width = 1280\n if w > max_width or h > max_height:\n raise forms.ValidationError(\n u\"Please use an image that is {w} x {h} pixels or smaller.\".format(\n w=max_width, h=max_height\n )\n )\n\n # validate content type\n main, sub = profile_image.content_type.split(\"/\")\n if not (main == \"image\" and sub in [\"jpg\", \"jpeg\", \"pjpeg\", \"png\"]):\n raise forms.ValidationError(u\"Please use a JPEG or PNG image.\")\n\n # validate file size\n if len(profile_image) > (2000 * 1024):\n raise forms.ValidationError(\n \"Profile image file size may not exceed 2MB.\"\n )\n\n except AttributeError:\n pass\n\n return profile_image\n", "path": "project/api/forms.py"}], "after_files": [{"content": null, 
"path": "project/api/forms.py"}]} | 1,357 | 745 |
gh_patches_debug_1240 | rasdani/github-patches | git_diff | mindsdb__lightwood-603 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
:wrench: Add default logging level environment variable
## Task
Add a `LIGHTWOOD_LOG` environment variable that controls the default logging level for lightwood. It should accept `DEBUG`, `INFO`, `WARNING`, `ERROR`, and `CRITICAL` as values. The logger lightwood uses is declared and exported [here](https://github.com/mindsdb/lightwood/blob/stable/lightwood/helpers/log.py).
## Steps :male_detective: :female_detective:
- Fork the Lightwood repository, checkout the `staging` branch and from it create a new one.
- Implement the necessary changes.
- Check that only the appropriate logs are getting through. For this, you can run any of the integration tests, like [`test_boston_housing`](https://github.com/mindsdb/lightwood/blob/stable/tests/integration/basic/test_boston_housing.py), and analyze the output.
- Make the PR and address any comments that reviewers might make.
## Additional rewards :1st_place_medal:
Each documentation PR brings :one: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lightwood/helpers/log.py`
Content:
```
1 import logging
2 import os
3
4
5 def initialize_log():
6 pid = os.getpid()
7 logging.basicConfig()
8 log = logging.getLogger(f'lightwood-{pid}')
9 log.setLevel(logging.DEBUG)
10 return log
11
12
13 log = initialize_log()
14
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lightwood/helpers/log.py b/lightwood/helpers/log.py
--- a/lightwood/helpers/log.py
+++ b/lightwood/helpers/log.py
@@ -6,7 +6,8 @@
pid = os.getpid()
logging.basicConfig()
log = logging.getLogger(f'lightwood-{pid}')
- log.setLevel(logging.DEBUG)
+ log_level = os.environ.get('LIGHTWOOD_LOG', 'DEBUG')
+ log.setLevel(log_level)
return log
| {"golden_diff": "diff --git a/lightwood/helpers/log.py b/lightwood/helpers/log.py\n--- a/lightwood/helpers/log.py\n+++ b/lightwood/helpers/log.py\n@@ -6,7 +6,8 @@\n pid = os.getpid()\n logging.basicConfig()\n log = logging.getLogger(f'lightwood-{pid}')\n- log.setLevel(logging.DEBUG)\n+ log_level = os.environ.get('LIGHTWOOD_LOG', 'DEBUG')\n+ log.setLevel(log_level)\n return log\n", "issue": ":wrench: Add default logging level environment variable\n## Task\r\n\r\nAdd a `LIGHTWOOD_LOG` environment variable that controls the default logging level for lightwood. It should be possible to set values for it so that `DEBUG`, `INFO`, `WARNING`, `ERROR` and `CRITICAL` are all possible options. The logger lightwood uses is declared and exported [here](https://github.com/mindsdb/lightwood/blob/stable/lightwood/helpers/log.py).\r\n\r\n## Steps :male_detective: :female_detective: \r\n\r\n- Fork the Lightwood repository, checkout the `staging` branch and from it create a new one.\r\n- Implement the necessary changes.\r\n- Check that only the appropriate logs are getting through. For this, you can run any of the integration tests, like [`test_boston_housing`](https://github.com/mindsdb/lightwood/blob/stable/tests/integration/basic/test_boston_housing.py), and analyze the output.\r\n- Make the PR and address any comments that reviewers might make.\r\n\r\n## Additional rewards :1st_place_medal: \r\n\r\nEach documentation PR brings :one: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/\n", "before_files": [{"content": "import logging\nimport os\n\n\ndef initialize_log():\n pid = os.getpid()\n logging.basicConfig()\n log = logging.getLogger(f'lightwood-{pid}')\n log.setLevel(logging.DEBUG)\n return log\n\n\nlog = initialize_log()\n", "path": "lightwood/helpers/log.py"}], "after_files": [{"content": "import logging\nimport os\n\n\ndef initialize_log():\n pid = os.getpid()\n logging.basicConfig()\n log = logging.getLogger(f'lightwood-{pid}')\n log_level = os.environ.get('LIGHTWOOD_LOG', 'DEBUG')\n log.setLevel(log_level)\n return log\n\n\nlog = initialize_log()\n", "path": "lightwood/helpers/log.py"}]} | 615 | 99 |
gh_patches_debug_2089 | rasdani/github-patches | git_diff | OpenMined__PySyft-4708 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Windows to CI
## Description
Add windows to the CI tests as a separate step for say python 3.8 and torch==1.6.0 initially just to get things working. Then if it works expand to all versions to see any potential issues.
## Definition of Done
This ticket is done when we know what does and doesn't run on Windows in CI from the current "fast" tests and the new "slow" tests. Post a screenshot and link to CI here when it's running.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/syft/lib/torch/__init__.py`
Content:
```
1 # stdlib
2 from typing import Dict
3 from typing import Union
4
5 # third party
6 from packaging import version
7 import torch
8
9 # syft relative
10 from . import parameter # noqa: 401
11 from . import uppercase_tensor # noqa: 401
12 from ...ast.globals import Globals
13 from .allowlist import allowlist
14
15 TORCH_VERSION = version.parse(torch.__version__)
16
17
18 def get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:
19 if isinstance(support_dict, str):
20 return support_dict
21 else:
22 return support_dict["return_type"]
23
24
25 def version_supported(support_dict: Union[str, Dict[str, str]]) -> bool:
26 if isinstance(support_dict, str):
27 return True
28 else:
29 return TORCH_VERSION >= version.parse(support_dict["min_version"])
30
31
32 def create_torch_ast() -> Globals:
33 ast = Globals()
34
35 # most methods work in all versions and have a single return type
36 # for the more complicated ones we pass a dict with keys like return_type and
37 # min_version
38 for method, return_type_name_or_dict in allowlist.items():
39 if version_supported(support_dict=return_type_name_or_dict):
40 return_type = get_return_type(support_dict=return_type_name_or_dict)
41 if return_type == "unknown":
42 # this allows us to import them for testing
43 continue
44 ast.add_path(
45 path=method, framework_reference=torch, return_type_name=return_type
46 )
47 # add all the torch.nn.Parameter hooks
48 if method.startswith("torch.Tensor."):
49 method = method.replace("torch.Tensor.", "torch.nn.Parameter.")
50 return_type = return_type.replace("torch.Tensor", "torch.nn.Parameter")
51 ast.add_path(
52 path=method, framework_reference=torch, return_type_name=return_type
53 )
54 else:
55 print(f"Skipping torch.{method} not supported in {TORCH_VERSION}")
56
57 for klass in ast.classes:
58 klass.create_pointer_class()
59 klass.create_send_method()
60 klass.create_serialization_methods()
61 klass.create_storable_object_attr_convenience_methods()
62 return ast
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/syft/lib/torch/__init__.py b/src/syft/lib/torch/__init__.py
--- a/src/syft/lib/torch/__init__.py
+++ b/src/syft/lib/torch/__init__.py
@@ -12,7 +12,7 @@
from ...ast.globals import Globals
from .allowlist import allowlist
-TORCH_VERSION = version.parse(torch.__version__)
+TORCH_VERSION = version.parse(torch.__version__.split("+")[0])
def get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:
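
A minimal sketch of why the split matters: Windows CPU wheels report local version identifiers such as `1.6.0+cpu` (the exact suffix below is an assumed example, not taken from CI output), and stripping the local segment keeps the minimum-version comparisons against the allowlist simple:

```python
from packaging import version

raw = "1.6.0+cpu"  # stand-in for torch.__version__ on a Windows CPU build
TORCH_VERSION = version.parse(raw.split("+")[0])

assert TORCH_VERSION == version.parse("1.6.0")
assert TORCH_VERSION >= version.parse("1.5.0")
```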
| {"golden_diff": "diff --git a/src/syft/lib/torch/__init__.py b/src/syft/lib/torch/__init__.py\n--- a/src/syft/lib/torch/__init__.py\n+++ b/src/syft/lib/torch/__init__.py\n@@ -12,7 +12,7 @@\n from ...ast.globals import Globals\n from .allowlist import allowlist\n \n-TORCH_VERSION = version.parse(torch.__version__)\n+TORCH_VERSION = version.parse(torch.__version__.split(\"+\")[0])\n \n \n def get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:\n", "issue": "Add Windows to CI\n## Description\r\nAdd windows to the CI tests as a separate step for say python 3.8 and torch==1.6.0 initially just to get things working. Then if it works expand to all versions to see any potential issues.\r\n\r\n## Definition of Done\r\nThis ticket is done when we know what does and doesn't run on Windows in CI from the current \"fast\" tests and the new \"slow\" tests. Post a screenshot and link to CI here when it's running.\n", "before_files": [{"content": "# stdlib\nfrom typing import Dict\nfrom typing import Union\n\n# third party\nfrom packaging import version\nimport torch\n\n# syft relative\nfrom . import parameter # noqa: 401\nfrom . import uppercase_tensor # noqa: 401\nfrom ...ast.globals import Globals\nfrom .allowlist import allowlist\n\nTORCH_VERSION = version.parse(torch.__version__)\n\n\ndef get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:\n if isinstance(support_dict, str):\n return support_dict\n else:\n return support_dict[\"return_type\"]\n\n\ndef version_supported(support_dict: Union[str, Dict[str, str]]) -> bool:\n if isinstance(support_dict, str):\n return True\n else:\n return TORCH_VERSION >= version.parse(support_dict[\"min_version\"])\n\n\ndef create_torch_ast() -> Globals:\n ast = Globals()\n\n # most methods work in all versions and have a single return type\n # for the more complicated ones we pass a dict with keys like return_type and\n # min_version\n for method, return_type_name_or_dict in allowlist.items():\n if version_supported(support_dict=return_type_name_or_dict):\n return_type = get_return_type(support_dict=return_type_name_or_dict)\n if return_type == \"unknown\":\n # this allows us to import them for testing\n continue\n ast.add_path(\n path=method, framework_reference=torch, return_type_name=return_type\n )\n # add all the torch.nn.Parameter hooks\n if method.startswith(\"torch.Tensor.\"):\n method = method.replace(\"torch.Tensor.\", \"torch.nn.Parameter.\")\n return_type = return_type.replace(\"torch.Tensor\", \"torch.nn.Parameter\")\n ast.add_path(\n path=method, framework_reference=torch, return_type_name=return_type\n )\n else:\n print(f\"Skipping torch.{method} not supported in {TORCH_VERSION}\")\n\n for klass in ast.classes:\n klass.create_pointer_class()\n klass.create_send_method()\n klass.create_serialization_methods()\n klass.create_storable_object_attr_convenience_methods()\n return ast\n", "path": "src/syft/lib/torch/__init__.py"}], "after_files": [{"content": "# stdlib\nfrom typing import Dict\nfrom typing import Union\n\n# third party\nfrom packaging import version\nimport torch\n\n# syft relative\nfrom . import parameter # noqa: 401\nfrom . 
import uppercase_tensor # noqa: 401\nfrom ...ast.globals import Globals\nfrom .allowlist import allowlist\n\nTORCH_VERSION = version.parse(torch.__version__.split(\"+\")[0])\n\n\ndef get_return_type(support_dict: Union[str, Dict[str, str]]) -> str:\n if isinstance(support_dict, str):\n return support_dict\n else:\n return support_dict[\"return_type\"]\n\n\ndef version_supported(support_dict: Union[str, Dict[str, str]]) -> bool:\n if isinstance(support_dict, str):\n return True\n else:\n return TORCH_VERSION >= version.parse(support_dict[\"min_version\"])\n\n\ndef create_torch_ast() -> Globals:\n ast = Globals()\n\n # most methods work in all versions and have a single return type\n # for the more complicated ones we pass a dict with keys like return_type and\n # min_version\n for method, return_type_name_or_dict in allowlist.items():\n if version_supported(support_dict=return_type_name_or_dict):\n return_type = get_return_type(support_dict=return_type_name_or_dict)\n if return_type == \"unknown\":\n # this allows us to import them for testing\n continue\n ast.add_path(\n path=method, framework_reference=torch, return_type_name=return_type\n )\n # add all the torch.nn.Parameter hooks\n if method.startswith(\"torch.Tensor.\"):\n method = method.replace(\"torch.Tensor.\", \"torch.nn.Parameter.\")\n return_type = return_type.replace(\"torch.Tensor\", \"torch.nn.Parameter\")\n ast.add_path(\n path=method, framework_reference=torch, return_type_name=return_type\n )\n else:\n print(f\"Skipping torch.{method} not supported in {TORCH_VERSION}\")\n\n for klass in ast.classes:\n klass.create_pointer_class()\n klass.create_send_method()\n klass.create_serialization_methods()\n klass.create_storable_object_attr_convenience_methods()\n return ast\n", "path": "src/syft/lib/torch/__init__.py"}]} | 949 | 131 |
gh_patches_debug_37129 | rasdani/github-patches | git_diff | streamlink__streamlink-2912 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] CDNBG - multiple issues
## Bug Report
1. BITelevision should be removed from the plugin and/or wiki/info pages, as it no longer exists.
2. Inlife.bg shouldn't be listed as supported under CDNbg.
3. Tvbulgare.bg should be listed as supported in inlife.bg's place (latter shares the former's stream)
4. Mu-vi.tv gives an error.
5. CDNBG should cover VTK - the national military channel
6. Kanal3's livestream is not found.
7. CDNBG should cover Cherno More - the regional channel for Varna, Bulgaria.
### Reproduction steps / Explicit stream URLs to test
1. https://bitelevision.com/ is not a thing anymore.
2. Inlife.bg can't be opened and shouldn't be listed - it is a 'media partner' that restreams https://tvbulgare.bg/, which could be put in as a replacement for it.
3. https://tvbulgare.bg/ - No playable streams found.
4. http://mu-vi.tv/LiveStreams/pages/Live.aspx - Error: Unable to open URL.
5. https://www.armymedia.bg/
6. https://kanal3.bg/live
7. https://www.chernomore.bg/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/cdnbg.py`
Content:
```
1 import logging
2 import re
3
4 from streamlink.compat import urlparse
5 from streamlink.plugin import Plugin
6 from streamlink.plugin.api import useragents
7 from streamlink.plugin.api import validate
8 from streamlink.stream import HLSStream
9 from streamlink.utils import update_scheme
10
11 log = logging.getLogger(__name__)
12
13
14 class CDNBG(Plugin):
15 url_re = re.compile(r"""
16 https?://(?:www\.)?(?:
17 tv\.bnt\.bg/\w+(?:/\w+)?|
18 bitelevision\.com/live|
19 nova\.bg/live|
20 kanal3\.bg/live|
21 bgonair\.bg/tvonline|
22 inlife\.bg|
23 mmtvmusic\.com/live|
24 mu-vi\.tv/LiveStreams/pages/Live\.aspx|
25 videochanel\.bstv\.bg|
26 live\.bstv\.bg|
27 bloombergtv.bg/video
28 )/?
29 """, re.VERBOSE)
30 iframe_re = re.compile(r"iframe .*?src=\"((?:https?(?::|:))?//(?:\w+\.)?cdn.bg/live[^\"]+)\"", re.DOTALL)
31 sdata_re = re.compile(r"sdata\.src.*?=.*?(?P<q>[\"'])(?P<url>http.*?)(?P=q)")
32 hls_file_re = re.compile(r"(src|file): (?P<q>[\"'])(?P<url>(https?:)?//.+?m3u8.*?)(?P=q)")
33 hls_src_re = re.compile(r"video src=(?P<url>http[^ ]+m3u8[^ ]*)")
34
35 stream_schema = validate.Schema(
36 validate.any(
37 validate.all(validate.transform(sdata_re.search), validate.get("url")),
38 validate.all(validate.transform(hls_file_re.search), validate.get("url")),
39 validate.all(validate.transform(hls_src_re.search), validate.get("url")),
40 )
41 )
42
43 @classmethod
44 def can_handle_url(cls, url):
45 return cls.url_re.match(url) is not None
46
47 def find_iframe(self, res):
48 p = urlparse(self.url)
49 for url in self.iframe_re.findall(res.text):
50 if "googletagmanager" not in url:
51 url = url.replace(":", ":")
52 if url.startswith("//"):
53 return "{0}:{1}".format(p.scheme, url)
54 else:
55 return url
56
57 def _get_streams(self):
58 self.session.http.headers.update({"User-Agent": useragents.CHROME})
59 res = self.session.http.get(self.url)
60 iframe_url = self.find_iframe(res)
61
62 if iframe_url:
63 log.debug("Found iframe: {0}", iframe_url)
64 res = self.session.http.get(iframe_url, headers={"Referer": self.url})
65 stream_url = update_scheme(self.url, self.stream_schema.validate(res.text))
66 log.warning("SSL Verification disabled.")
67 return HLSStream.parse_variant_playlist(self.session,
68 stream_url,
69 verify=False)
70
71
72 __plugin__ = CDNBG
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/cdnbg.py b/src/streamlink/plugins/cdnbg.py
--- a/src/streamlink/plugins/cdnbg.py
+++ b/src/streamlink/plugins/cdnbg.py
@@ -15,16 +15,15 @@
url_re = re.compile(r"""
https?://(?:www\.)?(?:
tv\.bnt\.bg/\w+(?:/\w+)?|
- bitelevision\.com/live|
nova\.bg/live|
- kanal3\.bg/live|
bgonair\.bg/tvonline|
- inlife\.bg|
mmtvmusic\.com/live|
mu-vi\.tv/LiveStreams/pages/Live\.aspx|
- videochanel\.bstv\.bg|
live\.bstv\.bg|
- bloombergtv.bg/video
+ bloombergtv.bg/video|
+ armymedia.bg|
+ chernomore.bg|
+ i.cdn.bg/live/
)/?
""", re.VERBOSE)
iframe_re = re.compile(r"iframe .*?src=\"((?:https?(?::|:))?//(?:\w+\.)?cdn.bg/live[^\"]+)\"", re.DOTALL)
@@ -44,23 +43,26 @@
def can_handle_url(cls, url):
return cls.url_re.match(url) is not None
- def find_iframe(self, res):
- p = urlparse(self.url)
- for url in self.iframe_re.findall(res.text):
- if "googletagmanager" not in url:
- url = url.replace(":", ":")
- if url.startswith("//"):
- return "{0}:{1}".format(p.scheme, url)
+ def find_iframe(self, url):
+ self.session.http.headers.update({"User-Agent": useragents.CHROME})
+ res = self.session.http.get(self.url)
+ p = urlparse(url)
+ for iframe_url in self.iframe_re.findall(res.text):
+ if "googletagmanager" not in iframe_url:
+ log.debug("Found iframe: {0}", iframe_url)
+ iframe_url = iframe_url.replace(":", ":")
+ if iframe_url.startswith("//"):
+ return "{0}:{1}".format(p.scheme, iframe_url)
else:
- return url
+ return iframe_url
def _get_streams(self):
- self.session.http.headers.update({"User-Agent": useragents.CHROME})
- res = self.session.http.get(self.url)
- iframe_url = self.find_iframe(res)
+ if "i.cdn.bg/live/" in self.url:
+ iframe_url = self.url
+ else:
+ iframe_url = self.find_iframe(self.url)
if iframe_url:
- log.debug("Found iframe: {0}", iframe_url)
res = self.session.http.get(iframe_url, headers={"Referer": self.url})
stream_url = update_scheme(self.url, self.stream_schema.validate(res.text))
log.warning("SSL Verification disabled.")
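
A small sanity check for the new URL coverage, using a trimmed-down copy of the patched `url_re` (only the added alternatives are shown here):

```python
import re

# Trimmed sketch of the patched pattern; the real plugin lists more hosts.
url_re = re.compile(r"""
    https?://(?:www\.)?(?:
        armymedia.bg|
        chernomore.bg|
        i.cdn.bg/live/
    )/?
""", re.VERBOSE)

for url in ("https://www.armymedia.bg/", "https://www.chernomore.bg/"):
    assert url_re.match(url) is not None
```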
| {"golden_diff": "diff --git a/src/streamlink/plugins/cdnbg.py b/src/streamlink/plugins/cdnbg.py\n--- a/src/streamlink/plugins/cdnbg.py\n+++ b/src/streamlink/plugins/cdnbg.py\n@@ -15,16 +15,15 @@\n url_re = re.compile(r\"\"\"\n https?://(?:www\\.)?(?:\n tv\\.bnt\\.bg/\\w+(?:/\\w+)?|\n- bitelevision\\.com/live|\n nova\\.bg/live|\n- kanal3\\.bg/live|\n bgonair\\.bg/tvonline|\n- inlife\\.bg|\n mmtvmusic\\.com/live|\n mu-vi\\.tv/LiveStreams/pages/Live\\.aspx|\n- videochanel\\.bstv\\.bg|\n live\\.bstv\\.bg|\n- bloombergtv.bg/video\n+ bloombergtv.bg/video|\n+ armymedia.bg|\n+ chernomore.bg|\n+ i.cdn.bg/live/\n )/?\n \"\"\", re.VERBOSE)\n iframe_re = re.compile(r\"iframe .*?src=\\\"((?:https?(?::|:))?//(?:\\w+\\.)?cdn.bg/live[^\\\"]+)\\\"\", re.DOTALL)\n@@ -44,23 +43,26 @@\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n \n- def find_iframe(self, res):\n- p = urlparse(self.url)\n- for url in self.iframe_re.findall(res.text):\n- if \"googletagmanager\" not in url:\n- url = url.replace(\":\", \":\")\n- if url.startswith(\"//\"):\n- return \"{0}:{1}\".format(p.scheme, url)\n+ def find_iframe(self, url):\n+ self.session.http.headers.update({\"User-Agent\": useragents.CHROME})\n+ res = self.session.http.get(self.url)\n+ p = urlparse(url)\n+ for iframe_url in self.iframe_re.findall(res.text):\n+ if \"googletagmanager\" not in iframe_url:\n+ log.debug(\"Found iframe: {0}\", iframe_url)\n+ iframe_url = iframe_url.replace(\":\", \":\")\n+ if iframe_url.startswith(\"//\"):\n+ return \"{0}:{1}\".format(p.scheme, iframe_url)\n else:\n- return url\n+ return iframe_url\n \n def _get_streams(self):\n- self.session.http.headers.update({\"User-Agent\": useragents.CHROME})\n- res = self.session.http.get(self.url)\n- iframe_url = self.find_iframe(res)\n+ if \"i.cdn.bg/live/\" in self.url:\n+ iframe_url = self.url\n+ else:\n+ iframe_url = self.find_iframe(self.url)\n \n if iframe_url:\n- log.debug(\"Found iframe: {0}\", iframe_url)\n res = self.session.http.get(iframe_url, headers={\"Referer\": self.url})\n stream_url = update_scheme(self.url, self.stream_schema.validate(res.text))\n log.warning(\"SSL Verification disabled.\")\n", "issue": "[bug] CDNBG - multiple issues\n## Bug Report\r\n1. BITelevision should be removed from the plugin and/or wiki/info pages, as it no longer exists.\r\n2. Inlife.bg shouldn't be listed as supported under CDNbg.\r\n3. Tvbulgare.bg should be listed as supported in inlife.bg's place (latter shares the former's stream)\r\n4. Mu-vi.tv gives an error.\r\n5. CDNBG should cover VTK - the national military channel\r\n6. Kanal3's livestream is not found.\r\n7. CDNBG should cover Cherno More - the regional channel for Varna, Bulgaria.\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n1. https://bitelevision.com/ is not a thing anymore.\r\n2. Inlife.bg can't be opened and shouldn't be listed - it is a 'media partner' that restreams https://tvbulgare.bg/, which could be put in as a replacement for it.\r\n3. https://tvbulgare.bg/ - No playable streams found.\r\n4. http://mu-vi.tv/LiveStreams/pages/Live.aspx - Error: Unable to open URL.\r\n5. https://www.armymedia.bg/\r\n6. https://kanal3.bg/live\r\n7. 
https://www.chernomore.bg/\n", "before_files": [{"content": "import logging\nimport re\n\nfrom streamlink.compat import urlparse\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import useragents\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import update_scheme\n\nlog = logging.getLogger(__name__)\n\n\nclass CDNBG(Plugin):\n url_re = re.compile(r\"\"\"\n https?://(?:www\\.)?(?:\n tv\\.bnt\\.bg/\\w+(?:/\\w+)?|\n bitelevision\\.com/live|\n nova\\.bg/live|\n kanal3\\.bg/live|\n bgonair\\.bg/tvonline|\n inlife\\.bg|\n mmtvmusic\\.com/live|\n mu-vi\\.tv/LiveStreams/pages/Live\\.aspx|\n videochanel\\.bstv\\.bg|\n live\\.bstv\\.bg|\n bloombergtv.bg/video\n )/?\n \"\"\", re.VERBOSE)\n iframe_re = re.compile(r\"iframe .*?src=\\\"((?:https?(?::|:))?//(?:\\w+\\.)?cdn.bg/live[^\\\"]+)\\\"\", re.DOTALL)\n sdata_re = re.compile(r\"sdata\\.src.*?=.*?(?P<q>[\\\"'])(?P<url>http.*?)(?P=q)\")\n hls_file_re = re.compile(r\"(src|file): (?P<q>[\\\"'])(?P<url>(https?:)?//.+?m3u8.*?)(?P=q)\")\n hls_src_re = re.compile(r\"video src=(?P<url>http[^ ]+m3u8[^ ]*)\")\n\n stream_schema = validate.Schema(\n validate.any(\n validate.all(validate.transform(sdata_re.search), validate.get(\"url\")),\n validate.all(validate.transform(hls_file_re.search), validate.get(\"url\")),\n validate.all(validate.transform(hls_src_re.search), validate.get(\"url\")),\n )\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def find_iframe(self, res):\n p = urlparse(self.url)\n for url in self.iframe_re.findall(res.text):\n if \"googletagmanager\" not in url:\n url = url.replace(\":\", \":\")\n if url.startswith(\"//\"):\n return \"{0}:{1}\".format(p.scheme, url)\n else:\n return url\n\n def _get_streams(self):\n self.session.http.headers.update({\"User-Agent\": useragents.CHROME})\n res = self.session.http.get(self.url)\n iframe_url = self.find_iframe(res)\n\n if iframe_url:\n log.debug(\"Found iframe: {0}\", iframe_url)\n res = self.session.http.get(iframe_url, headers={\"Referer\": self.url})\n stream_url = update_scheme(self.url, self.stream_schema.validate(res.text))\n log.warning(\"SSL Verification disabled.\")\n return HLSStream.parse_variant_playlist(self.session,\n stream_url,\n verify=False)\n\n\n__plugin__ = CDNBG\n", "path": "src/streamlink/plugins/cdnbg.py"}], "after_files": [{"content": "import logging\nimport re\n\nfrom streamlink.compat import urlparse\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import useragents\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import update_scheme\n\nlog = logging.getLogger(__name__)\n\n\nclass CDNBG(Plugin):\n url_re = re.compile(r\"\"\"\n https?://(?:www\\.)?(?:\n tv\\.bnt\\.bg/\\w+(?:/\\w+)?|\n nova\\.bg/live|\n bgonair\\.bg/tvonline|\n mmtvmusic\\.com/live|\n mu-vi\\.tv/LiveStreams/pages/Live\\.aspx|\n live\\.bstv\\.bg|\n bloombergtv.bg/video|\n armymedia.bg|\n chernomore.bg|\n i.cdn.bg/live/\n )/?\n \"\"\", re.VERBOSE)\n iframe_re = re.compile(r\"iframe .*?src=\\\"((?:https?(?::|:))?//(?:\\w+\\.)?cdn.bg/live[^\\\"]+)\\\"\", re.DOTALL)\n sdata_re = re.compile(r\"sdata\\.src.*?=.*?(?P<q>[\\\"'])(?P<url>http.*?)(?P=q)\")\n hls_file_re = re.compile(r\"(src|file): (?P<q>[\\\"'])(?P<url>(https?:)?//.+?m3u8.*?)(?P=q)\")\n hls_src_re = re.compile(r\"video src=(?P<url>http[^ ]+m3u8[^ ]*)\")\n\n stream_schema = validate.Schema(\n validate.any(\n 
validate.all(validate.transform(sdata_re.search), validate.get(\"url\")),\n validate.all(validate.transform(hls_file_re.search), validate.get(\"url\")),\n validate.all(validate.transform(hls_src_re.search), validate.get(\"url\")),\n )\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def find_iframe(self, url):\n self.session.http.headers.update({\"User-Agent\": useragents.CHROME})\n res = self.session.http.get(self.url)\n p = urlparse(url)\n for iframe_url in self.iframe_re.findall(res.text):\n if \"googletagmanager\" not in iframe_url:\n log.debug(\"Found iframe: {0}\", iframe_url)\n iframe_url = iframe_url.replace(\":\", \":\")\n if iframe_url.startswith(\"//\"):\n return \"{0}:{1}\".format(p.scheme, iframe_url)\n else:\n return iframe_url\n\n def _get_streams(self):\n if \"i.cdn.bg/live/\" in self.url:\n iframe_url = self.url\n else:\n iframe_url = self.find_iframe(self.url)\n\n if iframe_url:\n res = self.session.http.get(iframe_url, headers={\"Referer\": self.url})\n stream_url = update_scheme(self.url, self.stream_schema.validate(res.text))\n log.warning(\"SSL Verification disabled.\")\n return HLSStream.parse_variant_playlist(self.session,\n stream_url,\n verify=False)\n\n\n__plugin__ = CDNBG\n", "path": "src/streamlink/plugins/cdnbg.py"}]} | 1,335 | 670 |
gh_patches_debug_23260 | rasdani/github-patches | git_diff | beeware__toga-1637 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
False positive "Multiple candidiate toga backends"
**Describe the bug**
When using a development version of *toga*, a `RuntimeError` of multiple candidate toga backends is raised with the same backend. Stacktrace example:
```
Traceback (most recent call last):
File "C:\Users\sagi\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\sagi\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\sagi\PycharmProjects\toga\examples\canvas\canvas\__main__.py", line 4, in <module>
main().main_loop()
File "C:\Users\sagi\PycharmProjects\toga\examples\canvas\canvas\app.py", line 565, in main
return ExampleCanvasApp('Canvas', 'org.beeware.widgets.canvas')
File "C:\Users\sagi\PycharmProjects\toga\src\core\src\toga\app.py", line 308, in __init__
self.factory = get_platform_factory()
File "C:\Users\sagi\PycharmProjects\toga\src\core\src\toga\platform.py", line 106, in get_platform_factory
raise RuntimeError(
RuntimeError: Multiple candidiate toga backends found: ('toga_winforms' (windows), 'toga_winforms' (windows)). Uninstall the backends you don't require, or use TOGA_BACKEND to specify a backend.
```
**To Reproduce**
Steps to reproduce the behavior:
1. Open a new virtual environment
2. Go to src\core
3. Run `pip install -e .`
4. Go to src\winforms
5. Run `pip install -e .`
6. Go to example\canvas (or any other example application)
7. Run `python -m canvas`
**Expected behavior**
`RuntimeError` should not be raised in that case.
**Environment:**
- Operating System: Windows
- Python version: 3.10.6
- Software versions:
- Briefcase: latest
- Toga: latest
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/core/src/toga/platform.py`
Content:
```
1 import importlib
2 import os
3 import sys
4 import warnings
5 from functools import lru_cache
6
7 try:
8 # Usually, the pattern is "import module; if it doesn't exist,
9 # import the shim". However, we need the 3.10 API for entry_points,
10 # as the 3.8 didn't support the `groups` argument to entry_points.
11 # Therefore, we try to import the compatibility shim first; and fall
12 # back to the stdlib module if the shim isn't there.
13 from importlib_metadata import entry_points
14 except ImportError:
15 from importlib.metadata import entry_points
16
17
18 # Map python sys.platform with toga platforms names
19 _TOGA_PLATFORMS = {
20 'android': 'android',
21 'darwin': 'macOS',
22 'ios': 'iOS',
23 'linux': 'linux',
24 'tvos': 'tvOS',
25 'watchos': 'watchOS',
26 'wearos': 'wearOS',
27 'emscripten': 'web',
28 'win32': 'windows',
29 }
30
31
32 try:
33 current_platform = os.environ['TOGA_PLATFORM']
34 except KeyError:
35 # Rely on `sys.getandroidapilevel`, which only exists on Android; see
36 # https://github.com/beeware/Python-Android-support/issues/8
37 if hasattr(sys, 'getandroidapilevel'):
38 current_platform = 'android'
39 else:
40 current_platform = _TOGA_PLATFORMS.get(sys.platform)
41
42
43 @lru_cache(maxsize=1)
44 def get_platform_factory(factory=None):
45 """ This function figures out what the current host platform is and
46 imports the adequate factory. The factory is the interface to all platform
47 specific implementations.
48
49 If the TOGA_BACKEND environment variable is set, the factory will be loaded
50 from that module.
51
52 Returns: The suitable factory for the current host platform.
53
54 Raises:
55 RuntimeError: If no supported host platform can be identified.
56 """
57
58 ######################################################################
59 # 2022-09: Backwards compatibility
60 ######################################################################
61 # factory no longer used
62 if factory:
63 warnings.warn("The factory argument is no longer used.", DeprecationWarning)
64 ######################################################################
65 # End backwards compatibility.
66 ######################################################################
67
68 toga_backends = entry_points(group='toga.backends')
69 if not toga_backends:
70 raise RuntimeError("No toga backend could be loaded.")
71
72 backend_value = os.environ.get('TOGA_BACKEND')
73 if backend_value:
74 try:
75 factory = importlib.import_module(f'{backend_value}.factory')
76 except ModuleNotFoundError:
77 toga_backends_values = ', '.join([f'{backend.value!r}' for backend in toga_backends])
78 raise RuntimeError(
79 f"The backend specified by TOGA_BACKEND ({backend_value!r}) "
80 f"could not be loaded. It should be one of: {toga_backends_values}."
81 )
82 else:
83 if len(toga_backends) == 1:
84 backend = list(toga_backends)[0]
85 else:
86 # multiple backends are installed: choose the one that matches the host platform
87 matching_backends = [
88 backend
89 for backend in toga_backends
90 if backend.name == current_platform
91 ]
92 if len(matching_backends) == 0:
93 toga_backends_string = ', '.join([
94 f'{backend.value!r} ({backend.name})'
95 for backend in toga_backends
96 ])
97 raise RuntimeError(
98 f"Multiple Toga backends are installed ({toga_backends_string}), "
99 f"but none of them match your current platform ({current_platform!r})."
100 )
101 if len(matching_backends) > 1:
102 toga_backends_string = ', '.join([
103 f'{backend.value!r} ({backend.name})'
104 for backend in matching_backends
105 ])
106 raise RuntimeError(
107 f"Multiple candidiate toga backends found: ({toga_backends_string}). "
108 "Uninstall the backends you don't require, or use "
109 "TOGA_BACKEND to specify a backend."
110 )
111 backend = matching_backends[0]
112 factory = importlib.import_module(f'{backend.value}.factory')
113 return factory
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/core/src/toga/platform.py b/src/core/src/toga/platform.py
--- a/src/core/src/toga/platform.py
+++ b/src/core/src/toga/platform.py
@@ -67,7 +67,7 @@
toga_backends = entry_points(group='toga.backends')
if not toga_backends:
- raise RuntimeError("No toga backend could be loaded.")
+ raise RuntimeError("No Toga backend could be loaded.")
backend_value = os.environ.get('TOGA_BACKEND')
if backend_value:
@@ -80,8 +80,14 @@
f"could not be loaded. It should be one of: {toga_backends_values}."
)
else:
+ # As of Setuptools 65.5, entry points are returned duplicated if the
+ # package is installed editable. Use a set to ensure that each entry point
+ # is only returned once.
+ # See https://github.com/pypa/setuptools/issues/3649
+ toga_backends = sorted(set(toga_backends))
+
if len(toga_backends) == 1:
- backend = list(toga_backends)[0]
+ backend = toga_backends[0]
else:
# multiple backends are installed: choose the one that matches the host platform
matching_backends = [
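
The dedup itself is simple; a rough illustration with string stand-ins (real `EntryPoint` objects are hashable, which is what the patch relies on):

```python
# An editable install under setuptools >= 65.5 can report the same
# entry point twice, which made the single-backend check fail.
toga_backends = ["toga_winforms", "toga_winforms"]  # stand-ins for EntryPoint objects

toga_backends = sorted(set(toga_backends))
assert toga_backends == ["toga_winforms"]
```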
| {"golden_diff": "diff --git a/src/core/src/toga/platform.py b/src/core/src/toga/platform.py\n--- a/src/core/src/toga/platform.py\n+++ b/src/core/src/toga/platform.py\n@@ -67,7 +67,7 @@\n \n toga_backends = entry_points(group='toga.backends')\n if not toga_backends:\n- raise RuntimeError(\"No toga backend could be loaded.\")\n+ raise RuntimeError(\"No Toga backend could be loaded.\")\n \n backend_value = os.environ.get('TOGA_BACKEND')\n if backend_value:\n@@ -80,8 +80,14 @@\n f\"could not be loaded. It should be one of: {toga_backends_values}.\"\n )\n else:\n+ # As of Setuptools 65.5, entry points are returned duplicated if the\n+ # package is installed editable. Use a set to ensure that each entry point\n+ # is only returned once.\n+ # See https://github.com/pypa/setuptools/issues/3649\n+ toga_backends = sorted(set(toga_backends))\n+\n if len(toga_backends) == 1:\n- backend = list(toga_backends)[0]\n+ backend = toga_backends[0]\n else:\n # multiple backends are installed: choose the one that matches the host platform\n matching_backends = [\n", "issue": "False positive \"Multiple candidiate toga backends\"\n**Describe the bug**\r\nWhen using a development version of *toga*, a `RuntimeError` of multiple candidate toga backends is raised with the same backend. Stacktrace example:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\sagi\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\sagi\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\sagi\\PycharmProjects\\toga\\examples\\canvas\\canvas\\__main__.py\", line 4, in <module>\r\n main().main_loop()\r\n File \"C:\\Users\\sagi\\PycharmProjects\\toga\\examples\\canvas\\canvas\\app.py\", line 565, in main\r\n return ExampleCanvasApp('Canvas', 'org.beeware.widgets.canvas')\r\n File \"C:\\Users\\sagi\\PycharmProjects\\toga\\src\\core\\src\\toga\\app.py\", line 308, in __init__\r\n self.factory = get_platform_factory()\r\n File \"C:\\Users\\sagi\\PycharmProjects\\toga\\src\\core\\src\\toga\\platform.py\", line 106, in get_platform_factory\r\n raise RuntimeError(\r\nRuntimeError: Multiple candidiate toga backends found: ('toga_winforms' (windows), 'toga_winforms' (windows)). Uninstall the backends you don't require, or use TOGA_BACKEND to specify a backend.\r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Open a new virtual environment\r\n2. Go to src\\core\r\n3. Run `pip install -e .`\r\n4. Go to src\\winforms\r\n5. Run `pip install -e .`\r\n6. Go to example\\canvas (or any other example application)\r\n7. Run `python -m canvas`\r\n\r\n**Expected behavior**\r\n`RuntimeError` should not be raised in that case.\r\n\r\n**Environment:**\r\n - Operating System: Windows\r\n - Python version: 3.10.6\r\n - Software versions:\r\n - Briefcase: latest\r\n - Toga: latest\r\n\n", "before_files": [{"content": "import importlib\nimport os\nimport sys\nimport warnings\nfrom functools import lru_cache\n\ntry:\n # Usually, the pattern is \"import module; if it doesn't exist,\n # import the shim\". 
However, we need the 3.10 API for entry_points,\n # as the 3.8 didn't support the `groups` argument to entry_points.\n # Therefore, we try to import the compatibility shim first; and fall\n # back to the stdlib module if the shim isn't there.\n from importlib_metadata import entry_points\nexcept ImportError:\n from importlib.metadata import entry_points\n\n\n# Map python sys.platform with toga platforms names\n_TOGA_PLATFORMS = {\n 'android': 'android',\n 'darwin': 'macOS',\n 'ios': 'iOS',\n 'linux': 'linux',\n 'tvos': 'tvOS',\n 'watchos': 'watchOS',\n 'wearos': 'wearOS',\n 'emscripten': 'web',\n 'win32': 'windows',\n}\n\n\ntry:\n current_platform = os.environ['TOGA_PLATFORM']\nexcept KeyError:\n # Rely on `sys.getandroidapilevel`, which only exists on Android; see\n # https://github.com/beeware/Python-Android-support/issues/8\n if hasattr(sys, 'getandroidapilevel'):\n current_platform = 'android'\n else:\n current_platform = _TOGA_PLATFORMS.get(sys.platform)\n\n\n@lru_cache(maxsize=1)\ndef get_platform_factory(factory=None):\n \"\"\" This function figures out what the current host platform is and\n imports the adequate factory. The factory is the interface to all platform\n specific implementations.\n\n If the TOGA_BACKEND environment variable is set, the factory will be loaded\n from that module.\n\n Returns: The suitable factory for the current host platform.\n\n Raises:\n RuntimeError: If no supported host platform can be identified.\n \"\"\"\n\n ######################################################################\n # 2022-09: Backwards compatibility\n ######################################################################\n # factory no longer used\n if factory:\n warnings.warn(\"The factory argument is no longer used.\", DeprecationWarning)\n ######################################################################\n # End backwards compatibility.\n ######################################################################\n\n toga_backends = entry_points(group='toga.backends')\n if not toga_backends:\n raise RuntimeError(\"No toga backend could be loaded.\")\n\n backend_value = os.environ.get('TOGA_BACKEND')\n if backend_value:\n try:\n factory = importlib.import_module(f'{backend_value}.factory')\n except ModuleNotFoundError:\n toga_backends_values = ', '.join([f'{backend.value!r}' for backend in toga_backends])\n raise RuntimeError(\n f\"The backend specified by TOGA_BACKEND ({backend_value!r}) \"\n f\"could not be loaded. It should be one of: {toga_backends_values}.\"\n )\n else:\n if len(toga_backends) == 1:\n backend = list(toga_backends)[0]\n else:\n # multiple backends are installed: choose the one that matches the host platform\n matching_backends = [\n backend\n for backend in toga_backends\n if backend.name == current_platform\n ]\n if len(matching_backends) == 0:\n toga_backends_string = ', '.join([\n f'{backend.value!r} ({backend.name})'\n for backend in toga_backends\n ])\n raise RuntimeError(\n f\"Multiple Toga backends are installed ({toga_backends_string}), \"\n f\"but none of them match your current platform ({current_platform!r}).\"\n )\n if len(matching_backends) > 1:\n toga_backends_string = ', '.join([\n f'{backend.value!r} ({backend.name})'\n for backend in matching_backends\n ])\n raise RuntimeError(\n f\"Multiple candidiate toga backends found: ({toga_backends_string}). 
\"\n \"Uninstall the backends you don't require, or use \"\n \"TOGA_BACKEND to specify a backend.\"\n )\n backend = matching_backends[0]\n factory = importlib.import_module(f'{backend.value}.factory')\n return factory\n", "path": "src/core/src/toga/platform.py"}], "after_files": [{"content": "import importlib\nimport os\nimport sys\nimport warnings\nfrom functools import lru_cache\n\ntry:\n # Usually, the pattern is \"import module; if it doesn't exist,\n # import the shim\". However, we need the 3.10 API for entry_points,\n # as the 3.8 didn't support the `groups` argument to entry_points.\n # Therefore, we try to import the compatibility shim first; and fall\n # back to the stdlib module if the shim isn't there.\n from importlib_metadata import entry_points\nexcept ImportError:\n from importlib.metadata import entry_points\n\n\n# Map python sys.platform with toga platforms names\n_TOGA_PLATFORMS = {\n 'android': 'android',\n 'darwin': 'macOS',\n 'ios': 'iOS',\n 'linux': 'linux',\n 'tvos': 'tvOS',\n 'watchos': 'watchOS',\n 'wearos': 'wearOS',\n 'emscripten': 'web',\n 'win32': 'windows',\n}\n\n\ntry:\n current_platform = os.environ['TOGA_PLATFORM']\nexcept KeyError:\n # Rely on `sys.getandroidapilevel`, which only exists on Android; see\n # https://github.com/beeware/Python-Android-support/issues/8\n if hasattr(sys, 'getandroidapilevel'):\n current_platform = 'android'\n else:\n current_platform = _TOGA_PLATFORMS.get(sys.platform)\n\n\n@lru_cache(maxsize=1)\ndef get_platform_factory(factory=None):\n \"\"\" This function figures out what the current host platform is and\n imports the adequate factory. The factory is the interface to all platform\n specific implementations.\n\n If the TOGA_BACKEND environment variable is set, the factory will be loaded\n from that module.\n\n Returns: The suitable factory for the current host platform.\n\n Raises:\n RuntimeError: If no supported host platform can be identified.\n \"\"\"\n\n ######################################################################\n # 2022-09: Backwards compatibility\n ######################################################################\n # factory no longer used\n if factory:\n warnings.warn(\"The factory argument is no longer used.\", DeprecationWarning)\n ######################################################################\n # End backwards compatibility.\n ######################################################################\n\n toga_backends = entry_points(group='toga.backends')\n if not toga_backends:\n raise RuntimeError(\"No Toga backend could be loaded.\")\n\n backend_value = os.environ.get('TOGA_BACKEND')\n if backend_value:\n try:\n factory = importlib.import_module(f'{backend_value}.factory')\n except ModuleNotFoundError:\n toga_backends_values = ', '.join([f'{backend.value!r}' for backend in toga_backends])\n raise RuntimeError(\n f\"The backend specified by TOGA_BACKEND ({backend_value!r}) \"\n f\"could not be loaded. It should be one of: {toga_backends_values}.\"\n )\n else:\n # As of Setuptools 65.5, entry points are returned duplicated if the\n # package is installed editable. 
Use a set to ensure that each entry point\n # is only returned once.\n # See https://github.com/pypa/setuptools/issues/3649\n toga_backends = sorted(set(toga_backends))\n\n if len(toga_backends) == 1:\n backend = toga_backends[0]\n else:\n # multiple backends are installed: choose the one that matches the host platform\n matching_backends = [\n backend\n for backend in toga_backends\n if backend.name == current_platform\n ]\n if len(matching_backends) == 0:\n toga_backends_string = ', '.join([\n f'{backend.value!r} ({backend.name})'\n for backend in toga_backends\n ])\n raise RuntimeError(\n f\"Multiple Toga backends are installed ({toga_backends_string}), \"\n f\"but none of them match your current platform ({current_platform!r}).\"\n )\n if len(matching_backends) > 1:\n toga_backends_string = ', '.join([\n f'{backend.value!r} ({backend.name})'\n for backend in matching_backends\n ])\n raise RuntimeError(\n f\"Multiple candidiate toga backends found: ({toga_backends_string}). \"\n \"Uninstall the backends you don't require, or use \"\n \"TOGA_BACKEND to specify a backend.\"\n )\n backend = matching_backends[0]\n factory = importlib.import_module(f'{backend.value}.factory')\n return factory\n", "path": "src/core/src/toga/platform.py"}]} | 1,929 | 302 |
gh_patches_debug_9053 | rasdani/github-patches | git_diff | encode__httpx-234 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in the 0.7.0 packaging
I've tried to upgrade to 0.7.0 and it exploded with a
```
$ poetry update
Updating dependencies
Resolving dependencies... (1.0s)
Package operations: 0 installs, 1 update, 0 removals
- Updating httpx (0.6.8 -> 0.7.0)
[EnvCommandError]
Command ['/Users/pablo/Library/Caches/pypoetry/virtualenvs/drop-eventsng-1aNj3rOl-py3.7/bin/python', '-m', 'pip', 'install', '--no-deps', '-U', 'httpx==0.7.0'] errored with the following return code 1, and output:
Collecting httpx==0.7.0
Using cached https://files.pythonhosted.org/packages/12/b3/fdd6e528a3385e2149ad42cc4e9b54e326d532e3e79a86e7cfdaea45723e/httpx-0.7.0.tar.gz
ERROR: Command errored out with exit status 1:
command: /Users/pablo/Library/Caches/pypoetry/virtualenvs/drop-eventsng-1aNj3rOl-py3.7/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py'"'"'; __file__='"'"'/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base pip-egg-info
cwd: /private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/
Complete output (7 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py", line 45, in <module>
long_description=get_long_description(),
File "/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py", line 26, in get_long_description
with open("CHANGELOG.md", encoding="utf8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'CHANGELOG.md'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import re
5 from pathlib import Path
6
7 from setuptools import setup
8
9
10 def get_version(package):
11 """
12 Return package version as listed in `__version__` in `init.py`.
13 """
14 version = Path(package, "__version__.py").read_text()
15 return re.search("__version__ = ['\"]([^'\"]+)['\"]", version).group(1)
16
17
18 def get_long_description():
19 """
20 Return the README.
21 """
22 long_description = ""
23 with open("README.md", encoding="utf8") as f:
24 long_description += f.read()
25 long_description += "\n\n"
26 with open("CHANGELOG.md", encoding="utf8") as f:
27 long_description += f.read()
28 return long_description
29
30
31 def get_packages(package):
32 """
33 Return root package and all sub-packages.
34 """
35 return [str(path.parent) for path in Path(package).glob("**/__init__.py")]
36
37
38 setup(
39 name="httpx",
40 python_requires=">=3.6",
41 version=get_version("httpx"),
42 url="https://github.com/encode/httpx",
43 license="BSD",
44 description="The next generation HTTP client.",
45 long_description=get_long_description(),
46 long_description_content_type="text/markdown",
47 author="Tom Christie",
48 author_email="[email protected]",
49 package_data={"httpx": ["py.typed"]},
50 packages=get_packages("httpx"),
51 install_requires=[
52 "certifi",
53 "chardet==3.*",
54 "h11==0.8.*",
55 "h2==3.*",
56 "hstspreload",
57 "idna==2.*",
58 "rfc3986==1.*",
59 ],
60 classifiers=[
61 "Development Status :: 3 - Alpha",
62 "Environment :: Web Environment",
63 "Intended Audience :: Developers",
64 "License :: OSI Approved :: BSD License",
65 "Operating System :: OS Independent",
66 "Topic :: Internet :: WWW/HTTP",
67 "Programming Language :: Python :: 3",
68 "Programming Language :: Python :: 3.6",
69 "Programming Language :: Python :: 3.7",
70 "Programming Language :: Python :: 3.8",
71 ],
72 )
73
```
Path: `httpx/__version__.py`
Content:
```
1 __title__ = "httpx"
2 __description__ = "A next generation HTTP client, for Python 3."
3 __version__ = "0.7.0"
4
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/httpx/__version__.py b/httpx/__version__.py
--- a/httpx/__version__.py
+++ b/httpx/__version__.py
@@ -1,3 +1,3 @@
__title__ = "httpx"
__description__ = "A next generation HTTP client, for Python 3."
-__version__ = "0.7.0"
+__version__ = "0.7.1"
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -48,6 +48,7 @@
author_email="[email protected]",
package_data={"httpx": ["py.typed"]},
packages=get_packages("httpx"),
+ include_package_data=True,
install_requires=[
"certifi",
"chardet==3.*",
| {"golden_diff": "diff --git a/httpx/__version__.py b/httpx/__version__.py\n--- a/httpx/__version__.py\n+++ b/httpx/__version__.py\n@@ -1,3 +1,3 @@\n __title__ = \"httpx\"\n __description__ = \"A next generation HTTP client, for Python 3.\"\n-__version__ = \"0.7.0\"\n+__version__ = \"0.7.1\"\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -48,6 +48,7 @@\n author_email=\"[email protected]\",\n package_data={\"httpx\": [\"py.typed\"]},\n packages=get_packages(\"httpx\"),\n+ include_package_data=True,\n install_requires=[\n \"certifi\",\n \"chardet==3.*\",\n", "issue": "Bug in the 0.7.0 packaging\nI've tried to upgrade to 0.7.0 and it exploded with a\r\n\r\n```\r\n$ poetry update\r\nUpdating dependencies\r\nResolving dependencies... (1.0s)\r\n\r\n\r\nPackage operations: 0 installs, 1 update, 0 removals\r\n\r\n - Updating httpx (0.6.8 -> 0.7.0)\r\n\r\n[EnvCommandError]\r\nCommand ['/Users/pablo/Library/Caches/pypoetry/virtualenvs/drop-eventsng-1aNj3rOl-py3.7/bin/python', '-m', 'pip', 'install', '--no-deps', '-U', 'httpx==0.7.0'] errored with the following return code 1, and output:\r\nCollecting httpx==0.7.0\r\n Using cached https://files.pythonhosted.org/packages/12/b3/fdd6e528a3385e2149ad42cc4e9b54e326d532e3e79a86e7cfdaea45723e/httpx-0.7.0.tar.gz\r\n ERROR: Command errored out with exit status 1:\r\n command: /Users/pablo/Library/Caches/pypoetry/virtualenvs/drop-eventsng-1aNj3rOl-py3.7/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py'\"'\"'; __file__='\"'\"'/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' egg_info --egg-base pip-egg-info\r\n cwd: /private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/\r\n Complete output (7 lines):\r\n Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py\", line 45, in <module>\r\n long_description=get_long_description(),\r\n File \"/private/var/folders/x4/txc7_0pn6zlfb30cs_0sh5mm0000gn/T/pip-install-jq6aut9d/httpx/setup.py\", line 26, in get_long_description\r\n with open(\"CHANGELOG.md\", encoding=\"utf8\") as f:\r\n FileNotFoundError: [Errno 2] No such file or directory: 'CHANGELOG.md'\r\n ----------------------------------------\r\nERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport re\nfrom pathlib import Path\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n version = Path(package, \"__version__.py\").read_text()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", version).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n long_description = \"\"\n with open(\"README.md\", encoding=\"utf8\") as f:\n long_description += f.read()\n long_description += \"\\n\\n\"\n with open(\"CHANGELOG.md\", encoding=\"utf8\") as f:\n long_description += f.read()\n return long_description\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all 
sub-packages.\n \"\"\"\n return [str(path.parent) for path in Path(package).glob(\"**/__init__.py\")]\n\n\nsetup(\n name=\"httpx\",\n python_requires=\">=3.6\",\n version=get_version(\"httpx\"),\n url=\"https://github.com/encode/httpx\",\n license=\"BSD\",\n description=\"The next generation HTTP client.\",\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n package_data={\"httpx\": [\"py.typed\"]},\n packages=get_packages(\"httpx\"),\n install_requires=[\n \"certifi\",\n \"chardet==3.*\",\n \"h11==0.8.*\",\n \"h2==3.*\",\n \"hstspreload\",\n \"idna==2.*\",\n \"rfc3986==1.*\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n)\n", "path": "setup.py"}, {"content": "__title__ = \"httpx\"\n__description__ = \"A next generation HTTP client, for Python 3.\"\n__version__ = \"0.7.0\"\n", "path": "httpx/__version__.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport re\nfrom pathlib import Path\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n version = Path(package, \"__version__.py\").read_text()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", version).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n long_description = \"\"\n with open(\"README.md\", encoding=\"utf8\") as f:\n long_description += f.read()\n long_description += \"\\n\\n\"\n with open(\"CHANGELOG.md\", encoding=\"utf8\") as f:\n long_description += f.read()\n return long_description\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [str(path.parent) for path in Path(package).glob(\"**/__init__.py\")]\n\n\nsetup(\n name=\"httpx\",\n python_requires=\">=3.6\",\n version=get_version(\"httpx\"),\n url=\"https://github.com/encode/httpx\",\n license=\"BSD\",\n description=\"The next generation HTTP client.\",\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n package_data={\"httpx\": [\"py.typed\"]},\n packages=get_packages(\"httpx\"),\n include_package_data=True,\n install_requires=[\n \"certifi\",\n \"chardet==3.*\",\n \"h11==0.8.*\",\n \"h2==3.*\",\n \"hstspreload\",\n \"idna==2.*\",\n \"rfc3986==1.*\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n)\n", "path": "setup.py"}, {"content": "__title__ = \"httpx\"\n__description__ = \"A next generation HTTP client, for Python 3.\"\n__version__ = \"0.7.1\"\n", "path": "httpx/__version__.py"}]} | 1,688 | 183 |
gh_patches_debug_19085 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-811 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use <time> element for dates
This way screen readers (and other ATs) know that it should be read as a date.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/contrib/templatetags/contrib_tags.py`
Content:
```
1 from django import template
2 from django.template.loader import render_to_string
3
4 register = template.Library()
5
6
7 @register.assignment_tag
8 def include_template_string(template, **kwargs):
9 rendered_template = render_to_string(template, kwargs)
10 return str(rendered_template)
11
12
13 @register.assignment_tag
14 def combined_url_parameter(request_query_dict, **kwargs):
15 combined_query_dict = request_query_dict.copy()
16 for key in kwargs:
17 combined_query_dict.setlist(key, [kwargs[key]])
18 encoded_parameter = '?' + combined_query_dict.urlencode()
19 return encoded_parameter
20
21
22 @register.assignment_tag
23 def filter_has_perm(perm, user, objects):
24 """Filter a list of objects based on user permissions."""
25 if not hasattr(user, 'has_perm'):
26 # If the swapped user model does not support permissions, all objects
27 # will be returned. This is taken from rules.templatetags.has_perm.
28 return objects
29 else:
30 return [obj for obj in objects if user.has_perm(perm, obj)]
31
32
33 @register.filter
34 def percentage(value, max_value):
35 return round(value / max_value * 100)
36
37
38 @register.assignment_tag
39 def project_tile_image(project):
40 return project.tile_image or project.image or None
41
42
43 @register.assignment_tag
44 def project_tile_image_copyright(project):
45 if project.tile_image:
46 return project.tile_image_copyright
47 elif project.image:
48 return project.image_copyright
49 else:
50 return None
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/contrib/templatetags/contrib_tags.py b/meinberlin/apps/contrib/templatetags/contrib_tags.py
--- a/meinberlin/apps/contrib/templatetags/contrib_tags.py
+++ b/meinberlin/apps/contrib/templatetags/contrib_tags.py
@@ -1,5 +1,8 @@
from django import template
+from django.forms.utils import flatatt
+from django.template import defaultfilters
from django.template.loader import render_to_string
+from django.utils.safestring import mark_safe
register = template.Library()
@@ -48,3 +51,18 @@
return project.image_copyright
else:
return None
+
+
[email protected]_tag()
+def html_date(value, displayfmt=None, datetimefmt='c', **kwargs):
+ """Format a date and wrap it in a html <time> element.
+
+ Additional html attributes may be provided as kwargs (e.g. 'class').
+ """
+ displaydate = defaultfilters.date(value, displayfmt)
+ datetime = defaultfilters.date(value, datetimefmt)
+ attribs = flatatt(kwargs)
+ result = '<time %s datetime="%s">%s</time>' % (attribs,
+ datetime,
+ displaydate)
+ return mark_safe(result)
| {"golden_diff": "diff --git a/meinberlin/apps/contrib/templatetags/contrib_tags.py b/meinberlin/apps/contrib/templatetags/contrib_tags.py\n--- a/meinberlin/apps/contrib/templatetags/contrib_tags.py\n+++ b/meinberlin/apps/contrib/templatetags/contrib_tags.py\n@@ -1,5 +1,8 @@\n from django import template\n+from django.forms.utils import flatatt\n+from django.template import defaultfilters\n from django.template.loader import render_to_string\n+from django.utils.safestring import mark_safe\n \n register = template.Library()\n \n@@ -48,3 +51,18 @@\n return project.image_copyright\n else:\n return None\n+\n+\[email protected]_tag()\n+def html_date(value, displayfmt=None, datetimefmt='c', **kwargs):\n+ \"\"\"Format a date and wrap it in a html <time> element.\n+\n+ Additional html attributes may be provided as kwargs (e.g. 'class').\n+ \"\"\"\n+ displaydate = defaultfilters.date(value, displayfmt)\n+ datetime = defaultfilters.date(value, datetimefmt)\n+ attribs = flatatt(kwargs)\n+ result = '<time %s datetime=\"%s\">%s</time>' % (attribs,\n+ datetime,\n+ displaydate)\n+ return mark_safe(result)\n", "issue": "Use <time> element for dates\nThis way screen readers (and other ATs) know that it should be read as a date.\n", "before_files": [{"content": "from django import template\nfrom django.template.loader import render_to_string\n\nregister = template.Library()\n\n\[email protected]_tag\ndef include_template_string(template, **kwargs):\n rendered_template = render_to_string(template, kwargs)\n return str(rendered_template)\n\n\[email protected]_tag\ndef combined_url_parameter(request_query_dict, **kwargs):\n combined_query_dict = request_query_dict.copy()\n for key in kwargs:\n combined_query_dict.setlist(key, [kwargs[key]])\n encoded_parameter = '?' + combined_query_dict.urlencode()\n return encoded_parameter\n\n\[email protected]_tag\ndef filter_has_perm(perm, user, objects):\n \"\"\"Filter a list of objects based on user permissions.\"\"\"\n if not hasattr(user, 'has_perm'):\n # If the swapped user model does not support permissions, all objects\n # will be returned. This is taken from rules.templatetags.has_perm.\n return objects\n else:\n return [obj for obj in objects if user.has_perm(perm, obj)]\n\n\[email protected]\ndef percentage(value, max_value):\n return round(value / max_value * 100)\n\n\[email protected]_tag\ndef project_tile_image(project):\n return project.tile_image or project.image or None\n\n\[email protected]_tag\ndef project_tile_image_copyright(project):\n if project.tile_image:\n return project.tile_image_copyright\n elif project.image:\n return project.image_copyright\n else:\n return None\n", "path": "meinberlin/apps/contrib/templatetags/contrib_tags.py"}], "after_files": [{"content": "from django import template\nfrom django.forms.utils import flatatt\nfrom django.template import defaultfilters\nfrom django.template.loader import render_to_string\nfrom django.utils.safestring import mark_safe\n\nregister = template.Library()\n\n\[email protected]_tag\ndef include_template_string(template, **kwargs):\n rendered_template = render_to_string(template, kwargs)\n return str(rendered_template)\n\n\[email protected]_tag\ndef combined_url_parameter(request_query_dict, **kwargs):\n combined_query_dict = request_query_dict.copy()\n for key in kwargs:\n combined_query_dict.setlist(key, [kwargs[key]])\n encoded_parameter = '?' 
+ combined_query_dict.urlencode()\n return encoded_parameter\n\n\[email protected]_tag\ndef filter_has_perm(perm, user, objects):\n \"\"\"Filter a list of objects based on user permissions.\"\"\"\n if not hasattr(user, 'has_perm'):\n # If the swapped user model does not support permissions, all objects\n # will be returned. This is taken from rules.templatetags.has_perm.\n return objects\n else:\n return [obj for obj in objects if user.has_perm(perm, obj)]\n\n\[email protected]\ndef percentage(value, max_value):\n return round(value / max_value * 100)\n\n\[email protected]_tag\ndef project_tile_image(project):\n return project.tile_image or project.image or None\n\n\[email protected]_tag\ndef project_tile_image_copyright(project):\n if project.tile_image:\n return project.tile_image_copyright\n elif project.image:\n return project.image_copyright\n else:\n return None\n\n\[email protected]_tag()\ndef html_date(value, displayfmt=None, datetimefmt='c', **kwargs):\n \"\"\"Format a date and wrap it in a html <time> element.\n\n Additional html attributes may be provided as kwargs (e.g. 'class').\n \"\"\"\n displaydate = defaultfilters.date(value, displayfmt)\n datetime = defaultfilters.date(value, datetimefmt)\n attribs = flatatt(kwargs)\n result = '<time %s datetime=\"%s\">%s</time>' % (attribs,\n datetime,\n displaydate)\n return mark_safe(result)\n", "path": "meinberlin/apps/contrib/templatetags/contrib_tags.py"}]} | 704 | 296 |
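A hedged usage sketch of the `html_date` tag the patch introduces, exercised from a template string; it assumes a configured Django project with the patched `contrib_tags` library on the template path, and the rendered output shape is illustrative rather than exact:

```python
# Hedged sketch: render the new html_date tag from a template string.
import datetime

from django.template import Context, Template

template = Template(
    "{% load contrib_tags %}"
    "{% html_date value 'DATE_FORMAT' class='date' %}"
)
html = template.render(Context({"value": datetime.date(2017, 6, 1)}))
# Expected shape: <time class="date" datetime="2017-06-01">June 1, 2017</time>
print(html)
```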
gh_patches_debug_356 | rasdani/github-patches | git_diff | zigpy__zha-device-handlers-4 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing import for types breaking LocalDataCluster
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zhaquirks/__init__.py`
Content:
```
1 import importlib
2 import pkgutil
3 from zigpy.quirks import CustomCluster
4 from zigpy.util import ListenableMixin
5
6 UNKNOWN = 'Unknown'
7
8
9 class Bus(ListenableMixin):
10
11 def __init__(self, *args, **kwargs):
12 super().__init__(*args, **kwargs)
13 self._listeners = {}
14
15
16 class LocalDataCluster(CustomCluster):
17
18 def __init__(self, *args, **kwargs):
19 super().__init__(*args, **kwargs)
20
21 async def read_attributes_raw(self, attributes, manufacturer=None):
22 attributes = [types.uint16_t(a) for a in attributes]
23 v = [self._attr_cache.get(attr) for attr in attributes]
24 return v
25
26 def _update_attribute(self, attrid, value):
27 super()._update_attribute(attrid, value)
28
29
30 class EventableCluster(CustomCluster):
31
32 def __init__(self, *args, **kwargs):
33 super().__init__(*args, **kwargs)
34
35 def handle_cluster_request(self, tsn, command_id, args):
36 super().handle_cluster_request(tsn, command_id, args)
37 if self.server_commands is not None and\
38 self.server_commands.get(command_id) is not None:
39 self.listener_event(
40 'zha_send_event',
41 self,
42 self.server_commands.get(command_id)[0],
43 args
44 )
45
46 def _update_attribute(self, attrid, value):
47 super()._update_attribute(attrid, value)
48 self.listener_event(
49 'zha_send_event',
50 self,
51 'attribute_updated',
52 {
53 'attribute_id': attrid,
54 'attribute_name': self.attributes.get(attrid, [UNKNOWN])[0],
55 'value': value
56 }
57 )
58
59 name = __name__
60 path = __path__
61 for importer, modname, ispkg in pkgutil.walk_packages(
62 path=path,
63 prefix=name +'.'
64 ):
65 importlib.import_module(modname)
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zhaquirks/__init__.py b/zhaquirks/__init__.py
--- a/zhaquirks/__init__.py
+++ b/zhaquirks/__init__.py
@@ -2,6 +2,7 @@
import pkgutil
from zigpy.quirks import CustomCluster
from zigpy.util import ListenableMixin
+import zigpy.types as types
UNKNOWN = 'Unknown'
| {"golden_diff": "diff --git a/zhaquirks/__init__.py b/zhaquirks/__init__.py\n--- a/zhaquirks/__init__.py\n+++ b/zhaquirks/__init__.py\n@@ -2,6 +2,7 @@\n import pkgutil\n from zigpy.quirks import CustomCluster\n from zigpy.util import ListenableMixin\n+import zigpy.types as types\n \n UNKNOWN = 'Unknown'\n", "issue": "Missing import for types breaking LocalDataCluster\n\n", "before_files": [{"content": "import importlib\nimport pkgutil\nfrom zigpy.quirks import CustomCluster\nfrom zigpy.util import ListenableMixin\n\nUNKNOWN = 'Unknown'\n\n\nclass Bus(ListenableMixin):\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._listeners = {}\n\n\nclass LocalDataCluster(CustomCluster):\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n async def read_attributes_raw(self, attributes, manufacturer=None):\n attributes = [types.uint16_t(a) for a in attributes]\n v = [self._attr_cache.get(attr) for attr in attributes]\n return v\n\n def _update_attribute(self, attrid, value):\n super()._update_attribute(attrid, value)\n\n\nclass EventableCluster(CustomCluster):\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n def handle_cluster_request(self, tsn, command_id, args):\n super().handle_cluster_request(tsn, command_id, args)\n if self.server_commands is not None and\\\n self.server_commands.get(command_id) is not None:\n self.listener_event(\n 'zha_send_event',\n self,\n self.server_commands.get(command_id)[0],\n args\n )\n\n def _update_attribute(self, attrid, value):\n super()._update_attribute(attrid, value)\n self.listener_event(\n 'zha_send_event',\n self,\n 'attribute_updated',\n {\n 'attribute_id': attrid,\n 'attribute_name': self.attributes.get(attrid, [UNKNOWN])[0],\n 'value': value\n }\n )\n\nname = __name__\npath = __path__\nfor importer, modname, ispkg in pkgutil.walk_packages(\n path=path,\n prefix=name +'.'\n ):\n importlib.import_module(modname)\n", "path": "zhaquirks/__init__.py"}], "after_files": [{"content": "import importlib\nimport pkgutil\nfrom zigpy.quirks import CustomCluster\nfrom zigpy.util import ListenableMixin\nimport zigpy.types as types\n\nUNKNOWN = 'Unknown'\n\n\nclass Bus(ListenableMixin):\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._listeners = {}\n\n\nclass LocalDataCluster(CustomCluster):\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n async def read_attributes_raw(self, attributes, manufacturer=None):\n attributes = [types.uint16_t(a) for a in attributes]\n v = [self._attr_cache.get(attr) for attr in attributes]\n return v\n\n def _update_attribute(self, attrid, value):\n super()._update_attribute(attrid, value)\n\n\nclass EventableCluster(CustomCluster):\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n def handle_cluster_request(self, tsn, command_id, args):\n super().handle_cluster_request(tsn, command_id, args)\n if self.server_commands is not None and\\\n self.server_commands.get(command_id) is not None:\n self.listener_event(\n 'zha_send_event',\n self,\n self.server_commands.get(command_id)[0],\n args\n )\n\n def _update_attribute(self, attrid, value):\n super()._update_attribute(attrid, value)\n self.listener_event(\n 'zha_send_event',\n self,\n 'attribute_updated',\n {\n 'attribute_id': attrid,\n 'attribute_name': self.attributes.get(attrid, [UNKNOWN])[0],\n 'value': value\n }\n )\n\nname = __name__\npath = __path__\nfor importer, modname, ispkg in 
pkgutil.walk_packages(\n path=path,\n prefix=name +'.'\n ):\n importlib.import_module(modname)\n", "path": "zhaquirks/__init__.py"}]} | 812 | 90 |
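The failure mode here is a plain `NameError` the first time `read_attributes_raw` runs, because `types` was never bound. A hedged sketch of the offending expression in isolation, assuming `zigpy` is installed:

```python
# Hedged sketch: the comprehension from read_attributes_raw, with the import
# the patch adds. Without it, `types` is an unbound name at call time.
import zigpy.types as types

attribute_ids = [0x0000, 0x0004, 0x0005]
converted = [types.uint16_t(a) for a in attribute_ids]
print(converted)  # uint16_t values ready for the attribute-cache lookup
```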
gh_patches_debug_597 | rasdani/github-patches | git_diff | pex-tool__pex-1610 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.66
On the docket:
+ [x] Support specifying foreign platforms in full detail. #1597
+ [x] Respect PEX_ROOT in PEXEnvironment.mount. #1599
+ [x] Be able to see what .pex file is run from the list of system processes #1604
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.65"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.65"
+__version__ = "2.1.66"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.65\"\n+__version__ = \"2.1.66\"\n", "issue": "Release 2.1.66\nOn the docket:\r\n+ [x] Support specifying foreign platforms in full detail. #1597\r\n+ [x] Respect PEX_ROOT in PEXEnvironment.mount. #1599 \r\n+ [x] Be able to see what .pex file is run from the list of system processes #1604 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.65\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.66\"\n", "path": "pex/version.py"}]} | 388 | 96 |
gh_patches_debug_12361 | rasdani/github-patches | git_diff | carpentries__amy-2333 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Prepare AMY staging instance for actual use
New test AMY server is running, but it lacks some features from the other server.
- [x] Run fixtures (should be accompanied by #2239)
- [x] Scaffold non-admin users for AMY database
- [ ] Add default admin user
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `amy/workshops/management/commands/create_superuser.py`
Content:
```
1 from django.core.management.base import BaseCommand, CommandError
2
3 from communityroles.models import CommunityRole, CommunityRoleConfig
4 from workshops.models import Person
5
6
7 class Command(BaseCommand):
8 args = "no arguments"
9 help = 'Create a superuser called "admin" with password "admin".'
10
11 def handle(self, *args, **options):
12 try:
13 admin = Person.objects.create_superuser(
14 username="admin",
15 personal="admin",
16 family="admin",
17 email="[email protected]",
18 password="admin",
19 )
20 print("Created admin user")
21
22 role_config = CommunityRoleConfig.objects.get(name="instructor")
23 CommunityRole.objects.create(
24 config=role_config,
25 person=admin,
26 )
27 print("Assigned Instructor community role to admin user")
28
29 except Exception as e:
30 raise CommandError("Failed to create admin: {0}".format(str(e)))
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/amy/workshops/management/commands/create_superuser.py b/amy/workshops/management/commands/create_superuser.py
--- a/amy/workshops/management/commands/create_superuser.py
+++ b/amy/workshops/management/commands/create_superuser.py
@@ -9,9 +9,15 @@
help = 'Create a superuser called "admin" with password "admin".'
def handle(self, *args, **options):
+ username = "admin"
+
+ if Person.objects.filter(username=username).exists():
+ print("Admin user exists, quitting.")
+ return
+
try:
admin = Person.objects.create_superuser(
- username="admin",
+ username=username,
personal="admin",
family="admin",
email="[email protected]",
| {"golden_diff": "diff --git a/amy/workshops/management/commands/create_superuser.py b/amy/workshops/management/commands/create_superuser.py\n--- a/amy/workshops/management/commands/create_superuser.py\n+++ b/amy/workshops/management/commands/create_superuser.py\n@@ -9,9 +9,15 @@\n help = 'Create a superuser called \"admin\" with password \"admin\".'\n \n def handle(self, *args, **options):\n+ username = \"admin\"\n+\n+ if Person.objects.filter(username=username).exists():\n+ print(\"Admin user exists, quitting.\")\n+ return\n+\n try:\n admin = Person.objects.create_superuser(\n- username=\"admin\",\n+ username=username,\n personal=\"admin\",\n family=\"admin\",\n email=\"[email protected]\",\n", "issue": "Prepare AMY staging instance for actual use\nNew test AMY server is running, but it lacks some features from the other server.\r\n\r\n- [x] Run fixtures (should be accompanied by #2239)\r\n- [x] Scaffold non-admin users for AMY database\r\n- [ ] Add default admin user\n", "before_files": [{"content": "from django.core.management.base import BaseCommand, CommandError\n\nfrom communityroles.models import CommunityRole, CommunityRoleConfig\nfrom workshops.models import Person\n\n\nclass Command(BaseCommand):\n args = \"no arguments\"\n help = 'Create a superuser called \"admin\" with password \"admin\".'\n\n def handle(self, *args, **options):\n try:\n admin = Person.objects.create_superuser(\n username=\"admin\",\n personal=\"admin\",\n family=\"admin\",\n email=\"[email protected]\",\n password=\"admin\",\n )\n print(\"Created admin user\")\n\n role_config = CommunityRoleConfig.objects.get(name=\"instructor\")\n CommunityRole.objects.create(\n config=role_config,\n person=admin,\n )\n print(\"Assigned Instructor community role to admin user\")\n\n except Exception as e:\n raise CommandError(\"Failed to create admin: {0}\".format(str(e)))\n", "path": "amy/workshops/management/commands/create_superuser.py"}], "after_files": [{"content": "from django.core.management.base import BaseCommand, CommandError\n\nfrom communityroles.models import CommunityRole, CommunityRoleConfig\nfrom workshops.models import Person\n\n\nclass Command(BaseCommand):\n args = \"no arguments\"\n help = 'Create a superuser called \"admin\" with password \"admin\".'\n\n def handle(self, *args, **options):\n username = \"admin\"\n\n if Person.objects.filter(username=username).exists():\n print(\"Admin user exists, quitting.\")\n return\n\n try:\n admin = Person.objects.create_superuser(\n username=username,\n personal=\"admin\",\n family=\"admin\",\n email=\"[email protected]\",\n password=\"admin\",\n )\n print(\"Created admin user\")\n\n role_config = CommunityRoleConfig.objects.get(name=\"instructor\")\n CommunityRole.objects.create(\n config=role_config,\n person=admin,\n )\n print(\"Assigned Instructor community role to admin user\")\n\n except Exception as e:\n raise CommandError(\"Failed to create admin: {0}\".format(str(e)))\n", "path": "amy/workshops/management/commands/create_superuser.py"}]} | 572 | 170 |
gh_patches_debug_4328 | rasdani/github-patches | git_diff | pytorch__TensorRT-2311 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Upstream Dynamo Backend to Torch
- Add a hook in the Torch repo to secure the namespace `"tensorrt"` and have it point to `"torch_tensorrt"`
- Add necessary imports and skipped tests
- Raise a PR in Torch to add this functionality
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `py/torch_tensorrt/dynamo/backend/backends.py`
Content:
```
1 from __future__ import annotations
2
3 import logging
4 from functools import partial
5 from typing import Any, Callable, Sequence
6
7 import torch
8 import torch._dynamo as td
9 from torch._functorch.aot_autograd import aot_module_simplified, make_boxed_compiler
10 from torch_tensorrt.dynamo import CompilationSettings
11 from torch_tensorrt.dynamo.compile import compile_module
12 from torch_tensorrt.dynamo.lowering._decompositions import get_decompositions
13 from torch_tensorrt.dynamo.lowering._pre_aot_lowering import pre_aot_substitutions
14 from torch_tensorrt.dynamo.utils import parse_dynamo_kwargs
15
16 logger = logging.getLogger(__name__)
17
18
19 @td.register_backend(name="torch_tensorrt") # type: ignore[misc]
20 def torch_tensorrt_backend(
21 gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any
22 ) -> torch.nn.Module:
23 # Set log level at the top of compilation (torch_tensorrt.dynamo)
24 if (
25 (
26 "options" in kwargs
27 and "debug" in kwargs["options"]
28 and kwargs["options"]["debug"]
29 )
30 or ("debug" in kwargs and kwargs["debug"])
31 ) and logger.parent:
32 logger.parent.setLevel(logging.DEBUG)
33
34 DEFAULT_BACKEND = aot_torch_tensorrt_aten_backend
35
36 compiled_mod: torch.nn.Module = DEFAULT_BACKEND(gm, sample_inputs, **kwargs)
37 return compiled_mod
38
39
40 @td.register_backend(name="aot_torch_tensorrt_aten") # type: ignore[misc]
41 def aot_torch_tensorrt_aten_backend(
42 gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any
43 ) -> torch.nn.Module:
44 settings = parse_dynamo_kwargs(kwargs)
45
46 custom_backend = partial(
47 _pretraced_backend,
48 settings=settings,
49 )
50
51 # Perform Pre-AOT Lowering for Module-Level Replacement
52 gm = pre_aot_substitutions(gm)
53
54 # Invoke AOTAutograd to translate operators to aten
55 return aot_module_simplified(
56 gm,
57 sample_inputs,
58 fw_compiler=make_boxed_compiler(custom_backend),
59 decompositions=get_decompositions(settings.enable_experimental_decompositions),
60 )
61
62
63 def _pretraced_backend(
64 gm: torch.fx.GraphModule,
65 sample_inputs: Sequence[torch.Tensor],
66 settings: CompilationSettings = CompilationSettings(),
67 ) -> torch.fx.GraphModule | Callable[..., Any]:
68 """Helper function to manage translation of traced FX module to TRT engines
69
70 Args:
71 module: FX GraphModule to convert
72 inputs: Inputs to the module
73 settings: Compilation settings
74 Returns:
75 Compiled FX GraphModule
76 """
77 try:
78 logger.debug("Post-AOT Autograd graph:\n" + str(gm.graph))
79
80 trt_compiled = compile_module(
81 gm,
82 sample_inputs,
83 settings=settings,
84 )
85 return trt_compiled
86 except AssertionError:
87 if not settings.pass_through_build_failures:
88 logger.warning(
89 "TRT conversion failed on the subgraph. See trace above. "
90 + "Returning GraphModule forward instead.",
91 exc_info=True,
92 )
93 return gm.forward
94 else:
95 logger.critical(
96 "Halting compilation on build failure since "
97 + "pass_through_build_failures was specified as True. "
98 + "To return the default Torch implementation and avoid "
99 + "halting compilation on engine build failures, "
100 + "specify pass_through_build_failures=False."
101 )
102 raise
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/py/torch_tensorrt/dynamo/backend/backends.py b/py/torch_tensorrt/dynamo/backend/backends.py
--- a/py/torch_tensorrt/dynamo/backend/backends.py
+++ b/py/torch_tensorrt/dynamo/backend/backends.py
@@ -16,6 +16,7 @@
logger = logging.getLogger(__name__)
[email protected]_backend(name="tensorrt") # type: ignore[misc]
@td.register_backend(name="torch_tensorrt") # type: ignore[misc]
def torch_tensorrt_backend(
gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any
| {"golden_diff": "diff --git a/py/torch_tensorrt/dynamo/backend/backends.py b/py/torch_tensorrt/dynamo/backend/backends.py\n--- a/py/torch_tensorrt/dynamo/backend/backends.py\n+++ b/py/torch_tensorrt/dynamo/backend/backends.py\n@@ -16,6 +16,7 @@\n logger = logging.getLogger(__name__)\n \n \[email protected]_backend(name=\"tensorrt\") # type: ignore[misc]\n @td.register_backend(name=\"torch_tensorrt\") # type: ignore[misc]\n def torch_tensorrt_backend(\n gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any\n", "issue": "Upstream Dynamo Backend to Torch\n- Add a hook in the Torch repo to secure the namespace `\"tensorrt\"` and have it point to `\"torch_tensorrt\"`\r\n- Add necessary imports and skipped tests\r\n- Raise a PR in Torch to add this functionality\n", "before_files": [{"content": "from __future__ import annotations\n\nimport logging\nfrom functools import partial\nfrom typing import Any, Callable, Sequence\n\nimport torch\nimport torch._dynamo as td\nfrom torch._functorch.aot_autograd import aot_module_simplified, make_boxed_compiler\nfrom torch_tensorrt.dynamo import CompilationSettings\nfrom torch_tensorrt.dynamo.compile import compile_module\nfrom torch_tensorrt.dynamo.lowering._decompositions import get_decompositions\nfrom torch_tensorrt.dynamo.lowering._pre_aot_lowering import pre_aot_substitutions\nfrom torch_tensorrt.dynamo.utils import parse_dynamo_kwargs\n\nlogger = logging.getLogger(__name__)\n\n\[email protected]_backend(name=\"torch_tensorrt\") # type: ignore[misc]\ndef torch_tensorrt_backend(\n gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any\n) -> torch.nn.Module:\n # Set log level at the top of compilation (torch_tensorrt.dynamo)\n if (\n (\n \"options\" in kwargs\n and \"debug\" in kwargs[\"options\"]\n and kwargs[\"options\"][\"debug\"]\n )\n or (\"debug\" in kwargs and kwargs[\"debug\"])\n ) and logger.parent:\n logger.parent.setLevel(logging.DEBUG)\n\n DEFAULT_BACKEND = aot_torch_tensorrt_aten_backend\n\n compiled_mod: torch.nn.Module = DEFAULT_BACKEND(gm, sample_inputs, **kwargs)\n return compiled_mod\n\n\[email protected]_backend(name=\"aot_torch_tensorrt_aten\") # type: ignore[misc]\ndef aot_torch_tensorrt_aten_backend(\n gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any\n) -> torch.nn.Module:\n settings = parse_dynamo_kwargs(kwargs)\n\n custom_backend = partial(\n _pretraced_backend,\n settings=settings,\n )\n\n # Perform Pre-AOT Lowering for Module-Level Replacement\n gm = pre_aot_substitutions(gm)\n\n # Invoke AOTAutograd to translate operators to aten\n return aot_module_simplified(\n gm,\n sample_inputs,\n fw_compiler=make_boxed_compiler(custom_backend),\n decompositions=get_decompositions(settings.enable_experimental_decompositions),\n )\n\n\ndef _pretraced_backend(\n gm: torch.fx.GraphModule,\n sample_inputs: Sequence[torch.Tensor],\n settings: CompilationSettings = CompilationSettings(),\n) -> torch.fx.GraphModule | Callable[..., Any]:\n \"\"\"Helper function to manage translation of traced FX module to TRT engines\n\n Args:\n module: FX GraphModule to convert\n inputs: Inputs to the module\n settings: Compilation settings\n Returns:\n Compiled FX GraphModule\n \"\"\"\n try:\n logger.debug(\"Post-AOT Autograd graph:\\n\" + str(gm.graph))\n\n trt_compiled = compile_module(\n gm,\n sample_inputs,\n settings=settings,\n )\n return trt_compiled\n except AssertionError:\n if not settings.pass_through_build_failures:\n logger.warning(\n \"TRT conversion failed on the 
subgraph. See trace above. \"\n + \"Returning GraphModule forward instead.\",\n exc_info=True,\n )\n return gm.forward\n else:\n logger.critical(\n \"Halting compilation on build failure since \"\n + \"pass_through_build_failures was specified as True. \"\n + \"To return the default Torch implementation and avoid \"\n + \"halting compilation on engine build failures, \"\n + \"specify pass_through_build_failures=False.\"\n )\n raise\n", "path": "py/torch_tensorrt/dynamo/backend/backends.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport logging\nfrom functools import partial\nfrom typing import Any, Callable, Sequence\n\nimport torch\nimport torch._dynamo as td\nfrom torch._functorch.aot_autograd import aot_module_simplified, make_boxed_compiler\nfrom torch_tensorrt.dynamo import CompilationSettings\nfrom torch_tensorrt.dynamo.compile import compile_module\nfrom torch_tensorrt.dynamo.lowering._decompositions import get_decompositions\nfrom torch_tensorrt.dynamo.lowering._pre_aot_lowering import pre_aot_substitutions\nfrom torch_tensorrt.dynamo.utils import parse_dynamo_kwargs\n\nlogger = logging.getLogger(__name__)\n\n\[email protected]_backend(name=\"tensorrt\") # type: ignore[misc]\[email protected]_backend(name=\"torch_tensorrt\") # type: ignore[misc]\ndef torch_tensorrt_backend(\n gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any\n) -> torch.nn.Module:\n # Set log level at the top of compilation (torch_tensorrt.dynamo)\n if (\n (\n \"options\" in kwargs\n and \"debug\" in kwargs[\"options\"]\n and kwargs[\"options\"][\"debug\"]\n )\n or (\"debug\" in kwargs and kwargs[\"debug\"])\n ) and logger.parent:\n logger.parent.setLevel(logging.DEBUG)\n\n DEFAULT_BACKEND = aot_torch_tensorrt_aten_backend\n\n compiled_mod: torch.nn.Module = DEFAULT_BACKEND(gm, sample_inputs, **kwargs)\n return compiled_mod\n\n\[email protected]_backend(name=\"aot_torch_tensorrt_aten\") # type: ignore[misc]\ndef aot_torch_tensorrt_aten_backend(\n gm: torch.fx.GraphModule, sample_inputs: Sequence[torch.Tensor], **kwargs: Any\n) -> torch.nn.Module:\n settings = parse_dynamo_kwargs(kwargs)\n\n custom_backend = partial(\n _pretraced_backend,\n settings=settings,\n )\n\n # Perform Pre-AOT Lowering for Module-Level Replacement\n gm = pre_aot_substitutions(gm)\n\n # Invoke AOTAutograd to translate operators to aten\n return aot_module_simplified(\n gm,\n sample_inputs,\n fw_compiler=make_boxed_compiler(custom_backend),\n decompositions=get_decompositions(settings.enable_experimental_decompositions),\n )\n\n\ndef _pretraced_backend(\n gm: torch.fx.GraphModule,\n sample_inputs: Sequence[torch.Tensor],\n settings: CompilationSettings = CompilationSettings(),\n) -> torch.fx.GraphModule | Callable[..., Any]:\n \"\"\"Helper function to manage translation of traced FX module to TRT engines\n\n Args:\n module: FX GraphModule to convert\n inputs: Inputs to the module\n settings: Compilation settings\n Returns:\n Compiled FX GraphModule\n \"\"\"\n try:\n logger.debug(\"Post-AOT Autograd graph:\\n\" + str(gm.graph))\n\n trt_compiled = compile_module(\n gm,\n sample_inputs,\n settings=settings,\n )\n return trt_compiled\n except AssertionError:\n if not settings.pass_through_build_failures:\n logger.warning(\n \"TRT conversion failed on the subgraph. See trace above. 
\"\n + \"Returning GraphModule forward instead.\",\n exc_info=True,\n )\n return gm.forward\n else:\n logger.critical(\n \"Halting compilation on build failure since \"\n + \"pass_through_build_failures was specified as True. \"\n + \"To return the default Torch implementation and avoid \"\n + \"halting compilation on engine build failures, \"\n + \"specify pass_through_build_failures=False.\"\n )\n raise\n", "path": "py/torch_tensorrt/dynamo/backend/backends.py"}]} | 1,281 | 139 |
gh_patches_debug_782 | rasdani/github-patches | git_diff | safe-global__safe-config-service-76 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Serve static files with Nginx
When running the application with Nginx as reverse-proxy, static files (such as Admin CSS) are not correctly collected and served
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/config/settings.py`
Content:
```
1 """
2 Django settings for safe_client_config_service project.
3
4 Generated by 'django-admin startproject' using Django 3.2.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.2/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/3.2/ref/settings/
11 """
12 import os
13 from distutils.util import strtobool
14 from pathlib import Path
15
16 # Build paths inside the project like this: BASE_DIR / 'subdir'.
17 BASE_DIR = Path(__file__).resolve().parent.parent
18
19 # Quick-start development settings - unsuitable for production
20 # See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/
21
22 # SECURITY WARNING: keep the secret key used in production secret!
23 SECRET_KEY = os.getenv("SECRET_KEY", None)
24
25 # SECURITY WARNING: don't run with debug turned on in production!
26 DEBUG = bool(strtobool(os.getenv("DEBUG", "false")))
27
28 # https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS
29 allowed_hosts = os.getenv("DJANGO_ALLOWED_HOSTS", ".localhost,127.0.0.1,[::1]")
30 ALLOWED_HOSTS = [allowed_host.strip() for allowed_host in allowed_hosts.split(",")]
31
32 # Application definition
33
34 REST_FRAMEWORK = {
35 # https://www.django-rest-framework.org/api-guide/renderers/
36 "DEFAULT_RENDERER_CLASSES": [
37 "djangorestframework_camel_case.render.CamelCaseJSONRenderer",
38 ]
39 }
40
41 INSTALLED_APPS = [
42 "safe_apps.apps.AppsConfig",
43 "django.contrib.admin",
44 "django.contrib.auth",
45 "django.contrib.contenttypes",
46 "django.contrib.sessions",
47 "django.contrib.messages",
48 "django.contrib.staticfiles",
49 "rest_framework",
50 ]
51
52 MIDDLEWARE = [
53 "config.middleware.LoggingMiddleware",
54 "django.middleware.security.SecurityMiddleware",
55 "django.contrib.sessions.middleware.SessionMiddleware",
56 "django.middleware.common.CommonMiddleware",
57 "django.middleware.csrf.CsrfViewMiddleware",
58 "django.contrib.auth.middleware.AuthenticationMiddleware",
59 "django.contrib.messages.middleware.MessageMiddleware",
60 "django.middleware.clickjacking.XFrameOptionsMiddleware",
61 ]
62
63 CACHES = {
64 "default": {
65 "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
66 },
67 "safe-apps": {
68 "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
69 },
70 }
71
72 LOGGING = {
73 "version": 1,
74 "disable_existing_loggers": False,
75 "formatters": {
76 "short": {"format": "%(asctime)s %(message)s"},
77 "verbose": {
78 "format": "%(asctime)s [%(levelname)s] [%(processName)s] %(message)s"
79 },
80 },
81 "handlers": {
82 "console": {
83 "class": "logging.StreamHandler",
84 "formatter": "verbose",
85 },
86 "console_short": {
87 "class": "logging.StreamHandler",
88 "formatter": "short",
89 },
90 },
91 "root": {
92 "handlers": ["console"],
93 "level": os.getenv("ROOT_LOG_LEVEL", "INFO"),
94 },
95 "loggers": {
96 "LoggingMiddleware": {
97 "handlers": ["console_short"],
98 "level": "INFO",
99 "propagate": False,
100 },
101 },
102 }
103
104 ROOT_URLCONF = "config.urls"
105
106 TEMPLATES = [
107 {
108 "BACKEND": "django.template.backends.django.DjangoTemplates",
109 "DIRS": [],
110 "APP_DIRS": True,
111 "OPTIONS": {
112 "context_processors": [
113 "django.template.context_processors.debug",
114 "django.template.context_processors.request",
115 "django.contrib.auth.context_processors.auth",
116 "django.contrib.messages.context_processors.messages",
117 ],
118 },
119 },
120 ]
121
122 WSGI_APPLICATION = "config.wsgi.application"
123
124 # Database
125 # https://docs.djangoproject.com/en/3.2/ref/settings/#databases
126
127 DATABASES = {
128 "default": {
129 "ENGINE": "django.db.backends.postgresql",
130 "NAME": os.getenv("POSTGRES_NAME", "postgres"),
131 "USER": os.getenv("POSTGRES_USER", "postgres"),
132 "PASSWORD": os.getenv("POSTGRES_PASSWORD", "postgres"),
133 "HOST": os.getenv("POSTGRES_HOST", "db"),
134 "PORT": os.getenv("POSTGRES_PORT", "5432"),
135 }
136 }
137
138 # Password validation
139 # https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators
140
141 AUTH_PASSWORD_VALIDATORS = [
142 {
143 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
144 },
145 {
146 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
147 },
148 {
149 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
150 },
151 {
152 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
153 },
154 ]
155
156 # Internationalization
157 # https://docs.djangoproject.com/en/3.2/topics/i18n/
158
159 LANGUAGE_CODE = "en-us"
160
161 TIME_ZONE = "UTC"
162
163 USE_I18N = True
164
165 USE_L10N = True
166
167 USE_TZ = True
168
169 # Static files (CSS, JavaScript, Images)
170 # https://docs.djangoproject.com/en/3.2/howto/static-files/
171
172 STATIC_URL = "/static/"
173
174 # Default primary key field type
175 # https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field
176
177 DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/config/settings.py b/src/config/settings.py
--- a/src/config/settings.py
+++ b/src/config/settings.py
@@ -171,6 +171,8 @@
STATIC_URL = "/static/"
+STATIC_ROOT = "staticfiles"
+
# Default primary key field type
# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field
| {"golden_diff": "diff --git a/src/config/settings.py b/src/config/settings.py\n--- a/src/config/settings.py\n+++ b/src/config/settings.py\n@@ -171,6 +171,8 @@\n \n STATIC_URL = \"/static/\"\n \n+STATIC_ROOT = \"staticfiles\"\n+\n # Default primary key field type\n # https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field\n", "issue": "Serve static files with Nginx\nWhen running the application with Nginx as reverse-proxy, static files (such as Admin CSS) are not correctly collected and served\n", "before_files": [{"content": "\"\"\"\nDjango settings for safe_client_config_service project.\n\nGenerated by 'django-admin startproject' using Django 3.2.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.2/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.2/ref/settings/\n\"\"\"\nimport os\nfrom distutils.util import strtobool\nfrom pathlib import Path\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.getenv(\"SECRET_KEY\", None)\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = bool(strtobool(os.getenv(\"DEBUG\", \"false\")))\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS\nallowed_hosts = os.getenv(\"DJANGO_ALLOWED_HOSTS\", \".localhost,127.0.0.1,[::1]\")\nALLOWED_HOSTS = [allowed_host.strip() for allowed_host in allowed_hosts.split(\",\")]\n\n# Application definition\n\nREST_FRAMEWORK = {\n # https://www.django-rest-framework.org/api-guide/renderers/\n \"DEFAULT_RENDERER_CLASSES\": [\n \"djangorestframework_camel_case.render.CamelCaseJSONRenderer\",\n ]\n}\n\nINSTALLED_APPS = [\n \"safe_apps.apps.AppsConfig\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n]\n\nMIDDLEWARE = [\n \"config.middleware.LoggingMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nCACHES = {\n \"default\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n \"safe-apps\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n}\n\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"short\": {\"format\": \"%(asctime)s %(message)s\"},\n \"verbose\": {\n \"format\": \"%(asctime)s [%(levelname)s] [%(processName)s] %(message)s\"\n },\n },\n \"handlers\": {\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n },\n \"console_short\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"short\",\n },\n },\n \"root\": {\n \"handlers\": [\"console\"],\n \"level\": os.getenv(\"ROOT_LOG_LEVEL\", \"INFO\"),\n },\n \"loggers\": {\n \"LoggingMiddleware\": {\n \"handlers\": [\"console_short\"],\n \"level\": \"INFO\",\n \"propagate\": 
False,\n },\n },\n}\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": os.getenv(\"POSTGRES_NAME\", \"postgres\"),\n \"USER\": os.getenv(\"POSTGRES_USER\", \"postgres\"),\n \"PASSWORD\": os.getenv(\"POSTGRES_PASSWORD\", \"postgres\"),\n \"HOST\": os.getenv(\"POSTGRES_HOST\", \"db\"),\n \"PORT\": os.getenv(\"POSTGRES_PORT\", \"5432\"),\n }\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.2/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.2/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\n# Default primary key field type\n# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field\n\nDEFAULT_AUTO_FIELD = \"django.db.models.BigAutoField\"\n", "path": "src/config/settings.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for safe_client_config_service project.\n\nGenerated by 'django-admin startproject' using Django 3.2.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.2/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.2/ref/settings/\n\"\"\"\nimport os\nfrom distutils.util import strtobool\nfrom pathlib import Path\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.getenv(\"SECRET_KEY\", None)\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = bool(strtobool(os.getenv(\"DEBUG\", \"false\")))\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS\nallowed_hosts = os.getenv(\"DJANGO_ALLOWED_HOSTS\", \".localhost,127.0.0.1,[::1]\")\nALLOWED_HOSTS = [allowed_host.strip() for allowed_host in allowed_hosts.split(\",\")]\n\n# Application definition\n\nREST_FRAMEWORK = {\n # https://www.django-rest-framework.org/api-guide/renderers/\n \"DEFAULT_RENDERER_CLASSES\": [\n \"djangorestframework_camel_case.render.CamelCaseJSONRenderer\",\n ]\n}\n\nINSTALLED_APPS = [\n \"safe_apps.apps.AppsConfig\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n 
\"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n]\n\nMIDDLEWARE = [\n \"config.middleware.LoggingMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nCACHES = {\n \"default\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n \"safe-apps\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n}\n\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"short\": {\"format\": \"%(asctime)s %(message)s\"},\n \"verbose\": {\n \"format\": \"%(asctime)s [%(levelname)s] [%(processName)s] %(message)s\"\n },\n },\n \"handlers\": {\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n },\n \"console_short\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"short\",\n },\n },\n \"root\": {\n \"handlers\": [\"console\"],\n \"level\": os.getenv(\"ROOT_LOG_LEVEL\", \"INFO\"),\n },\n \"loggers\": {\n \"LoggingMiddleware\": {\n \"handlers\": [\"console_short\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n },\n}\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": os.getenv(\"POSTGRES_NAME\", \"postgres\"),\n \"USER\": os.getenv(\"POSTGRES_USER\", \"postgres\"),\n \"PASSWORD\": os.getenv(\"POSTGRES_PASSWORD\", \"postgres\"),\n \"HOST\": os.getenv(\"POSTGRES_HOST\", \"db\"),\n \"PORT\": os.getenv(\"POSTGRES_PORT\", \"5432\"),\n }\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.2/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.2/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\nSTATIC_ROOT = \"staticfiles\"\n\n# Default primary key field type\n# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field\n\nDEFAULT_AUTO_FIELD = \"django.db.models.BigAutoField\"\n", "path": "src/config/settings.py"}]} | 1,910 | 84 |
gh_patches_debug_16886 | rasdani/github-patches | git_diff | urllib3__urllib3-2335 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add decode_content parameter to top-level APIs
Like the title says, add `decode_content` to the top-level APIs `urllib3.request()`.
See https://github.com/urllib3/urllib3/commit/ddf7361ac0467431a2f3df6ba346c9c506c29d56 for an example.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/urllib3/__init__.py`
Content:
```
1 """
2 Python HTTP library with thread-safe connection pooling, file post support, user friendly, and more
3 """
4
5 # Set default logging handler to avoid "No handler found" warnings.
6 import logging
7 import warnings
8 from logging import NullHandler
9 from typing import Mapping, Optional, Type, Union
10
11 from . import exceptions
12 from ._collections import HTTPHeaderDict
13 from ._version import __version__
14 from .connection import _TYPE_BODY
15 from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, connection_from_url
16 from .filepost import _TYPE_FIELDS, encode_multipart_formdata
17 from .poolmanager import PoolManager, ProxyManager, proxy_from_url
18 from .response import BaseHTTPResponse, HTTPResponse
19 from .util.request import make_headers
20 from .util.retry import Retry
21 from .util.timeout import Timeout
22
23 __author__ = "Andrey Petrov ([email protected])"
24 __license__ = "MIT"
25 __version__ = __version__
26
27 __all__ = (
28 "HTTPConnectionPool",
29 "HTTPHeaderDict",
30 "HTTPSConnectionPool",
31 "PoolManager",
32 "ProxyManager",
33 "HTTPResponse",
34 "Retry",
35 "Timeout",
36 "add_stderr_logger",
37 "connection_from_url",
38 "disable_warnings",
39 "encode_multipart_formdata",
40 "make_headers",
41 "proxy_from_url",
42 "request",
43 )
44
45 logging.getLogger(__name__).addHandler(NullHandler())
46
47
48 def add_stderr_logger(level: int = logging.DEBUG) -> logging.StreamHandler:
49 """
50 Helper for quickly adding a StreamHandler to the logger. Useful for
51 debugging.
52
53 Returns the handler after adding it.
54 """
55 # This method needs to be in this __init__.py to get the __name__ correct
56 # even if urllib3 is vendored within another package.
57 logger = logging.getLogger(__name__)
58 handler = logging.StreamHandler()
59 handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
60 logger.addHandler(handler)
61 logger.setLevel(level)
62 logger.debug("Added a stderr logging handler to logger: %s", __name__)
63 return handler
64
65
66 # ... Clean up.
67 del NullHandler
68
69
70 # All warning filters *must* be appended unless you're really certain that they
71 # shouldn't be: otherwise, it's very hard for users to use most Python
72 # mechanisms to silence them.
73 # SecurityWarning's always go off by default.
74 warnings.simplefilter("always", exceptions.SecurityWarning, append=True)
75 # InsecurePlatformWarning's don't vary between requests, so we keep it default.
76 warnings.simplefilter("default", exceptions.InsecurePlatformWarning, append=True)
77 # SNIMissingWarnings should go off only once.
78 warnings.simplefilter("default", exceptions.SNIMissingWarning, append=True)
79
80
81 def disable_warnings(category: Type[Warning] = exceptions.HTTPWarning) -> None:
82 """
83 Helper for quickly disabling all urllib3 warnings.
84 """
85 warnings.simplefilter("ignore", category)
86
87
88 _DEFAULT_POOL = PoolManager()
89
90
91 def request(
92 method: str,
93 url: str,
94 *,
95 body: Optional[_TYPE_BODY] = None,
96 fields: Optional[_TYPE_FIELDS] = None,
97 headers: Optional[Mapping[str, str]] = None,
98 preload_content: Optional[bool] = True,
99 redirect: Optional[bool] = True,
100 retries: Optional[Union[Retry, bool, int]] = None,
101 timeout: Optional[Union[Timeout, float, int]] = 3,
102 ) -> BaseHTTPResponse:
103 """
104 A convenience, top-level request method. It uses a module-global ``PoolManager`` instance.
105 Therefore, its side effects could be shared across dependencies relying on it.
106 To avoid side effects create a new ``PoolManager`` instance and use it instead.
107 The method does not accept low-level ``**urlopen_kw`` keyword arguments.
108 """
109
110 return _DEFAULT_POOL.request(
111 method,
112 url,
113 body=body,
114 fields=fields,
115 headers=headers,
116 preload_content=preload_content,
117 redirect=redirect,
118 retries=retries,
119 timeout=timeout,
120 )
121
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/urllib3/__init__.py b/src/urllib3/__init__.py
--- a/src/urllib3/__init__.py
+++ b/src/urllib3/__init__.py
@@ -96,6 +96,7 @@
fields: Optional[_TYPE_FIELDS] = None,
headers: Optional[Mapping[str, str]] = None,
preload_content: Optional[bool] = True,
+ decode_content: Optional[bool] = True,
redirect: Optional[bool] = True,
retries: Optional[Union[Retry, bool, int]] = None,
timeout: Optional[Union[Timeout, float, int]] = 3,
@@ -114,6 +115,7 @@
fields=fields,
headers=headers,
preload_content=preload_content,
+ decode_content=decode_content,
redirect=redirect,
retries=retries,
timeout=timeout,
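For reference, a minimal sketch of how the new keyword is exercised once this patch is applied; the URL below is a placeholder, not part of the patch:

```python
import urllib3

# decode_content=False keeps the body exactly as the server sent it
# (e.g. still gzip-compressed) instead of transparently decoding it.
resp = urllib3.request(
    "GET",
    "https://example.com/data.json.gz",  # placeholder URL
    decode_content=False,
)
raw_bytes = resp.data  # undecoded payload
```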
| {"golden_diff": "diff --git a/src/urllib3/__init__.py b/src/urllib3/__init__.py\n--- a/src/urllib3/__init__.py\n+++ b/src/urllib3/__init__.py\n@@ -96,6 +96,7 @@\n fields: Optional[_TYPE_FIELDS] = None,\n headers: Optional[Mapping[str, str]] = None,\n preload_content: Optional[bool] = True,\n+ decode_content: Optional[bool] = True,\n redirect: Optional[bool] = True,\n retries: Optional[Union[Retry, bool, int]] = None,\n timeout: Optional[Union[Timeout, float, int]] = 3,\n@@ -114,6 +115,7 @@\n fields=fields,\n headers=headers,\n preload_content=preload_content,\n+ decode_content=decode_content,\n redirect=redirect,\n retries=retries,\n timeout=timeout,\n", "issue": "Add decode_content parameter to top-level APIs\nLike the title says, add `decode_content` to the top-level APIs `urllib3.request()`.\r\n\r\nSee https://github.com/urllib3/urllib3/commit/ddf7361ac0467431a2f3df6ba346c9c506c29d56 for an example.\n", "before_files": [{"content": "\"\"\"\nPython HTTP library with thread-safe connection pooling, file post support, user friendly, and more\n\"\"\"\n\n# Set default logging handler to avoid \"No handler found\" warnings.\nimport logging\nimport warnings\nfrom logging import NullHandler\nfrom typing import Mapping, Optional, Type, Union\n\nfrom . import exceptions\nfrom ._collections import HTTPHeaderDict\nfrom ._version import __version__\nfrom .connection import _TYPE_BODY\nfrom .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, connection_from_url\nfrom .filepost import _TYPE_FIELDS, encode_multipart_formdata\nfrom .poolmanager import PoolManager, ProxyManager, proxy_from_url\nfrom .response import BaseHTTPResponse, HTTPResponse\nfrom .util.request import make_headers\nfrom .util.retry import Retry\nfrom .util.timeout import Timeout\n\n__author__ = \"Andrey Petrov ([email protected])\"\n__license__ = \"MIT\"\n__version__ = __version__\n\n__all__ = (\n \"HTTPConnectionPool\",\n \"HTTPHeaderDict\",\n \"HTTPSConnectionPool\",\n \"PoolManager\",\n \"ProxyManager\",\n \"HTTPResponse\",\n \"Retry\",\n \"Timeout\",\n \"add_stderr_logger\",\n \"connection_from_url\",\n \"disable_warnings\",\n \"encode_multipart_formdata\",\n \"make_headers\",\n \"proxy_from_url\",\n \"request\",\n)\n\nlogging.getLogger(__name__).addHandler(NullHandler())\n\n\ndef add_stderr_logger(level: int = logging.DEBUG) -> logging.StreamHandler:\n \"\"\"\n Helper for quickly adding a StreamHandler to the logger. Useful for\n debugging.\n\n Returns the handler after adding it.\n \"\"\"\n # This method needs to be in this __init__.py to get the __name__ correct\n # even if urllib3 is vendored within another package.\n logger = logging.getLogger(__name__)\n handler = logging.StreamHandler()\n handler.setFormatter(logging.Formatter(\"%(asctime)s %(levelname)s %(message)s\"))\n logger.addHandler(handler)\n logger.setLevel(level)\n logger.debug(\"Added a stderr logging handler to logger: %s\", __name__)\n return handler\n\n\n# ... 
Clean up.\ndel NullHandler\n\n\n# All warning filters *must* be appended unless you're really certain that they\n# shouldn't be: otherwise, it's very hard for users to use most Python\n# mechanisms to silence them.\n# SecurityWarning's always go off by default.\nwarnings.simplefilter(\"always\", exceptions.SecurityWarning, append=True)\n# InsecurePlatformWarning's don't vary between requests, so we keep it default.\nwarnings.simplefilter(\"default\", exceptions.InsecurePlatformWarning, append=True)\n# SNIMissingWarnings should go off only once.\nwarnings.simplefilter(\"default\", exceptions.SNIMissingWarning, append=True)\n\n\ndef disable_warnings(category: Type[Warning] = exceptions.HTTPWarning) -> None:\n \"\"\"\n Helper for quickly disabling all urllib3 warnings.\n \"\"\"\n warnings.simplefilter(\"ignore\", category)\n\n\n_DEFAULT_POOL = PoolManager()\n\n\ndef request(\n method: str,\n url: str,\n *,\n body: Optional[_TYPE_BODY] = None,\n fields: Optional[_TYPE_FIELDS] = None,\n headers: Optional[Mapping[str, str]] = None,\n preload_content: Optional[bool] = True,\n redirect: Optional[bool] = True,\n retries: Optional[Union[Retry, bool, int]] = None,\n timeout: Optional[Union[Timeout, float, int]] = 3,\n) -> BaseHTTPResponse:\n \"\"\"\n A convenience, top-level request method. It uses a module-global ``PoolManager`` instance.\n Therefore, its side effects could be shared across dependencies relying on it.\n To avoid side effects create a new ``PoolManager`` instance and use it instead.\n The method does not accept low-level ``**urlopen_kw`` keyword arguments.\n \"\"\"\n\n return _DEFAULT_POOL.request(\n method,\n url,\n body=body,\n fields=fields,\n headers=headers,\n preload_content=preload_content,\n redirect=redirect,\n retries=retries,\n timeout=timeout,\n )\n", "path": "src/urllib3/__init__.py"}], "after_files": [{"content": "\"\"\"\nPython HTTP library with thread-safe connection pooling, file post support, user friendly, and more\n\"\"\"\n\n# Set default logging handler to avoid \"No handler found\" warnings.\nimport logging\nimport warnings\nfrom logging import NullHandler\nfrom typing import Mapping, Optional, Type, Union\n\nfrom . import exceptions\nfrom ._collections import HTTPHeaderDict\nfrom ._version import __version__\nfrom .connection import _TYPE_BODY\nfrom .connectionpool import HTTPConnectionPool, HTTPSConnectionPool, connection_from_url\nfrom .filepost import _TYPE_FIELDS, encode_multipart_formdata\nfrom .poolmanager import PoolManager, ProxyManager, proxy_from_url\nfrom .response import BaseHTTPResponse, HTTPResponse\nfrom .util.request import make_headers\nfrom .util.retry import Retry\nfrom .util.timeout import Timeout\n\n__author__ = \"Andrey Petrov ([email protected])\"\n__license__ = \"MIT\"\n__version__ = __version__\n\n__all__ = (\n \"HTTPConnectionPool\",\n \"HTTPHeaderDict\",\n \"HTTPSConnectionPool\",\n \"PoolManager\",\n \"ProxyManager\",\n \"HTTPResponse\",\n \"Retry\",\n \"Timeout\",\n \"add_stderr_logger\",\n \"connection_from_url\",\n \"disable_warnings\",\n \"encode_multipart_formdata\",\n \"make_headers\",\n \"proxy_from_url\",\n \"request\",\n)\n\nlogging.getLogger(__name__).addHandler(NullHandler())\n\n\ndef add_stderr_logger(level: int = logging.DEBUG) -> logging.StreamHandler:\n \"\"\"\n Helper for quickly adding a StreamHandler to the logger. 
Useful for\n debugging.\n\n Returns the handler after adding it.\n \"\"\"\n # This method needs to be in this __init__.py to get the __name__ correct\n # even if urllib3 is vendored within another package.\n logger = logging.getLogger(__name__)\n handler = logging.StreamHandler()\n handler.setFormatter(logging.Formatter(\"%(asctime)s %(levelname)s %(message)s\"))\n logger.addHandler(handler)\n logger.setLevel(level)\n logger.debug(\"Added a stderr logging handler to logger: %s\", __name__)\n return handler\n\n\n# ... Clean up.\ndel NullHandler\n\n\n# All warning filters *must* be appended unless you're really certain that they\n# shouldn't be: otherwise, it's very hard for users to use most Python\n# mechanisms to silence them.\n# SecurityWarning's always go off by default.\nwarnings.simplefilter(\"always\", exceptions.SecurityWarning, append=True)\n# InsecurePlatformWarning's don't vary between requests, so we keep it default.\nwarnings.simplefilter(\"default\", exceptions.InsecurePlatformWarning, append=True)\n# SNIMissingWarnings should go off only once.\nwarnings.simplefilter(\"default\", exceptions.SNIMissingWarning, append=True)\n\n\ndef disable_warnings(category: Type[Warning] = exceptions.HTTPWarning) -> None:\n \"\"\"\n Helper for quickly disabling all urllib3 warnings.\n \"\"\"\n warnings.simplefilter(\"ignore\", category)\n\n\n_DEFAULT_POOL = PoolManager()\n\n\ndef request(\n method: str,\n url: str,\n *,\n body: Optional[_TYPE_BODY] = None,\n fields: Optional[_TYPE_FIELDS] = None,\n headers: Optional[Mapping[str, str]] = None,\n preload_content: Optional[bool] = True,\n decode_content: Optional[bool] = True,\n redirect: Optional[bool] = True,\n retries: Optional[Union[Retry, bool, int]] = None,\n timeout: Optional[Union[Timeout, float, int]] = 3,\n) -> BaseHTTPResponse:\n \"\"\"\n A convenience, top-level request method. It uses a module-global ``PoolManager`` instance.\n Therefore, its side effects could be shared across dependencies relying on it.\n To avoid side effects create a new ``PoolManager`` instance and use it instead.\n The method does not accept low-level ``**urlopen_kw`` keyword arguments.\n \"\"\"\n\n return _DEFAULT_POOL.request(\n method,\n url,\n body=body,\n fields=fields,\n headers=headers,\n preload_content=preload_content,\n decode_content=decode_content,\n redirect=redirect,\n retries=retries,\n timeout=timeout,\n )\n", "path": "src/urllib3/__init__.py"}]} | 1,480 | 203 |
gh_patches_debug_24800 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1493 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Need help with generating GUID/UUID values for context variables
* Cookiecutter version: 1.6
* Template project url: none
* Python version: 3.7 (virtual env created using win python 3.7 x64)
* Operating System: Windows 10, 64 Bit
### Description:
First off, many thanks for creating this project!
Here is some context of what I am trying to do and where I need some guidance
* I am trying to use CC to generate a new Visual Studio 2017 solution and project files with a particular folder/file organization that I like
* I was able to get most of it working except for the below:
* Parts of the above project and solution files involve generating several unique GUIDs
* my first approach was creating a `pre_gen_project.py` inside the `hooks` folder and updating/creating new variables that could be added to the ones loaded from `cookiecutter.json` or entered by the user
* I was, however, blocked as I could not figure out how to access the context being used by CC and the jinja2 engine
* I proceeded to go over the many issues on GitHub and found some related ones like the following: #60, #102, #180, #288, but no clear answer on how to achieve what I'd like
* I also followed some other issues that suggested creating a custom jinja2 extension/filter (#944) but I couldn't figure out how or where to put them in the template folder so that cookiecutter.exe can identify them and pick them up
* Lastly, I also tried going over the CC source code and tried to create a new executable from my script (similar to `cli.py`) that passes the GUIDs via the `extra_context` to `cookiecutter.main(...)`, but ran into some other problems that I am still trying to figure out
Appreciate any pointers on how I can inject GUID values for the context variables
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/extensions.py`
Content:
```
1 """Jinja2 extensions."""
2 import json
3 import string
4 from secrets import choice
5
6 from jinja2.ext import Extension
7 from slugify import slugify as pyslugify
8
9
10 class JsonifyExtension(Extension):
11 """Jinja2 extension to convert a Python object to JSON."""
12
13 def __init__(self, environment):
14 """Initialize the extension with the given environment."""
15 super(JsonifyExtension, self).__init__(environment)
16
17 def jsonify(obj):
18 return json.dumps(obj, sort_keys=True, indent=4)
19
20 environment.filters['jsonify'] = jsonify
21
22
23 class RandomStringExtension(Extension):
24 """Jinja2 extension to create a random string."""
25
26 def __init__(self, environment):
27 """Jinja2 Extension Constructor."""
28 super(RandomStringExtension, self).__init__(environment)
29
30 def random_ascii_string(length, punctuation=False):
31 if punctuation:
32 corpus = "".join((string.ascii_letters, string.punctuation))
33 else:
34 corpus = string.ascii_letters
35 return "".join(choice(corpus) for _ in range(length))
36
37 environment.globals.update(random_ascii_string=random_ascii_string)
38
39
40 class SlugifyExtension(Extension):
41 """Jinja2 Extension to slugify string."""
42
43 def __init__(self, environment):
44 """Jinja2 Extension constructor."""
45 super(SlugifyExtension, self).__init__(environment)
46
47 def slugify(value, **kwargs):
48 """Slugifies the value."""
49 return pyslugify(value, **kwargs)
50
51 environment.filters['slugify'] = slugify
52
```
Path: `cookiecutter/environment.py`
Content:
```
1 """Jinja2 environment and extensions loading."""
2 from jinja2 import Environment, StrictUndefined
3
4 from cookiecutter.exceptions import UnknownExtension
5
6
7 class ExtensionLoaderMixin(object):
8 """Mixin providing sane loading of extensions specified in a given context.
9
10 The context is being extracted from the keyword arguments before calling
11 the next parent class in line of the child.
12 """
13
14 def __init__(self, **kwargs):
15 """Initialize the Jinja2 Environment object while loading extensions.
16
17 Does the following:
18
19 1. Establishes default_extensions (currently just a Time feature)
20 2. Reads extensions set in the cookiecutter.json _extensions key.
21 3. Attempts to load the extensions. Provides useful error if fails.
22 """
23 context = kwargs.pop('context', {})
24
25 default_extensions = [
26 'cookiecutter.extensions.JsonifyExtension',
27 'cookiecutter.extensions.RandomStringExtension',
28 'cookiecutter.extensions.SlugifyExtension',
29 'jinja2_time.TimeExtension',
30 ]
31 extensions = default_extensions + self._read_extensions(context)
32
33 try:
34 super(ExtensionLoaderMixin, self).__init__(extensions=extensions, **kwargs)
35 except ImportError as err:
36 raise UnknownExtension('Unable to load extension: {}'.format(err))
37
38 def _read_extensions(self, context):
39 """Return list of extensions as str to be passed on to the Jinja2 env.
40
41 If context does not contain the relevant info, return an empty
42 list instead.
43 """
44 try:
45 extensions = context['cookiecutter']['_extensions']
46 except KeyError:
47 return []
48 else:
49 return [str(ext) for ext in extensions]
50
51
52 class StrictEnvironment(ExtensionLoaderMixin, Environment):
53 """Create strict Jinja2 environment.
54
55 Jinja2 environment will raise error on undefined variable in template-
56 rendering context.
57 """
58
59 def __init__(self, **kwargs):
60 """Set the standard Cookiecutter StrictEnvironment.
61
62 Also loading extensions defined in cookiecutter.json's _extensions key.
63 """
64 super(StrictEnvironment, self).__init__(undefined=StrictUndefined, **kwargs)
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cookiecutter/environment.py b/cookiecutter/environment.py
--- a/cookiecutter/environment.py
+++ b/cookiecutter/environment.py
@@ -26,6 +26,7 @@
'cookiecutter.extensions.JsonifyExtension',
'cookiecutter.extensions.RandomStringExtension',
'cookiecutter.extensions.SlugifyExtension',
+ 'cookiecutter.extensions.UUIDExtension',
'jinja2_time.TimeExtension',
]
extensions = default_extensions + self._read_extensions(context)
diff --git a/cookiecutter/extensions.py b/cookiecutter/extensions.py
--- a/cookiecutter/extensions.py
+++ b/cookiecutter/extensions.py
@@ -1,6 +1,7 @@
"""Jinja2 extensions."""
import json
import string
+import uuid
from secrets import choice
from jinja2.ext import Extension
@@ -49,3 +50,17 @@
return pyslugify(value, **kwargs)
environment.filters['slugify'] = slugify
+
+
+class UUIDExtension(Extension):
+ """Jinja2 Extension to generate uuid4 string."""
+
+ def __init__(self, environment):
+ """Jinja2 Extension constructor."""
+ super(UUIDExtension, self).__init__(environment)
+
+ def uuid4():
+ """Generate UUID4."""
+ return str(uuid.uuid4())
+
+ environment.globals.update(uuid4=uuid4)
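Assuming the patch above is applied, a minimal sketch of reaching the new global through cookiecutter's own Jinja2 environment (the template string is illustrative):

```python
from cookiecutter.environment import StrictEnvironment

# UUIDExtension is now loaded by default, so uuid4() is available as a
# global in every rendered template -- e.g. a cookiecutter.json entry
# such as "project_guid": "{{ uuid4() }}".
env = StrictEnvironment(context={})
print(env.from_string("{{ uuid4() }}").render())  # prints a random UUID4
```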
| {"golden_diff": "diff --git a/cookiecutter/environment.py b/cookiecutter/environment.py\n--- a/cookiecutter/environment.py\n+++ b/cookiecutter/environment.py\n@@ -26,6 +26,7 @@\n 'cookiecutter.extensions.JsonifyExtension',\n 'cookiecutter.extensions.RandomStringExtension',\n 'cookiecutter.extensions.SlugifyExtension',\n+ 'cookiecutter.extensions.UUIDExtension',\n 'jinja2_time.TimeExtension',\n ]\n extensions = default_extensions + self._read_extensions(context)\ndiff --git a/cookiecutter/extensions.py b/cookiecutter/extensions.py\n--- a/cookiecutter/extensions.py\n+++ b/cookiecutter/extensions.py\n@@ -1,6 +1,7 @@\n \"\"\"Jinja2 extensions.\"\"\"\n import json\n import string\n+import uuid\n from secrets import choice\n \n from jinja2.ext import Extension\n@@ -49,3 +50,17 @@\n return pyslugify(value, **kwargs)\n \n environment.filters['slugify'] = slugify\n+\n+\n+class UUIDExtension(Extension):\n+ \"\"\"Jinja2 Extension to generate uuid4 string.\"\"\"\n+\n+ def __init__(self, environment):\n+ \"\"\"Jinja2 Extension constructor.\"\"\"\n+ super(UUIDExtension, self).__init__(environment)\n+\n+ def uuid4():\n+ \"\"\"Generate UUID4.\"\"\"\n+ return str(uuid.uuid4())\n+\n+ environment.globals.update(uuid4=uuid4)\n", "issue": "Need help with generating GUID/UUID values for context variables\n* Cookiecutter version: 1.6\r\n* Template project url: none\r\n* Python version: 3.7 (virtual env created using win python 3.7 x64)\r\n* Operating System: Windows 10, 64 Bit\r\n\r\n### Description:\r\n\r\nFirst off many thanks for creating this project !\r\nHere is some context of what I am trying to do and where I need some guidance\r\n* I am trying to use CC to generate new a Visual Studio 2017 solution and project files with a particular folder/file organization that I like\r\n* I was able to most of it working but for the below:\r\n* Parts of the above project, solution files involves generating several unique GUIDs\r\n* my first approach was creating a `pre_gen_project.py` inside the `hooks` folder and update/create new variables that could be added to the ones loaded from `cookiecutter.json` or entered by the user\r\n* I was however blocked as I could not figure out how to access the context being used by CC and the jinja2 engine \r\n* I proceeded to go over the many issues on github and found some related ones like the following: #60, #102, #180, #288 but no clear answer on how to achieve what I'd like\r\n* I also followed some others issues that suggested creating custom jinja2 extension/filter (#944) but I couldnt figure out how or where to put them in the template folder so the cookiecutter.exe can identify them and pick them up\r\n* Lastly, I also tried going over the CC source code and tried to create a new executable from my script (similar to `cli.py`) that passes the guids via the `extra_context` to `cookiecutter.main(...)` but ran into some other problems that I am still trying to figure out\r\n\r\nAppreciate any pointers on how I can inject GUID values for the context variables\n", "before_files": [{"content": "\"\"\"Jinja2 extensions.\"\"\"\nimport json\nimport string\nfrom secrets import choice\n\nfrom jinja2.ext import Extension\nfrom slugify import slugify as pyslugify\n\n\nclass JsonifyExtension(Extension):\n \"\"\"Jinja2 extension to convert a Python object to JSON.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Initialize the extension with the given environment.\"\"\"\n super(JsonifyExtension, self).__init__(environment)\n\n def jsonify(obj):\n return json.dumps(obj, 
sort_keys=True, indent=4)\n\n environment.filters['jsonify'] = jsonify\n\n\nclass RandomStringExtension(Extension):\n \"\"\"Jinja2 extension to create a random string.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Jinja2 Extension Constructor.\"\"\"\n super(RandomStringExtension, self).__init__(environment)\n\n def random_ascii_string(length, punctuation=False):\n if punctuation:\n corpus = \"\".join((string.ascii_letters, string.punctuation))\n else:\n corpus = string.ascii_letters\n return \"\".join(choice(corpus) for _ in range(length))\n\n environment.globals.update(random_ascii_string=random_ascii_string)\n\n\nclass SlugifyExtension(Extension):\n \"\"\"Jinja2 Extension to slugify string.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Jinja2 Extension constructor.\"\"\"\n super(SlugifyExtension, self).__init__(environment)\n\n def slugify(value, **kwargs):\n \"\"\"Slugifies the value.\"\"\"\n return pyslugify(value, **kwargs)\n\n environment.filters['slugify'] = slugify\n", "path": "cookiecutter/extensions.py"}, {"content": "\"\"\"Jinja2 environment and extensions loading.\"\"\"\nfrom jinja2 import Environment, StrictUndefined\n\nfrom cookiecutter.exceptions import UnknownExtension\n\n\nclass ExtensionLoaderMixin(object):\n \"\"\"Mixin providing sane loading of extensions specified in a given context.\n\n The context is being extracted from the keyword arguments before calling\n the next parent class in line of the child.\n \"\"\"\n\n def __init__(self, **kwargs):\n \"\"\"Initialize the Jinja2 Environment object while loading extensions.\n\n Does the following:\n\n 1. Establishes default_extensions (currently just a Time feature)\n 2. Reads extensions set in the cookiecutter.json _extensions key.\n 3. Attempts to load the extensions. Provides useful error if fails.\n \"\"\"\n context = kwargs.pop('context', {})\n\n default_extensions = [\n 'cookiecutter.extensions.JsonifyExtension',\n 'cookiecutter.extensions.RandomStringExtension',\n 'cookiecutter.extensions.SlugifyExtension',\n 'jinja2_time.TimeExtension',\n ]\n extensions = default_extensions + self._read_extensions(context)\n\n try:\n super(ExtensionLoaderMixin, self).__init__(extensions=extensions, **kwargs)\n except ImportError as err:\n raise UnknownExtension('Unable to load extension: {}'.format(err))\n\n def _read_extensions(self, context):\n \"\"\"Return list of extensions as str to be passed on to the Jinja2 env.\n\n If context does not contain the relevant info, return an empty\n list instead.\n \"\"\"\n try:\n extensions = context['cookiecutter']['_extensions']\n except KeyError:\n return []\n else:\n return [str(ext) for ext in extensions]\n\n\nclass StrictEnvironment(ExtensionLoaderMixin, Environment):\n \"\"\"Create strict Jinja2 environment.\n\n Jinja2 environment will raise error on undefined variable in template-\n rendering context.\n \"\"\"\n\n def __init__(self, **kwargs):\n \"\"\"Set the standard Cookiecutter StrictEnvironment.\n\n Also loading extensions defined in cookiecutter.json's _extensions key.\n \"\"\"\n super(StrictEnvironment, self).__init__(undefined=StrictUndefined, **kwargs)\n", "path": "cookiecutter/environment.py"}], "after_files": [{"content": "\"\"\"Jinja2 extensions.\"\"\"\nimport json\nimport string\nimport uuid\nfrom secrets import choice\n\nfrom jinja2.ext import Extension\nfrom slugify import slugify as pyslugify\n\n\nclass JsonifyExtension(Extension):\n \"\"\"Jinja2 extension to convert a Python object to JSON.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Initialize the 
extension with the given environment.\"\"\"\n super(JsonifyExtension, self).__init__(environment)\n\n def jsonify(obj):\n return json.dumps(obj, sort_keys=True, indent=4)\n\n environment.filters['jsonify'] = jsonify\n\n\nclass RandomStringExtension(Extension):\n \"\"\"Jinja2 extension to create a random string.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Jinja2 Extension Constructor.\"\"\"\n super(RandomStringExtension, self).__init__(environment)\n\n def random_ascii_string(length, punctuation=False):\n if punctuation:\n corpus = \"\".join((string.ascii_letters, string.punctuation))\n else:\n corpus = string.ascii_letters\n return \"\".join(choice(corpus) for _ in range(length))\n\n environment.globals.update(random_ascii_string=random_ascii_string)\n\n\nclass SlugifyExtension(Extension):\n \"\"\"Jinja2 Extension to slugify string.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Jinja2 Extension constructor.\"\"\"\n super(SlugifyExtension, self).__init__(environment)\n\n def slugify(value, **kwargs):\n \"\"\"Slugifies the value.\"\"\"\n return pyslugify(value, **kwargs)\n\n environment.filters['slugify'] = slugify\n\n\nclass UUIDExtension(Extension):\n \"\"\"Jinja2 Extension to generate uuid4 string.\"\"\"\n\n def __init__(self, environment):\n \"\"\"Jinja2 Extension constructor.\"\"\"\n super(UUIDExtension, self).__init__(environment)\n\n def uuid4():\n \"\"\"Generate UUID4.\"\"\"\n return str(uuid.uuid4())\n\n environment.globals.update(uuid4=uuid4)\n", "path": "cookiecutter/extensions.py"}, {"content": "\"\"\"Jinja2 environment and extensions loading.\"\"\"\nfrom jinja2 import Environment, StrictUndefined\n\nfrom cookiecutter.exceptions import UnknownExtension\n\n\nclass ExtensionLoaderMixin(object):\n \"\"\"Mixin providing sane loading of extensions specified in a given context.\n\n The context is being extracted from the keyword arguments before calling\n the next parent class in line of the child.\n \"\"\"\n\n def __init__(self, **kwargs):\n \"\"\"Initialize the Jinja2 Environment object while loading extensions.\n\n Does the following:\n\n 1. Establishes default_extensions (currently just a Time feature)\n 2. Reads extensions set in the cookiecutter.json _extensions key.\n 3. Attempts to load the extensions. 
Provides useful error if fails.\n \"\"\"\n context = kwargs.pop('context', {})\n\n default_extensions = [\n 'cookiecutter.extensions.JsonifyExtension',\n 'cookiecutter.extensions.RandomStringExtension',\n 'cookiecutter.extensions.SlugifyExtension',\n 'cookiecutter.extensions.UUIDExtension',\n 'jinja2_time.TimeExtension',\n ]\n extensions = default_extensions + self._read_extensions(context)\n\n try:\n super(ExtensionLoaderMixin, self).__init__(extensions=extensions, **kwargs)\n except ImportError as err:\n raise UnknownExtension('Unable to load extension: {}'.format(err))\n\n def _read_extensions(self, context):\n \"\"\"Return list of extensions as str to be passed on to the Jinja2 env.\n\n If context does not contain the relevant info, return an empty\n list instead.\n \"\"\"\n try:\n extensions = context['cookiecutter']['_extensions']\n except KeyError:\n return []\n else:\n return [str(ext) for ext in extensions]\n\n\nclass StrictEnvironment(ExtensionLoaderMixin, Environment):\n \"\"\"Create strict Jinja2 environment.\n\n Jinja2 environment will raise error on undefined variable in template-\n rendering context.\n \"\"\"\n\n def __init__(self, **kwargs):\n \"\"\"Set the standard Cookiecutter StrictEnvironment.\n\n Also loading extensions defined in cookiecutter.json's _extensions key.\n \"\"\"\n super(StrictEnvironment, self).__init__(undefined=StrictUndefined, **kwargs)\n", "path": "cookiecutter/environment.py"}]} | 1,691 | 311 |
gh_patches_debug_8560 | rasdani/github-patches | git_diff | uccser__cs-unplugged-197 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove topic query from homepage
When the homepage is loaded, a database query is performed. This is currently not needed and should be removed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/general/views.py`
Content:
```
1 from django.views.generic import TemplateView
2
3
4 class GeneralIndexView(TemplateView):
5 template_name = 'general/index.html'
6
7 def get_context_data(self, **kwargs):
8 # TODO: Investigate if importing model from another
9 # app is sensible/best approach.
10 from topics.models import Topic
11 context = super(GeneralIndexView, self).get_context_data(**kwargs)
12 context['total_topics'] = Topic.objects.count()
13 return context
14
15
16 class GeneralAboutView(TemplateView):
17 template_name = 'general/about.html'
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/csunplugged/general/views.py b/csunplugged/general/views.py
--- a/csunplugged/general/views.py
+++ b/csunplugged/general/views.py
@@ -4,14 +4,6 @@
class GeneralIndexView(TemplateView):
template_name = 'general/index.html'
- def get_context_data(self, **kwargs):
- # TODO: Investigate if importing model from another
- # app is sensible/best approach.
- from topics.models import Topic
- context = super(GeneralIndexView, self).get_context_data(**kwargs)
- context['total_topics'] = Topic.objects.count()
- return context
-
class GeneralAboutView(TemplateView):
template_name = 'general/about.html'
| {"golden_diff": "diff --git a/csunplugged/general/views.py b/csunplugged/general/views.py\n--- a/csunplugged/general/views.py\n+++ b/csunplugged/general/views.py\n@@ -4,14 +4,6 @@\n class GeneralIndexView(TemplateView):\n template_name = 'general/index.html'\n \n- def get_context_data(self, **kwargs):\n- # TODO: Investigate if importing model from another\n- # app is sensible/best approach.\n- from topics.models import Topic\n- context = super(GeneralIndexView, self).get_context_data(**kwargs)\n- context['total_topics'] = Topic.objects.count()\n- return context\n-\n \n class GeneralAboutView(TemplateView):\n template_name = 'general/about.html'\n", "issue": "Remove topic query from homepage\nWhen the homepage is loaded, a database query is performed. This is currently not needed and should be removed.\nRemove topic query from homepage\nWhen the homepage is loaded, a database query is performed. This is currently not needed and should be removed.\n", "before_files": [{"content": "from django.views.generic import TemplateView\n\n\nclass GeneralIndexView(TemplateView):\n template_name = 'general/index.html'\n\n def get_context_data(self, **kwargs):\n # TODO: Investigate if importing model from another\n # app is sensible/best approach.\n from topics.models import Topic\n context = super(GeneralIndexView, self).get_context_data(**kwargs)\n context['total_topics'] = Topic.objects.count()\n return context\n\n\nclass GeneralAboutView(TemplateView):\n template_name = 'general/about.html'\n", "path": "csunplugged/general/views.py"}], "after_files": [{"content": "from django.views.generic import TemplateView\n\n\nclass GeneralIndexView(TemplateView):\n template_name = 'general/index.html'\n\n\nclass GeneralAboutView(TemplateView):\n template_name = 'general/about.html'\n", "path": "csunplugged/general/views.py"}]} | 461 | 165 |
gh_patches_debug_35530 | rasdani/github-patches | git_diff | hylang__hy-2214 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
curses and SIGWINCH do not work properly with Hy.
After importing curses under hy, curses is unable to detect the size of the terminal as it is resized.
This manifests as curses.LINES and curses.COLS not being updated, stdscr.getmaxyx not working and so on.
However, the workaround of launching the hy program from python with:
```
import hy
import curses
from main import main_event_loop
if __name__ == "__main__":
curses.wrapper(main_event_loop, ...)
```
allows curses to dynamically detect the size of the terminal.
I therefore conclude that the problem is with the hy binary. My (limited) understanding, acquired while tracking down the source of this problem, is that curses uses the SIGWINCH signal, so perhaps that is a place to look.
Void linux x86 64bit, python 3.9.0
Freebsd 12.2, python 3.8.6
hy 0.19.0 (reported by pip) installed from git master branch via pip
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hy/completer.py`
Content:
```
1 import contextlib
2 import os
3 import re
4 import sys
5 import builtins
6
7 import hy.macros
8 import hy.compiler
9
10
11 docomplete = True
12
13 try:
14 import readline
15 except AttributeError as e:
16 # https://github.com/pyreadline/pyreadline/issues/65
17 if "module 'collections' has no attribute 'Callable'" in str(e):
18 docomplete = False
19 else:
20 raise
21 except ImportError:
22 docomplete = False
23
24 if docomplete:
25 if sys.platform == 'darwin' and 'libedit' in readline.__doc__:
26 readline_bind = "bind ^I rl_complete"
27 else:
28 readline_bind = "tab: complete"
29
30
31 class Completer:
32
33 def __init__(self, namespace={}):
34 if not isinstance(namespace, dict):
35 raise TypeError('namespace must be a dictionary')
36 self.namespace = namespace
37 self.path = [builtins.__dict__,
38 namespace]
39
40 namespace.setdefault('__macros__', {})
41
42 self.path.append(namespace['__macros__'])
43
44 def attr_matches(self, text):
45 # Borrowed from IPython's completer
46 m = re.match(r"(\S+(\.[\w-]+)*)\.([\w-]*)$", text)
47
48 if m:
49 expr, attr = m.group(1, 3)
50 attr = attr.replace("-", "_")
51 expr = expr.replace("-", "_")
52 else:
53 return []
54
55 try:
56 obj = eval(expr, self.namespace)
57 words = dir(obj)
58 except Exception:
59 return []
60
61 n = len(attr)
62 matches = []
63 for w in words:
64 if w[:n] == attr:
65 matches.append("{}.{}".format(
66 expr.replace("_", "-"), w.replace("_", "-")))
67 return matches
68
69 def global_matches(self, text):
70 matches = []
71 for p in self.path:
72 for k in p.keys():
73 if isinstance(k, str):
74 k = k.replace("_", "-")
75 if k.startswith(text):
76 matches.append(k)
77 return matches
78
79 def complete(self, text, state):
80 if "." in text:
81 matches = self.attr_matches(text)
82 else:
83 matches = self.global_matches(text)
84 try:
85 return matches[state]
86 except IndexError:
87 return None
88
89
90 @contextlib.contextmanager
91 def completion(completer=None):
92 delims = "()[]{} "
93 if not completer:
94 completer = Completer()
95
96 if docomplete:
97 readline.set_completer(completer.complete)
98 readline.set_completer_delims(delims)
99
100 history = os.environ.get(
101 "HY_HISTORY", os.path.expanduser("~/.hy-history"))
102 readline.parse_and_bind("set blink-matching-paren on")
103
104 try:
105 readline.read_history_file(history)
106 except OSError:
107 pass
108
109 readline.parse_and_bind(readline_bind)
110
111 try:
112 yield
113 finally:
114 if docomplete:
115 try:
116 readline.write_history_file(history)
117 except OSError:
118 pass
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hy/completer.py b/hy/completer.py
--- a/hy/completer.py
+++ b/hy/completer.py
@@ -8,24 +8,19 @@
import hy.compiler
-docomplete = True
-
-try:
- import readline
-except AttributeError as e:
- # https://github.com/pyreadline/pyreadline/issues/65
- if "module 'collections' has no attribute 'Callable'" in str(e):
- docomplete = False
- else:
- raise
-except ImportError:
- docomplete = False
-
-if docomplete:
- if sys.platform == 'darwin' and 'libedit' in readline.__doc__:
- readline_bind = "bind ^I rl_complete"
- else:
- readline_bind = "tab: complete"
+# Lazily import `readline` to work around
+# https://bugs.python.org/issue2675#msg265564
+readline = None
+def init_readline():
+ global readline
+ try:
+ import readline
+ except AttributeError as e:
+ # https://github.com/pyreadline/pyreadline/issues/65
+ if "module 'collections' has no attribute 'Callable'" not in str(e):
+ raise
+ except ImportError:
+ pass
class Completer:
@@ -86,33 +81,42 @@
except IndexError:
return None
-
@contextlib.contextmanager
def completion(completer=None):
delims = "()[]{} "
+
+ init_readline()
+ if not readline:
+ # We have nothing to do. Act like a null context manager.
+ yield
+ return
+
if not completer:
completer = Completer()
- if docomplete:
- readline.set_completer(completer.complete)
- readline.set_completer_delims(delims)
+ if sys.platform == 'darwin' and 'libedit' in readline.__doc__:
+ readline_bind = "bind ^I rl_complete"
+ else:
+ readline_bind = "tab: complete"
- history = os.environ.get(
- "HY_HISTORY", os.path.expanduser("~/.hy-history"))
- readline.parse_and_bind("set blink-matching-paren on")
+ readline.set_completer(completer.complete)
+ readline.set_completer_delims(delims)
- try:
- readline.read_history_file(history)
- except OSError:
- pass
+ history = os.environ.get(
+ "HY_HISTORY", os.path.expanduser("~/.hy-history"))
+ readline.parse_and_bind("set blink-matching-paren on")
- readline.parse_and_bind(readline_bind)
+ try:
+ readline.read_history_file(history)
+ except OSError:
+ pass
+
+ readline.parse_and_bind(readline_bind)
try:
yield
finally:
- if docomplete:
- try:
- readline.write_history_file(history)
- except OSError:
- pass
+ try:
+ readline.write_history_file(history)
+ except OSError:
+ pass
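The essence of the fix, restated as a standalone sketch for illustration: importing `readline` at module load time initializes the terminal early (the bpo-2675 behavior the patch links to), which is what swallowed SIGWINCH; deferring the import until completion is actually requested avoids that:

```python
readline = None  # resolved lazily, never at module import


def init_readline():
    """Import readline only when the REPL needs completion support."""
    global readline
    try:
        import readline  # with `global`, this binds the module-level name
    except ImportError:
        pass  # no readline available; completion simply stays disabled
```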
| {"golden_diff": "diff --git a/hy/completer.py b/hy/completer.py\n--- a/hy/completer.py\n+++ b/hy/completer.py\n@@ -8,24 +8,19 @@\n import hy.compiler\n \n \n-docomplete = True\n-\n-try:\n- import readline\n-except AttributeError as e:\n- # https://github.com/pyreadline/pyreadline/issues/65\n- if \"module 'collections' has no attribute 'Callable'\" in str(e):\n- docomplete = False\n- else:\n- raise\n-except ImportError:\n- docomplete = False\n-\n-if docomplete:\n- if sys.platform == 'darwin' and 'libedit' in readline.__doc__:\n- readline_bind = \"bind ^I rl_complete\"\n- else:\n- readline_bind = \"tab: complete\"\n+# Lazily import `readline` to work around\n+# https://bugs.python.org/issue2675#msg265564\n+readline = None\n+def init_readline():\n+ global readline\n+ try:\n+ import readline\n+ except AttributeError as e:\n+ # https://github.com/pyreadline/pyreadline/issues/65\n+ if \"module 'collections' has no attribute 'Callable'\" not in str(e):\n+ raise\n+ except ImportError:\n+ pass\n \n \n class Completer:\n@@ -86,33 +81,42 @@\n except IndexError:\n return None\n \n-\n @contextlib.contextmanager\n def completion(completer=None):\n delims = \"()[]{} \"\n+\n+ init_readline()\n+ if not readline:\n+ # We have nothing to do. Act like a null context manager.\n+ yield\n+ return\n+\n if not completer:\n completer = Completer()\n \n- if docomplete:\n- readline.set_completer(completer.complete)\n- readline.set_completer_delims(delims)\n+ if sys.platform == 'darwin' and 'libedit' in readline.__doc__:\n+ readline_bind = \"bind ^I rl_complete\"\n+ else:\n+ readline_bind = \"tab: complete\"\n \n- history = os.environ.get(\n- \"HY_HISTORY\", os.path.expanduser(\"~/.hy-history\"))\n- readline.parse_and_bind(\"set blink-matching-paren on\")\n+ readline.set_completer(completer.complete)\n+ readline.set_completer_delims(delims)\n \n- try:\n- readline.read_history_file(history)\n- except OSError:\n- pass\n+ history = os.environ.get(\n+ \"HY_HISTORY\", os.path.expanduser(\"~/.hy-history\"))\n+ readline.parse_and_bind(\"set blink-matching-paren on\")\n \n- readline.parse_and_bind(readline_bind)\n+ try:\n+ readline.read_history_file(history)\n+ except OSError:\n+ pass\n+\n+ readline.parse_and_bind(readline_bind)\n \n try:\n yield\n finally:\n- if docomplete:\n- try:\n- readline.write_history_file(history)\n- except OSError:\n- pass\n+ try:\n+ readline.write_history_file(history)\n+ except OSError:\n+ pass\n", "issue": "curses and SIGWINCH do not work properly with Hy.\nAfter importing curses under hy, curses is unable to detect the size of the terminal as it is resized.\r\nThis manifests as curses.LINES and curses.COLS not being updated, stdscr.getmaxyx not working and so on.\r\n\r\nHowever, the workaround of launching the hy program from python with:\r\n```\r\nimport hy\r\nimport curses\r\nfrom main import main_event_loop\r\n\r\nif __name__ == \"__main__\":\r\n curses.wrapper(main_event_loop, ...)\r\n```\r\nallows curses to dynamically detect the size of the terminal.\r\n\r\nI conclude therefore the problem is with the hy binary. 
My (limited) understanding acquired during tracking down the source of this problem is that curses uses the SIGWINCH signal, so perhaps that is a place to look.\r\n\r\nVoid linux x86 64bit, python 3.9.0\r\nFreebsd 12.2, python 3.8.6\r\nhy 0.19.0 (reported by pip) installed from git master branch via pip\n", "before_files": [{"content": "import contextlib\nimport os\nimport re\nimport sys\nimport builtins\n\nimport hy.macros\nimport hy.compiler\n\n\ndocomplete = True\n\ntry:\n import readline\nexcept AttributeError as e:\n # https://github.com/pyreadline/pyreadline/issues/65\n if \"module 'collections' has no attribute 'Callable'\" in str(e):\n docomplete = False\n else:\n raise\nexcept ImportError:\n docomplete = False\n\nif docomplete:\n if sys.platform == 'darwin' and 'libedit' in readline.__doc__:\n readline_bind = \"bind ^I rl_complete\"\n else:\n readline_bind = \"tab: complete\"\n\n\nclass Completer:\n\n def __init__(self, namespace={}):\n if not isinstance(namespace, dict):\n raise TypeError('namespace must be a dictionary')\n self.namespace = namespace\n self.path = [builtins.__dict__,\n namespace]\n\n namespace.setdefault('__macros__', {})\n\n self.path.append(namespace['__macros__'])\n\n def attr_matches(self, text):\n # Borrowed from IPython's completer\n m = re.match(r\"(\\S+(\\.[\\w-]+)*)\\.([\\w-]*)$\", text)\n\n if m:\n expr, attr = m.group(1, 3)\n attr = attr.replace(\"-\", \"_\")\n expr = expr.replace(\"-\", \"_\")\n else:\n return []\n\n try:\n obj = eval(expr, self.namespace)\n words = dir(obj)\n except Exception:\n return []\n\n n = len(attr)\n matches = []\n for w in words:\n if w[:n] == attr:\n matches.append(\"{}.{}\".format(\n expr.replace(\"_\", \"-\"), w.replace(\"_\", \"-\")))\n return matches\n\n def global_matches(self, text):\n matches = []\n for p in self.path:\n for k in p.keys():\n if isinstance(k, str):\n k = k.replace(\"_\", \"-\")\n if k.startswith(text):\n matches.append(k)\n return matches\n\n def complete(self, text, state):\n if \".\" in text:\n matches = self.attr_matches(text)\n else:\n matches = self.global_matches(text)\n try:\n return matches[state]\n except IndexError:\n return None\n\n\[email protected]\ndef completion(completer=None):\n delims = \"()[]{} \"\n if not completer:\n completer = Completer()\n\n if docomplete:\n readline.set_completer(completer.complete)\n readline.set_completer_delims(delims)\n\n history = os.environ.get(\n \"HY_HISTORY\", os.path.expanduser(\"~/.hy-history\"))\n readline.parse_and_bind(\"set blink-matching-paren on\")\n\n try:\n readline.read_history_file(history)\n except OSError:\n pass\n\n readline.parse_and_bind(readline_bind)\n\n try:\n yield\n finally:\n if docomplete:\n try:\n readline.write_history_file(history)\n except OSError:\n pass\n", "path": "hy/completer.py"}], "after_files": [{"content": "import contextlib\nimport os\nimport re\nimport sys\nimport builtins\n\nimport hy.macros\nimport hy.compiler\n\n\n# Lazily import `readline` to work around\n# https://bugs.python.org/issue2675#msg265564\nreadline = None\ndef init_readline():\n global readline\n try:\n import readline\n except AttributeError as e:\n # https://github.com/pyreadline/pyreadline/issues/65\n if \"module 'collections' has no attribute 'Callable'\" not in str(e):\n raise\n except ImportError:\n pass\n\n\nclass Completer:\n\n def __init__(self, namespace={}):\n if not isinstance(namespace, dict):\n raise TypeError('namespace must be a dictionary')\n self.namespace = namespace\n self.path = [builtins.__dict__,\n namespace]\n\n 
namespace.setdefault('__macros__', {})\n\n self.path.append(namespace['__macros__'])\n\n def attr_matches(self, text):\n # Borrowed from IPython's completer\n m = re.match(r\"(\\S+(\\.[\\w-]+)*)\\.([\\w-]*)$\", text)\n\n if m:\n expr, attr = m.group(1, 3)\n attr = attr.replace(\"-\", \"_\")\n expr = expr.replace(\"-\", \"_\")\n else:\n return []\n\n try:\n obj = eval(expr, self.namespace)\n words = dir(obj)\n except Exception:\n return []\n\n n = len(attr)\n matches = []\n for w in words:\n if w[:n] == attr:\n matches.append(\"{}.{}\".format(\n expr.replace(\"_\", \"-\"), w.replace(\"_\", \"-\")))\n return matches\n\n def global_matches(self, text):\n matches = []\n for p in self.path:\n for k in p.keys():\n if isinstance(k, str):\n k = k.replace(\"_\", \"-\")\n if k.startswith(text):\n matches.append(k)\n return matches\n\n def complete(self, text, state):\n if \".\" in text:\n matches = self.attr_matches(text)\n else:\n matches = self.global_matches(text)\n try:\n return matches[state]\n except IndexError:\n return None\n\[email protected]\ndef completion(completer=None):\n delims = \"()[]{} \"\n\n init_readline()\n if not readline:\n # We have nothing to do. Act like a null context manager.\n yield\n return\n\n if not completer:\n completer = Completer()\n\n if sys.platform == 'darwin' and 'libedit' in readline.__doc__:\n readline_bind = \"bind ^I rl_complete\"\n else:\n readline_bind = \"tab: complete\"\n\n readline.set_completer(completer.complete)\n readline.set_completer_delims(delims)\n\n history = os.environ.get(\n \"HY_HISTORY\", os.path.expanduser(\"~/.hy-history\"))\n readline.parse_and_bind(\"set blink-matching-paren on\")\n\n try:\n readline.read_history_file(history)\n except OSError:\n pass\n\n readline.parse_and_bind(readline_bind)\n\n try:\n yield\n finally:\n try:\n readline.write_history_file(history)\n except OSError:\n pass\n", "path": "hy/completer.py"}]} | 1,382 | 709 |
gh_patches_debug_511 | rasdani/github-patches | git_diff | python-gitlab__python-gitlab-1437 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing API code owner approval for protected branches
## Summary
The branch manager is missing an attribute implementation of `code_owner_approval_required` as documented in [GitLab API documentation](https://docs.gitlab.com/ce/api/protected_branches.html#protect-repository-branches)
## Expected Behavior
`ProjectProtectedBranchManager.code_owner_approval_required` should be implemented to mirror the API as documented:
Attribute | Type | Required | Description
-- | -- | -- | --
code_owner_approval_required | boolean | no | Prevent pushes to this branch if it matches an item in the CODEOWNERS file. (defaults: false)
## Actual Behavior
`code_owner_approval_required` is not available as an attribute in `ProjectProtectedBranchManager`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gitlab/v4/objects/branches.py`
Content:
```
1 from gitlab import cli
2 from gitlab import exceptions as exc
3 from gitlab.base import RequiredOptional, RESTManager, RESTObject
4 from gitlab.mixins import NoUpdateMixin, ObjectDeleteMixin
5
6
7 __all__ = [
8 "ProjectBranch",
9 "ProjectBranchManager",
10 "ProjectProtectedBranch",
11 "ProjectProtectedBranchManager",
12 ]
13
14
15 class ProjectBranch(ObjectDeleteMixin, RESTObject):
16 _id_attr = "name"
17
18 @cli.register_custom_action(
19 "ProjectBranch", tuple(), ("developers_can_push", "developers_can_merge")
20 )
21 @exc.on_http_error(exc.GitlabProtectError)
22 def protect(self, developers_can_push=False, developers_can_merge=False, **kwargs):
23 """Protect the branch.
24
25 Args:
26 developers_can_push (bool): Set to True if developers are allowed
27 to push to the branch
28 developers_can_merge (bool): Set to True if developers are allowed
29 to merge to the branch
30 **kwargs: Extra options to send to the server (e.g. sudo)
31
32 Raises:
33 GitlabAuthenticationError: If authentication is not correct
34 GitlabProtectError: If the branch could not be protected
35 """
36 id = self.get_id().replace("/", "%2F")
37 path = "%s/%s/protect" % (self.manager.path, id)
38 post_data = {
39 "developers_can_push": developers_can_push,
40 "developers_can_merge": developers_can_merge,
41 }
42 self.manager.gitlab.http_put(path, post_data=post_data, **kwargs)
43 self._attrs["protected"] = True
44
45 @cli.register_custom_action("ProjectBranch")
46 @exc.on_http_error(exc.GitlabProtectError)
47 def unprotect(self, **kwargs):
48 """Unprotect the branch.
49
50 Args:
51 **kwargs: Extra options to send to the server (e.g. sudo)
52
53 Raises:
54 GitlabAuthenticationError: If authentication is not correct
55 GitlabProtectError: If the branch could not be unprotected
56 """
57 id = self.get_id().replace("/", "%2F")
58 path = "%s/%s/unprotect" % (self.manager.path, id)
59 self.manager.gitlab.http_put(path, **kwargs)
60 self._attrs["protected"] = False
61
62
63 class ProjectBranchManager(NoUpdateMixin, RESTManager):
64 _path = "/projects/%(project_id)s/repository/branches"
65 _obj_cls = ProjectBranch
66 _from_parent_attrs = {"project_id": "id"}
67 _create_attrs = RequiredOptional(required=("branch", "ref"))
68
69
70 class ProjectProtectedBranch(ObjectDeleteMixin, RESTObject):
71 _id_attr = "name"
72
73
74 class ProjectProtectedBranchManager(NoUpdateMixin, RESTManager):
75 _path = "/projects/%(project_id)s/protected_branches"
76 _obj_cls = ProjectProtectedBranch
77 _from_parent_attrs = {"project_id": "id"}
78 _create_attrs = RequiredOptional(
79 required=("name",),
80 optional=(
81 "push_access_level",
82 "merge_access_level",
83 "unprotect_access_level",
84 "allowed_to_push",
85 "allowed_to_merge",
86 "allowed_to_unprotect",
87 ),
88 )
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gitlab/v4/objects/branches.py b/gitlab/v4/objects/branches.py
--- a/gitlab/v4/objects/branches.py
+++ b/gitlab/v4/objects/branches.py
@@ -84,5 +84,6 @@
"allowed_to_push",
"allowed_to_merge",
"allowed_to_unprotect",
+ "code_owner_approval_required",
),
)
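With the attribute accepted by the manager, a hedged usage sketch; the server URL, token, and project ID are placeholders:

```python
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="<token>")
project = gl.projects.get(42)  # placeholder project ID

# The new optional attribute is forwarded to the protect endpoint as-is.
project.protectedbranches.create({
    "name": "main",
    "code_owner_approval_required": True,
})
```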
| {"golden_diff": "diff --git a/gitlab/v4/objects/branches.py b/gitlab/v4/objects/branches.py\n--- a/gitlab/v4/objects/branches.py\n+++ b/gitlab/v4/objects/branches.py\n@@ -84,5 +84,6 @@\n \"allowed_to_push\",\n \"allowed_to_merge\",\n \"allowed_to_unprotect\",\n+ \"code_owner_approval_required\",\n ),\n )\n", "issue": "Missing API code owner approval for protected branches\n## Summary\r\n\r\nThe branch manager is missing an attribute implementation of `code_owner_approval_required` as documented in [GitLab API documentation](https://docs.gitlab.com/ce/api/protected_branches.html#protect-repository-branches)\r\n\r\n## Expected Behavior\r\n\r\n`ProjectProtectedBranchManager.code_owner_approval_required` should be implemented to mirror the API as documented:\r\n\r\nAttribute | Type | Required | Description\r\n-- | -- | -- | --\r\ncode_owner_approval_required | boolean | no | Prevent pushes to this branch if it matches an item in the\u00a0CODEOWNERS\u00a0file. (defaults: false)\r\n\r\n## Actual Behavior\r\n\r\n`code_owner_approval_required` is not available as attribute in `ProjectProtectedBranchManager`.\r\n\n", "before_files": [{"content": "from gitlab import cli\nfrom gitlab import exceptions as exc\nfrom gitlab.base import RequiredOptional, RESTManager, RESTObject\nfrom gitlab.mixins import NoUpdateMixin, ObjectDeleteMixin\n\n\n__all__ = [\n \"ProjectBranch\",\n \"ProjectBranchManager\",\n \"ProjectProtectedBranch\",\n \"ProjectProtectedBranchManager\",\n]\n\n\nclass ProjectBranch(ObjectDeleteMixin, RESTObject):\n _id_attr = \"name\"\n\n @cli.register_custom_action(\n \"ProjectBranch\", tuple(), (\"developers_can_push\", \"developers_can_merge\")\n )\n @exc.on_http_error(exc.GitlabProtectError)\n def protect(self, developers_can_push=False, developers_can_merge=False, **kwargs):\n \"\"\"Protect the branch.\n\n Args:\n developers_can_push (bool): Set to True if developers are allowed\n to push to the branch\n developers_can_merge (bool): Set to True if developers are allowed\n to merge to the branch\n **kwargs: Extra options to send to the server (e.g. sudo)\n\n Raises:\n GitlabAuthenticationError: If authentication is not correct\n GitlabProtectError: If the branch could not be protected\n \"\"\"\n id = self.get_id().replace(\"/\", \"%2F\")\n path = \"%s/%s/protect\" % (self.manager.path, id)\n post_data = {\n \"developers_can_push\": developers_can_push,\n \"developers_can_merge\": developers_can_merge,\n }\n self.manager.gitlab.http_put(path, post_data=post_data, **kwargs)\n self._attrs[\"protected\"] = True\n\n @cli.register_custom_action(\"ProjectBranch\")\n @exc.on_http_error(exc.GitlabProtectError)\n def unprotect(self, **kwargs):\n \"\"\"Unprotect the branch.\n\n Args:\n **kwargs: Extra options to send to the server (e.g. 
sudo)\n\n Raises:\n GitlabAuthenticationError: If authentication is not correct\n GitlabProtectError: If the branch could not be unprotected\n \"\"\"\n id = self.get_id().replace(\"/\", \"%2F\")\n path = \"%s/%s/unprotect\" % (self.manager.path, id)\n self.manager.gitlab.http_put(path, **kwargs)\n self._attrs[\"protected\"] = False\n\n\nclass ProjectBranchManager(NoUpdateMixin, RESTManager):\n _path = \"/projects/%(project_id)s/repository/branches\"\n _obj_cls = ProjectBranch\n _from_parent_attrs = {\"project_id\": \"id\"}\n _create_attrs = RequiredOptional(required=(\"branch\", \"ref\"))\n\n\nclass ProjectProtectedBranch(ObjectDeleteMixin, RESTObject):\n _id_attr = \"name\"\n\n\nclass ProjectProtectedBranchManager(NoUpdateMixin, RESTManager):\n _path = \"/projects/%(project_id)s/protected_branches\"\n _obj_cls = ProjectProtectedBranch\n _from_parent_attrs = {\"project_id\": \"id\"}\n _create_attrs = RequiredOptional(\n required=(\"name\",),\n optional=(\n \"push_access_level\",\n \"merge_access_level\",\n \"unprotect_access_level\",\n \"allowed_to_push\",\n \"allowed_to_merge\",\n \"allowed_to_unprotect\",\n ),\n )\n", "path": "gitlab/v4/objects/branches.py"}], "after_files": [{"content": "from gitlab import cli\nfrom gitlab import exceptions as exc\nfrom gitlab.base import RequiredOptional, RESTManager, RESTObject\nfrom gitlab.mixins import NoUpdateMixin, ObjectDeleteMixin\n\n\n__all__ = [\n \"ProjectBranch\",\n \"ProjectBranchManager\",\n \"ProjectProtectedBranch\",\n \"ProjectProtectedBranchManager\",\n]\n\n\nclass ProjectBranch(ObjectDeleteMixin, RESTObject):\n _id_attr = \"name\"\n\n @cli.register_custom_action(\n \"ProjectBranch\", tuple(), (\"developers_can_push\", \"developers_can_merge\")\n )\n @exc.on_http_error(exc.GitlabProtectError)\n def protect(self, developers_can_push=False, developers_can_merge=False, **kwargs):\n \"\"\"Protect the branch.\n\n Args:\n developers_can_push (bool): Set to True if developers are allowed\n to push to the branch\n developers_can_merge (bool): Set to True if developers are allowed\n to merge to the branch\n **kwargs: Extra options to send to the server (e.g. sudo)\n\n Raises:\n GitlabAuthenticationError: If authentication is not correct\n GitlabProtectError: If the branch could not be protected\n \"\"\"\n id = self.get_id().replace(\"/\", \"%2F\")\n path = \"%s/%s/protect\" % (self.manager.path, id)\n post_data = {\n \"developers_can_push\": developers_can_push,\n \"developers_can_merge\": developers_can_merge,\n }\n self.manager.gitlab.http_put(path, post_data=post_data, **kwargs)\n self._attrs[\"protected\"] = True\n\n @cli.register_custom_action(\"ProjectBranch\")\n @exc.on_http_error(exc.GitlabProtectError)\n def unprotect(self, **kwargs):\n \"\"\"Unprotect the branch.\n\n Args:\n **kwargs: Extra options to send to the server (e.g. 
sudo)\n\n Raises:\n GitlabAuthenticationError: If authentication is not correct\n GitlabProtectError: If the branch could not be unprotected\n \"\"\"\n id = self.get_id().replace(\"/\", \"%2F\")\n path = \"%s/%s/unprotect\" % (self.manager.path, id)\n self.manager.gitlab.http_put(path, **kwargs)\n self._attrs[\"protected\"] = False\n\n\nclass ProjectBranchManager(NoUpdateMixin, RESTManager):\n _path = \"/projects/%(project_id)s/repository/branches\"\n _obj_cls = ProjectBranch\n _from_parent_attrs = {\"project_id\": \"id\"}\n _create_attrs = RequiredOptional(required=(\"branch\", \"ref\"))\n\n\nclass ProjectProtectedBranch(ObjectDeleteMixin, RESTObject):\n _id_attr = \"name\"\n\n\nclass ProjectProtectedBranchManager(NoUpdateMixin, RESTManager):\n _path = \"/projects/%(project_id)s/protected_branches\"\n _obj_cls = ProjectProtectedBranch\n _from_parent_attrs = {\"project_id\": \"id\"}\n _create_attrs = RequiredOptional(\n required=(\"name\",),\n optional=(\n \"push_access_level\",\n \"merge_access_level\",\n \"unprotect_access_level\",\n \"allowed_to_push\",\n \"allowed_to_merge\",\n \"allowed_to_unprotect\",\n \"code_owner_approval_required\",\n ),\n )\n", "path": "gitlab/v4/objects/branches.py"}]} | 1,269 | 91 |
gh_patches_debug_29411 | rasdani/github-patches | git_diff | cloudtools__troposphere-836 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add ResourceLifecycleConfig to AWS::ElasticBeanstalk::Application
[AWS::ElasticBeanstalk::Application](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-beanstalk.html)
Use the ResourceLifecycleConfig property to define lifecycle settings for resources that belong to the application, and the service role that Elastic Beanstalk assumes in order to apply lifecycle settings.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/elasticbeanstalk.py`
Content:
```
1 # Copyright (c) 2013, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSObject, AWSProperty, Tags
7
8
9 WebServer = "WebServer"
10 Worker = "Worker"
11 WebServerType = "Standard"
12 WorkerType = "SQS/HTTP"
13
14
15 class SourceBundle(AWSProperty):
16 props = {
17 'S3Bucket': (basestring, True),
18 'S3Key': (basestring, True),
19 }
20
21
22 class SourceConfiguration(AWSProperty):
23 props = {
24 'ApplicationName': (basestring, True),
25 'TemplateName': (basestring, True),
26 }
27
28
29 class OptionSettings(AWSProperty):
30 props = {
31 'Namespace': (basestring, True),
32 'OptionName': (basestring, True),
33 'Value': (basestring, True),
34 }
35
36
37 class Application(AWSObject):
38 resource_type = "AWS::ElasticBeanstalk::Application"
39
40 props = {
41 'ApplicationName': (basestring, False),
42 'Description': (basestring, False),
43 }
44
45
46 class ApplicationVersion(AWSObject):
47 resource_type = "AWS::ElasticBeanstalk::ApplicationVersion"
48
49 props = {
50 'ApplicationName': (basestring, True),
51 'Description': (basestring, False),
52 'SourceBundle': (SourceBundle, False),
53 }
54
55
56 class ConfigurationTemplate(AWSObject):
57 resource_type = "AWS::ElasticBeanstalk::ConfigurationTemplate"
58
59 props = {
60 'ApplicationName': (basestring, True),
61 'Description': (basestring, False),
62 'EnvironmentId': (basestring, False),
63 'OptionSettings': ([OptionSettings], False),
64 'SolutionStackName': (basestring, False),
65 'SourceConfiguration': (SourceConfiguration, False),
66 }
67
68
69 def validate_tier_name(name):
70 valid_names = [WebServer, Worker]
71 if name not in valid_names:
72 raise ValueError('Tier name needs to be one of %r' % valid_names)
73 return name
74
75
76 def validate_tier_type(tier_type):
77 valid_types = [WebServerType, WorkerType]
78 if tier_type not in valid_types:
79 raise ValueError('Tier type needs to be one of %r' % valid_types)
80 return tier_type
81
82
83 class Tier(AWSProperty):
84 props = {
85 'Name': (validate_tier_name, False),
86 'Type': (validate_tier_type, False),
87 'Version': (basestring, False),
88 }
89
90
91 class Environment(AWSObject):
92 resource_type = "AWS::ElasticBeanstalk::Environment"
93
94 props = {
95 'ApplicationName': (basestring, True),
96 'CNAMEPrefix': (basestring, False),
97 'Description': (basestring, False),
98 'EnvironmentName': (basestring, False),
99 'OptionSettings': ([OptionSettings], False),
100 'SolutionStackName': (basestring, False),
101 'Tags': (Tags, False),
102 'TemplateName': (basestring, False),
103 'Tier': (Tier, False),
104 'VersionLabel': (basestring, False),
105 }
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/troposphere/elasticbeanstalk.py b/troposphere/elasticbeanstalk.py
--- a/troposphere/elasticbeanstalk.py
+++ b/troposphere/elasticbeanstalk.py
@@ -4,7 +4,7 @@
# See LICENSE file for full license.
from . import AWSObject, AWSProperty, Tags
-
+from .validators import boolean, integer
WebServer = "WebServer"
Worker = "Worker"
@@ -12,6 +12,29 @@
WorkerType = "SQS/HTTP"
+class MaxAgeRule(AWSProperty):
+ props = {
+ 'DeleteSourceFromS3': (boolean, False),
+ 'Enabled': (boolean, False),
+ 'MaxAgeInDays': (integer, False),
+ }
+
+
+class MaxCountRule(AWSProperty):
+ props = {
+ 'DeleteSourceFromS3': (boolean, False),
+ 'Enabled': (boolean, False),
+ 'MaxCount': (integer, False),
+ }
+
+
+class ApplicationVersionLifecycleConfig(AWSProperty):
+ props = {
+ 'MaxAgeRule': (MaxAgeRule, False),
+ 'MaxCountRule': (MaxCountRule, False),
+ }
+
+
class SourceBundle(AWSProperty):
props = {
'S3Bucket': (basestring, True),
@@ -26,6 +49,13 @@
}
+class ApplicationResourceLifecycleConfig(AWSProperty):
+ props = {
+ 'ServiceRole': (basestring, False),
+ 'VersionLifecycleConfig': (ApplicationVersionLifecycleConfig, False),
+ }
+
+
class OptionSettings(AWSProperty):
props = {
'Namespace': (basestring, True),
@@ -40,6 +70,7 @@
props = {
'ApplicationName': (basestring, False),
'Description': (basestring, False),
+ 'ResourceLifecycleConfig': (ApplicationResourceLifecycleConfig, False),
}
| {"golden_diff": "diff --git a/troposphere/elasticbeanstalk.py b/troposphere/elasticbeanstalk.py\n--- a/troposphere/elasticbeanstalk.py\n+++ b/troposphere/elasticbeanstalk.py\n@@ -4,7 +4,7 @@\n # See LICENSE file for full license.\n \n from . import AWSObject, AWSProperty, Tags\n-\n+from .validators import boolean, integer\n \n WebServer = \"WebServer\"\n Worker = \"Worker\"\n@@ -12,6 +12,29 @@\n WorkerType = \"SQS/HTTP\"\n \n \n+class MaxAgeRule(AWSProperty):\n+ props = {\n+ 'DeleteSourceFromS3': (boolean, False),\n+ 'Enabled': (boolean, False),\n+ 'MaxAgeInDays': (integer, False),\n+ }\n+\n+\n+class MaxCountRule(AWSProperty):\n+ props = {\n+ 'DeleteSourceFromS3': (boolean, False),\n+ 'Enabled': (boolean, False),\n+ 'MaxCount': (integer, False),\n+ }\n+\n+\n+class ApplicationVersionLifecycleConfig(AWSProperty):\n+ props = {\n+ 'MaxAgeRule': (MaxAgeRule, False),\n+ 'MaxCountRule': (MaxCountRule, False),\n+ }\n+\n+\n class SourceBundle(AWSProperty):\n props = {\n 'S3Bucket': (basestring, True),\n@@ -26,6 +49,13 @@\n }\n \n \n+class ApplicationResourceLifecycleConfig(AWSProperty):\n+ props = {\n+ 'ServiceRole': (basestring, False),\n+ 'VersionLifecycleConfig': (ApplicationVersionLifecycleConfig, False),\n+ }\n+\n+\n class OptionSettings(AWSProperty):\n props = {\n 'Namespace': (basestring, True),\n@@ -40,6 +70,7 @@\n props = {\n 'ApplicationName': (basestring, False),\n 'Description': (basestring, False),\n+ 'ResourceLifecycleConfig': (ApplicationResourceLifecycleConfig, False),\n }\n", "issue": "Add ResourceLifecycleConfig to AWS::ElasticBeanstalk::Application\n[AWS::ElasticBeanstalk::Application](http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-beanstalk.html)\r\nUse the ResourceLifecycleConfig property to define lifecycle settings for resources that belong to the application, and the service role that Elastic Beanstalk assumes in order to apply lifecycle settings.\n", "before_files": [{"content": "# Copyright (c) 2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . 
import AWSObject, AWSProperty, Tags\n\n\nWebServer = \"WebServer\"\nWorker = \"Worker\"\nWebServerType = \"Standard\"\nWorkerType = \"SQS/HTTP\"\n\n\nclass SourceBundle(AWSProperty):\n props = {\n 'S3Bucket': (basestring, True),\n 'S3Key': (basestring, True),\n }\n\n\nclass SourceConfiguration(AWSProperty):\n props = {\n 'ApplicationName': (basestring, True),\n 'TemplateName': (basestring, True),\n }\n\n\nclass OptionSettings(AWSProperty):\n props = {\n 'Namespace': (basestring, True),\n 'OptionName': (basestring, True),\n 'Value': (basestring, True),\n }\n\n\nclass Application(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::Application\"\n\n props = {\n 'ApplicationName': (basestring, False),\n 'Description': (basestring, False),\n }\n\n\nclass ApplicationVersion(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::ApplicationVersion\"\n\n props = {\n 'ApplicationName': (basestring, True),\n 'Description': (basestring, False),\n 'SourceBundle': (SourceBundle, False),\n }\n\n\nclass ConfigurationTemplate(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::ConfigurationTemplate\"\n\n props = {\n 'ApplicationName': (basestring, True),\n 'Description': (basestring, False),\n 'EnvironmentId': (basestring, False),\n 'OptionSettings': ([OptionSettings], False),\n 'SolutionStackName': (basestring, False),\n 'SourceConfiguration': (SourceConfiguration, False),\n }\n\n\ndef validate_tier_name(name):\n valid_names = [WebServer, Worker]\n if name not in valid_names:\n raise ValueError('Tier name needs to be one of %r' % valid_names)\n return name\n\n\ndef validate_tier_type(tier_type):\n valid_types = [WebServerType, WorkerType]\n if tier_type not in valid_types:\n raise ValueError('Tier type needs to be one of %r' % valid_types)\n return tier_type\n\n\nclass Tier(AWSProperty):\n props = {\n 'Name': (validate_tier_name, False),\n 'Type': (validate_tier_type, False),\n 'Version': (basestring, False),\n }\n\n\nclass Environment(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::Environment\"\n\n props = {\n 'ApplicationName': (basestring, True),\n 'CNAMEPrefix': (basestring, False),\n 'Description': (basestring, False),\n 'EnvironmentName': (basestring, False),\n 'OptionSettings': ([OptionSettings], False),\n 'SolutionStackName': (basestring, False),\n 'Tags': (Tags, False),\n 'TemplateName': (basestring, False),\n 'Tier': (Tier, False),\n 'VersionLabel': (basestring, False),\n }\n", "path": "troposphere/elasticbeanstalk.py"}], "after_files": [{"content": "# Copyright (c) 2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . 
import AWSObject, AWSProperty, Tags\nfrom .validators import boolean, integer\n\nWebServer = \"WebServer\"\nWorker = \"Worker\"\nWebServerType = \"Standard\"\nWorkerType = \"SQS/HTTP\"\n\n\nclass MaxAgeRule(AWSProperty):\n props = {\n 'DeleteSourceFromS3': (boolean, False),\n 'Enabled': (boolean, False),\n 'MaxAgeInDays': (integer, False),\n }\n\n\nclass MaxCountRule(AWSProperty):\n props = {\n 'DeleteSourceFromS3': (boolean, False),\n 'Enabled': (boolean, False),\n 'MaxCount': (integer, False),\n }\n\n\nclass ApplicationVersionLifecycleConfig(AWSProperty):\n props = {\n 'MaxAgeRule': (MaxAgeRule, False),\n 'MaxCountRule': (MaxCountRule, False),\n }\n\n\nclass SourceBundle(AWSProperty):\n props = {\n 'S3Bucket': (basestring, True),\n 'S3Key': (basestring, True),\n }\n\n\nclass SourceConfiguration(AWSProperty):\n props = {\n 'ApplicationName': (basestring, True),\n 'TemplateName': (basestring, True),\n }\n\n\nclass ApplicationResourceLifecycleConfig(AWSProperty):\n props = {\n 'ServiceRole': (basestring, False),\n 'VersionLifecycleConfig': (ApplicationVersionLifecycleConfig, False),\n }\n\n\nclass OptionSettings(AWSProperty):\n props = {\n 'Namespace': (basestring, True),\n 'OptionName': (basestring, True),\n 'Value': (basestring, True),\n }\n\n\nclass Application(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::Application\"\n\n props = {\n 'ApplicationName': (basestring, False),\n 'Description': (basestring, False),\n 'ResourceLifecycleConfig': (ApplicationResourceLifecycleConfig, False),\n }\n\n\nclass ApplicationVersion(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::ApplicationVersion\"\n\n props = {\n 'ApplicationName': (basestring, True),\n 'Description': (basestring, False),\n 'SourceBundle': (SourceBundle, False),\n }\n\n\nclass ConfigurationTemplate(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::ConfigurationTemplate\"\n\n props = {\n 'ApplicationName': (basestring, True),\n 'Description': (basestring, False),\n 'EnvironmentId': (basestring, False),\n 'OptionSettings': ([OptionSettings], False),\n 'SolutionStackName': (basestring, False),\n 'SourceConfiguration': (SourceConfiguration, False),\n }\n\n\ndef validate_tier_name(name):\n valid_names = [WebServer, Worker]\n if name not in valid_names:\n raise ValueError('Tier name needs to be one of %r' % valid_names)\n return name\n\n\ndef validate_tier_type(tier_type):\n valid_types = [WebServerType, WorkerType]\n if tier_type not in valid_types:\n raise ValueError('Tier type needs to be one of %r' % valid_types)\n return tier_type\n\n\nclass Tier(AWSProperty):\n props = {\n 'Name': (validate_tier_name, False),\n 'Type': (validate_tier_type, False),\n 'Version': (basestring, False),\n }\n\n\nclass Environment(AWSObject):\n resource_type = \"AWS::ElasticBeanstalk::Environment\"\n\n props = {\n 'ApplicationName': (basestring, True),\n 'CNAMEPrefix': (basestring, False),\n 'Description': (basestring, False),\n 'EnvironmentName': (basestring, False),\n 'OptionSettings': ([OptionSettings], False),\n 'SolutionStackName': (basestring, False),\n 'Tags': (Tags, False),\n 'TemplateName': (basestring, False),\n 'Tier': (Tier, False),\n 'VersionLabel': (basestring, False),\n }\n", "path": "troposphere/elasticbeanstalk.py"}]} | 1,243 | 438 |
gh_patches_debug_30488 | rasdani/github-patches | git_diff | kivy__kivy-3652 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Python 3.4 packager KeyError: 'rthooks'
As described here:
http://stackoverflow.com/questions/31083061/kivy-1-9-0-windows-package-keyerror-rthooks/32366409#32366409
I've partially troubleshooted this:
---
The [kivy docs](http://kivy.org/docs/guide/packaging-windows.html) you reference mention adding the following three lines to the top of the `.spec` file:
```
from kivy.tools.packaging.pyinstaller_hooks import install_hooks
import os
install_hooks(globals())
```
The error is happening in `install_hooks(globals())`, which is defined at `\Lib\site-packages\kivy\tools\packaging\pyinstaller_hooks\__init__.py`:
```
from os.path import dirname, join
from functools import partial
curdir = dirname(__file__)
def install_hooks(sym, hookspath=None):
_hookspath = [curdir]
if hookspath is not None:
_hookspath += hookspath
sym['rthooks']['kivy'] = [join(curdir, 'rt-hook-kivy.py')]
sym['Analysis'] = partial(sym['Analysis'], hookspath=_hookspath)
```
But the second last line is causing the message: `WARNING: stderr: KeyError: 'rthooks'`.
So it looks like it's expecting a variable `rthooks` to be in the global namespace, but it's not.
I'm not sure what to do next.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kivy/tools/packaging/pyinstaller_hooks/__init__.py`
Content:
```
1 from os.path import dirname, join
2 from functools import partial
3
4 curdir = dirname(__file__)
5
6
7 def install_hooks(sym, hookspath=None):
8
9 _hookspath = [curdir]
10 if hookspath is not None:
11 _hookspath += hookspath
12
13 sym['rthooks']['kivy'] = [join(curdir, 'rt-hook-kivy.py')]
14 sym['Analysis'] = partial(sym['Analysis'], hookspath=_hookspath)
15
```
Path: `kivy/tools/packaging/pyinstaller_hooks/hook-kivy.py`
Content:
```
1 '''
2 Kivy hook for PyInstaller
3 =========================
4
5 Kivy load itself in a complete dynamic way. PyImported don't see most of the
6 import cause of the Factory and Core.
7 In addition, the data and missing module are not copied automatically.
8
9 With this hook, everything needed for running kivy is correctly copied.
10
11 Check kivy documentation about how to use these hook for packaging application.
12 '''
13
14 import kivy
15 from kivy.factory import Factory
16
17
18 def get_modules():
19 return [x.get('module', None) for x in Factory.classes.values()]
20
21
22 datas = [
23 (kivy.kivy_data_dir, 'kivy_install'),
24 (kivy.kivy_modules_dir, 'kivy_install'),
25 (kivy.kivy_exts_dir, 'kivy_install'),
26 ]
27
28 # extensions
29 _kivy_modules = [
30
31 # sdl2
32
33 # uncomment this if you need to package pygame.
34 # pygame
35 #'pygame.event',
36 #'pygame.video',
37 #'pygame.image',
38 #'pygame.display',
39 #'pygame',
40 'xml.etree.cElementTree',
41
42 # external modules
43 'kivy.cache',
44 'kivy.atlas',
45 'kivy.network',
46 'kivy.network.urlrequest',
47 'kivy.lib.osc',
48 'kivy.lib.osc.OSC',
49 'kivy.lib.osc.oscAPI',
50 'kivy.lib.mtdev',
51 'kivy.lib.sdl2',
52 'kivy.factory_registers',
53 'kivy.input.recorder',
54 'kivy.input.providers',
55 'kivy.input.providers.tuio',
56 'kivy.input.providers.mouse',
57 'kivy.input.providers.wm_common',
58 'kivy.input.providers.wm_touch',
59 'kivy.input.providers.wm_pen',
60 'kivy.input.providers.hidinput',
61 'kivy.input.providers.linuxwacom',
62 'kivy.input.providers.mactouch',
63 'kivy.input.providers.mouse',
64 'kivy.input.providers.mtdev',
65
66 # compiled modules
67 'kivy.event',
68 'kivy.graphics.buffer',
69 'kivy.graphics.c_opengl_debug',
70 'kivy.graphics.compiler',
71 'kivy.graphics.context_instructions',
72 'kivy.graphics.fbo',
73 'kivy.graphics.instructions',
74 'kivy.graphics.opengl',
75 'kivy.graphics.opengl_utils',
76 'kivy.graphics.shader',
77 'kivy.graphics.stenctil_instructions',
78 'kivy.graphics.texture',
79 'kivy.graphics.transformation',
80 'kivy.graphics.vbo',
81 'kivy.graphics.vertex',
82 'kivy.graphics.vertex_instructions',
83 'kivy.graphics.tesselator',
84 'kivy.properties',
85
86 # core
87 'kivy.core.audio.audio_gstplayer',
88 'kivy.core.audio.audio_pygst',
89 'kivy.core.audio.audio_sdl',
90 'kivy.core.audio.audio_pygame',
91 'kivy.core.camera.camera_avfoundation',
92 'kivy.core.camera.camera_pygst',
93 'kivy.core.camera.camera_opencv',
94 'kivy.core.camera.camera_videocapture',
95 'kivy.core.clipboard.clipboard_sdl2',
96 'kivy.core.clipboard.clipboard_android',
97 'kivy.core.clipboard.clipboard_pygame',
98 'kivy.core.clipboard.clipboard_dummy',
99 'kivy.core.image.img_imageio',
100 'kivy.core.image.img_tex',
101 'kivy.core.image.img_dds',
102 'kivy.core.image.img_sdl2',
103 'kivy.core.image.img_pygame',
104 'kivy.core.image.img_pil',
105 'kivy.core.image.img_gif',
106 'kivy.core.spelling.spelling_enchant',
107 'kivy.core.spelling.spelling_osxappkit',
108 'kivy.core.text.text_sdl2',
109 'kivy.core.text.text_pygame',
110 'kivy.core.text.text_sdlttf',
111 'kivy.core.text.text_pil',
112 'kivy.core.video.video_gstplayer',
113 'kivy.core.video.video_pygst',
114 'kivy.core.video.video_ffmpeg',
115 'kivy.core.video.video_pyglet',
116 'kivy.core.video.video_null',
117 'kivy.core.window.window_sdl2',
118 'kivy.core.window.window_egl_rpi',
119 'kivy.core.window.window_pygame',
120 'kivy.core.window.window_sdl',
121 'kivy.core.window.window_x11',
122 ]
123
124 hiddenimports = _kivy_modules + get_modules()
125 hiddenimports = list(set(hiddenimports))
126
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kivy/tools/packaging/pyinstaller_hooks/__init__.py b/kivy/tools/packaging/pyinstaller_hooks/__init__.py
--- a/kivy/tools/packaging/pyinstaller_hooks/__init__.py
+++ b/kivy/tools/packaging/pyinstaller_hooks/__init__.py
@@ -1,14 +1,15 @@
from os.path import dirname, join
-from functools import partial
curdir = dirname(__file__)
-def install_hooks(sym, hookspath=None):
+def runtime_hooks():
+ return [join(curdir, 'rt-hook-kivy.py')]
- _hookspath = [curdir]
- if hookspath is not None:
- _hookspath += hookspath
- sym['rthooks']['kivy'] = [join(curdir, 'rt-hook-kivy.py')]
- sym['Analysis'] = partial(sym['Analysis'], hookspath=_hookspath)
+def hookspath():
+ return [curdir]
+
+
+def get_hooks():
+ return {'hookspath': hookspath(), 'runtime_hooks': runtime_hooks()}
diff --git a/kivy/tools/packaging/pyinstaller_hooks/hook-kivy.py b/kivy/tools/packaging/pyinstaller_hooks/hook-kivy.py
--- a/kivy/tools/packaging/pyinstaller_hooks/hook-kivy.py
+++ b/kivy/tools/packaging/pyinstaller_hooks/hook-kivy.py
@@ -11,19 +11,37 @@
Check kivy documentation about how to use these hook for packaging application.
'''
+from os.path import join, basename
+from distutils.version import LooseVersion
+import PyInstaller
+
import kivy
from kivy.factory import Factory
+try:
+ pyinst_ver = PyInstaller.get_version() # pyinstaller < 3.0x
+except AttributeError:
+ pyinst_ver = PyInstaller.__version__
+
def get_modules():
return [x.get('module', None) for x in Factory.classes.values()]
-datas = [
- (kivy.kivy_data_dir, 'kivy_install'),
- (kivy.kivy_modules_dir, 'kivy_install'),
- (kivy.kivy_exts_dir, 'kivy_install'),
-]
+if LooseVersion(pyinst_ver) >= LooseVersion('3.0'):
+ # in pyinstaller 3, the directory contents rather than the directory itself
+ # is copied. See https://github.com/pyinstaller/pyinstaller/issues/1513.
+ datas = [
+ (kivy.kivy_data_dir,
+ join('kivy_install', basename(kivy.kivy_data_dir))),
+ (kivy.kivy_modules_dir,
+ join('kivy_install', basename(kivy.kivy_modules_dir))),
+ ]
+else:
+ datas = [
+ (kivy.kivy_data_dir, 'kivy_install'),
+ (kivy.kivy_modules_dir, 'kivy_install'),
+ ]
# extensions
_kivy_modules = [
| {"golden_diff": "diff --git a/kivy/tools/packaging/pyinstaller_hooks/__init__.py b/kivy/tools/packaging/pyinstaller_hooks/__init__.py\n--- a/kivy/tools/packaging/pyinstaller_hooks/__init__.py\n+++ b/kivy/tools/packaging/pyinstaller_hooks/__init__.py\n@@ -1,14 +1,15 @@\n from os.path import dirname, join\n-from functools import partial\n \n curdir = dirname(__file__)\n \n \n-def install_hooks(sym, hookspath=None):\n+def runtime_hooks():\n+ return [join(curdir, 'rt-hook-kivy.py')]\n \n- _hookspath = [curdir]\n- if hookspath is not None:\n- _hookspath += hookspath\n \n- sym['rthooks']['kivy'] = [join(curdir, 'rt-hook-kivy.py')]\n- sym['Analysis'] = partial(sym['Analysis'], hookspath=_hookspath)\n+def hookspath():\n+ return [curdir]\n+\n+\n+def get_hooks():\n+ return {'hookspath': hookspath(), 'runtime_hooks': runtime_hooks()}\ndiff --git a/kivy/tools/packaging/pyinstaller_hooks/hook-kivy.py b/kivy/tools/packaging/pyinstaller_hooks/hook-kivy.py\n--- a/kivy/tools/packaging/pyinstaller_hooks/hook-kivy.py\n+++ b/kivy/tools/packaging/pyinstaller_hooks/hook-kivy.py\n@@ -11,19 +11,37 @@\n Check kivy documentation about how to use these hook for packaging application.\n '''\n \n+from os.path import join, basename\n+from distutils.version import LooseVersion\n+import PyInstaller\n+\n import kivy\n from kivy.factory import Factory\n \n+try:\n+ pyinst_ver = PyInstaller.get_version() # pyinstaller < 3.0x\n+except AttributeError:\n+ pyinst_ver = PyInstaller.__version__\n+\n \n def get_modules():\n return [x.get('module', None) for x in Factory.classes.values()]\n \n \n-datas = [\n- (kivy.kivy_data_dir, 'kivy_install'),\n- (kivy.kivy_modules_dir, 'kivy_install'),\n- (kivy.kivy_exts_dir, 'kivy_install'),\n-]\n+if LooseVersion(pyinst_ver) >= LooseVersion('3.0'):\n+ # in pyinstaller 3, the directory contents rather than the directory itself\n+ # is copied. 
See https://github.com/pyinstaller/pyinstaller/issues/1513.\n+ datas = [\n+ (kivy.kivy_data_dir,\n+ join('kivy_install', basename(kivy.kivy_data_dir))),\n+ (kivy.kivy_modules_dir,\n+ join('kivy_install', basename(kivy.kivy_modules_dir))),\n+ ]\n+else:\n+ datas = [\n+ (kivy.kivy_data_dir, 'kivy_install'),\n+ (kivy.kivy_modules_dir, 'kivy_install'),\n+ ]\n \n # extensions\n _kivy_modules = [\n", "issue": "Python 3.4 packager KeyError: 'rthooks'\nAs described here:\nhttp://stackoverflow.com/questions/31083061/kivy-1-9-0-windows-package-keyerror-rthooks/32366409#32366409\n\nI've partially troubleshooted this:\n\n---\n\nThe [kivy docs](http://kivy.org/docs/guide/packaging-windows.html) you reference mention adding the following three lines to the top of the `.spec` file:\n\n```\nfrom kivy.tools.packaging.pyinstaller_hooks import install_hooks\nimport os\ninstall_hooks(globals())\n```\n\nThe error is happening in `install_hooks(globals())`, which is defined at `\\Lib\\site-packages\\kivy\\tools\\packaging\\pyinstaller_hooks\\__init__.py`:\n\n```\nfrom os.path import dirname, join\nfrom functools import partial\n\ncurdir = dirname(__file__)\n\ndef install_hooks(sym, hookspath=None):\n\n _hookspath = [curdir]\n if hookspath is not None:\n _hookspath += hookspath\n\n sym['rthooks']['kivy'] = [join(curdir, 'rt-hook-kivy.py')]\n sym['Analysis'] = partial(sym['Analysis'], hookspath=_hookspath)\n```\n\nBut the second last line is causing the message: `WARNING: stderr: KeyError: 'rthooks'`.\n\nSo it looks like it's expecting a variable `rthooks` to be in the global namespace, but it's not.\n\nI'm not sure what to do next.\n\n", "before_files": [{"content": "from os.path import dirname, join\nfrom functools import partial\n\ncurdir = dirname(__file__)\n\n\ndef install_hooks(sym, hookspath=None):\n\n _hookspath = [curdir]\n if hookspath is not None:\n _hookspath += hookspath\n\n sym['rthooks']['kivy'] = [join(curdir, 'rt-hook-kivy.py')]\n sym['Analysis'] = partial(sym['Analysis'], hookspath=_hookspath)\n", "path": "kivy/tools/packaging/pyinstaller_hooks/__init__.py"}, {"content": "'''\nKivy hook for PyInstaller\n=========================\n\nKivy load itself in a complete dynamic way. 
PyImported don't see most of the\nimport cause of the Factory and Core.\nIn addition, the data and missing module are not copied automatically.\n\nWith this hook, everything needed for running kivy is correctly copied.\n\nCheck kivy documentation about how to use these hook for packaging application.\n'''\n\nimport kivy\nfrom kivy.factory import Factory\n\n\ndef get_modules():\n return [x.get('module', None) for x in Factory.classes.values()]\n\n\ndatas = [\n (kivy.kivy_data_dir, 'kivy_install'),\n (kivy.kivy_modules_dir, 'kivy_install'),\n (kivy.kivy_exts_dir, 'kivy_install'),\n]\n\n# extensions\n_kivy_modules = [\n\n # sdl2\n\n # uncomment this if you need to package pygame.\n # pygame\n #'pygame.event',\n #'pygame.video',\n #'pygame.image',\n #'pygame.display',\n #'pygame',\n 'xml.etree.cElementTree',\n\n # external modules\n 'kivy.cache',\n 'kivy.atlas',\n 'kivy.network',\n 'kivy.network.urlrequest',\n 'kivy.lib.osc',\n 'kivy.lib.osc.OSC',\n 'kivy.lib.osc.oscAPI',\n 'kivy.lib.mtdev',\n 'kivy.lib.sdl2',\n 'kivy.factory_registers',\n 'kivy.input.recorder',\n 'kivy.input.providers',\n 'kivy.input.providers.tuio',\n 'kivy.input.providers.mouse',\n 'kivy.input.providers.wm_common',\n 'kivy.input.providers.wm_touch',\n 'kivy.input.providers.wm_pen',\n 'kivy.input.providers.hidinput',\n 'kivy.input.providers.linuxwacom',\n 'kivy.input.providers.mactouch',\n 'kivy.input.providers.mouse',\n 'kivy.input.providers.mtdev',\n\n # compiled modules\n 'kivy.event',\n 'kivy.graphics.buffer',\n 'kivy.graphics.c_opengl_debug',\n 'kivy.graphics.compiler',\n 'kivy.graphics.context_instructions',\n 'kivy.graphics.fbo',\n 'kivy.graphics.instructions',\n 'kivy.graphics.opengl',\n 'kivy.graphics.opengl_utils',\n 'kivy.graphics.shader',\n 'kivy.graphics.stenctil_instructions',\n 'kivy.graphics.texture',\n 'kivy.graphics.transformation',\n 'kivy.graphics.vbo',\n 'kivy.graphics.vertex',\n 'kivy.graphics.vertex_instructions',\n 'kivy.graphics.tesselator',\n 'kivy.properties',\n\n # core\n 'kivy.core.audio.audio_gstplayer',\n 'kivy.core.audio.audio_pygst',\n 'kivy.core.audio.audio_sdl',\n 'kivy.core.audio.audio_pygame',\n 'kivy.core.camera.camera_avfoundation',\n 'kivy.core.camera.camera_pygst',\n 'kivy.core.camera.camera_opencv',\n 'kivy.core.camera.camera_videocapture',\n 'kivy.core.clipboard.clipboard_sdl2',\n 'kivy.core.clipboard.clipboard_android',\n 'kivy.core.clipboard.clipboard_pygame',\n 'kivy.core.clipboard.clipboard_dummy',\n 'kivy.core.image.img_imageio',\n 'kivy.core.image.img_tex',\n 'kivy.core.image.img_dds',\n 'kivy.core.image.img_sdl2',\n 'kivy.core.image.img_pygame',\n 'kivy.core.image.img_pil',\n 'kivy.core.image.img_gif',\n 'kivy.core.spelling.spelling_enchant',\n 'kivy.core.spelling.spelling_osxappkit',\n 'kivy.core.text.text_sdl2',\n 'kivy.core.text.text_pygame',\n 'kivy.core.text.text_sdlttf',\n 'kivy.core.text.text_pil',\n 'kivy.core.video.video_gstplayer',\n 'kivy.core.video.video_pygst',\n 'kivy.core.video.video_ffmpeg',\n 'kivy.core.video.video_pyglet',\n 'kivy.core.video.video_null',\n 'kivy.core.window.window_sdl2',\n 'kivy.core.window.window_egl_rpi',\n 'kivy.core.window.window_pygame',\n 'kivy.core.window.window_sdl',\n 'kivy.core.window.window_x11',\n]\n\nhiddenimports = _kivy_modules + get_modules()\nhiddenimports = list(set(hiddenimports))\n\n", "path": "kivy/tools/packaging/pyinstaller_hooks/hook-kivy.py"}], "after_files": [{"content": "from os.path import dirname, join\n\ncurdir = dirname(__file__)\n\n\ndef runtime_hooks():\n return [join(curdir, 'rt-hook-kivy.py')]\n\n\ndef 
hookspath():\n return [curdir]\n\n\ndef get_hooks():\n return {'hookspath': hookspath(), 'runtime_hooks': runtime_hooks()}\n", "path": "kivy/tools/packaging/pyinstaller_hooks/__init__.py"}, {"content": "'''\nKivy hook for PyInstaller\n=========================\n\nKivy load itself in a complete dynamic way. PyImported don't see most of the\nimport cause of the Factory and Core.\nIn addition, the data and missing module are not copied automatically.\n\nWith this hook, everything needed for running kivy is correctly copied.\n\nCheck kivy documentation about how to use these hook for packaging application.\n'''\n\nfrom os.path import join, basename\nfrom distutils.version import LooseVersion\nimport PyInstaller\n\nimport kivy\nfrom kivy.factory import Factory\n\ntry:\n pyinst_ver = PyInstaller.get_version() # pyinstaller < 3.0x\nexcept AttributeError:\n pyinst_ver = PyInstaller.__version__\n\n\ndef get_modules():\n return [x.get('module', None) for x in Factory.classes.values()]\n\n\nif LooseVersion(pyinst_ver) >= LooseVersion('3.0'):\n # in pyinstaller 3, the directory contents rather than the directory itself\n # is copied. See https://github.com/pyinstaller/pyinstaller/issues/1513.\n datas = [\n (kivy.kivy_data_dir,\n join('kivy_install', basename(kivy.kivy_data_dir))),\n (kivy.kivy_modules_dir,\n join('kivy_install', basename(kivy.kivy_modules_dir))),\n ]\nelse:\n datas = [\n (kivy.kivy_data_dir, 'kivy_install'),\n (kivy.kivy_modules_dir, 'kivy_install'),\n ]\n\n# extensions\n_kivy_modules = [\n\n # sdl2\n\n # uncomment this if you need to package pygame.\n # pygame\n #'pygame.event',\n #'pygame.video',\n #'pygame.image',\n #'pygame.display',\n #'pygame',\n 'xml.etree.cElementTree',\n\n # external modules\n 'kivy.cache',\n 'kivy.atlas',\n 'kivy.network',\n 'kivy.network.urlrequest',\n 'kivy.lib.osc',\n 'kivy.lib.osc.OSC',\n 'kivy.lib.osc.oscAPI',\n 'kivy.lib.mtdev',\n 'kivy.lib.sdl2',\n 'kivy.factory_registers',\n 'kivy.input.recorder',\n 'kivy.input.providers',\n 'kivy.input.providers.tuio',\n 'kivy.input.providers.mouse',\n 'kivy.input.providers.wm_common',\n 'kivy.input.providers.wm_touch',\n 'kivy.input.providers.wm_pen',\n 'kivy.input.providers.hidinput',\n 'kivy.input.providers.linuxwacom',\n 'kivy.input.providers.mactouch',\n 'kivy.input.providers.mouse',\n 'kivy.input.providers.mtdev',\n\n # compiled modules\n 'kivy.event',\n 'kivy.graphics.buffer',\n 'kivy.graphics.c_opengl_debug',\n 'kivy.graphics.compiler',\n 'kivy.graphics.context_instructions',\n 'kivy.graphics.fbo',\n 'kivy.graphics.instructions',\n 'kivy.graphics.opengl',\n 'kivy.graphics.opengl_utils',\n 'kivy.graphics.shader',\n 'kivy.graphics.stenctil_instructions',\n 'kivy.graphics.texture',\n 'kivy.graphics.transformation',\n 'kivy.graphics.vbo',\n 'kivy.graphics.vertex',\n 'kivy.graphics.vertex_instructions',\n 'kivy.graphics.tesselator',\n 'kivy.properties',\n\n # core\n 'kivy.core.audio.audio_gstplayer',\n 'kivy.core.audio.audio_pygst',\n 'kivy.core.audio.audio_sdl',\n 'kivy.core.audio.audio_pygame',\n 'kivy.core.camera.camera_avfoundation',\n 'kivy.core.camera.camera_pygst',\n 'kivy.core.camera.camera_opencv',\n 'kivy.core.camera.camera_videocapture',\n 'kivy.core.clipboard.clipboard_sdl2',\n 'kivy.core.clipboard.clipboard_android',\n 'kivy.core.clipboard.clipboard_pygame',\n 'kivy.core.clipboard.clipboard_dummy',\n 'kivy.core.image.img_imageio',\n 'kivy.core.image.img_tex',\n 'kivy.core.image.img_dds',\n 'kivy.core.image.img_sdl2',\n 'kivy.core.image.img_pygame',\n 'kivy.core.image.img_pil',\n 
'kivy.core.image.img_gif',\n 'kivy.core.spelling.spelling_enchant',\n 'kivy.core.spelling.spelling_osxappkit',\n 'kivy.core.text.text_sdl2',\n 'kivy.core.text.text_pygame',\n 'kivy.core.text.text_sdlttf',\n 'kivy.core.text.text_pil',\n 'kivy.core.video.video_gstplayer',\n 'kivy.core.video.video_pygst',\n 'kivy.core.video.video_ffmpeg',\n 'kivy.core.video.video_pyglet',\n 'kivy.core.video.video_null',\n 'kivy.core.window.window_sdl2',\n 'kivy.core.window.window_egl_rpi',\n 'kivy.core.window.window_pygame',\n 'kivy.core.window.window_sdl',\n 'kivy.core.window.window_x11',\n]\n\nhiddenimports = _kivy_modules + get_modules()\nhiddenimports = list(set(hiddenimports))\n\n", "path": "kivy/tools/packaging/pyinstaller_hooks/hook-kivy.py"}]} | 1,995 | 649 |
gh_patches_debug_19869 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-529 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add suppress_instrumentation flag in context for Metrics
Similar to [logic](https://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/__init__.py#L205) in SpanProcessors, this need to be done in Metrics to avoid duplicated telemetry when using Http ext or other packages relying on this
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import atexit
16 import threading
17
18
19 class PushController(threading.Thread):
20 """A push based controller, used for exporting.
21
22 Uses a worker thread that periodically collects metrics for exporting,
23 exports them and performs some post-processing.
24 """
25
26 daemon = True
27
28 def __init__(self, meter, exporter, interval, shutdown_on_exit=True):
29 super().__init__()
30 self.meter = meter
31 self.exporter = exporter
32 self.interval = interval
33 self.finished = threading.Event()
34 self._atexit_handler = None
35 if shutdown_on_exit:
36 self._atexit_handler = atexit.register(self.shutdown)
37 self.start()
38
39 def run(self):
40 while not self.finished.wait(self.interval):
41 self.tick()
42
43 def shutdown(self):
44 self.finished.set()
45 self.exporter.shutdown()
46 if self._atexit_handler is not None:
47 atexit.unregister(self._atexit_handler)
48 self._atexit_handler = None
49
50 def tick(self):
51 # Collect all of the meter's metrics to be exported
52 self.meter.collect()
53 # Export the given metrics in the batcher
54 self.exporter.export(self.meter.batcher.checkpoint_set())
55 # Perform post-exporting logic based on batcher configuration
56 self.meter.batcher.finished_collection()
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py
@@ -15,6 +15,8 @@
import atexit
import threading
+from opentelemetry.context import attach, detach, set_value
+
class PushController(threading.Thread):
"""A push based controller, used for exporting.
@@ -50,7 +52,9 @@
def tick(self):
# Collect all of the meter's metrics to be exported
self.meter.collect()
+ token = attach(set_value("suppress_instrumentation", True))
# Export the given metrics in the batcher
self.exporter.export(self.meter.batcher.checkpoint_set())
+ detach(token)
# Perform post-exporting logic based on batcher configuration
self.meter.batcher.finished_collection()
| {"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py\n@@ -15,6 +15,8 @@\n import atexit\n import threading\n \n+from opentelemetry.context import attach, detach, set_value\n+\n \n class PushController(threading.Thread):\n \"\"\"A push based controller, used for exporting.\n@@ -50,7 +52,9 @@\n def tick(self):\n # Collect all of the meter's metrics to be exported\n self.meter.collect()\n+ token = attach(set_value(\"suppress_instrumentation\", True))\n # Export the given metrics in the batcher\n self.exporter.export(self.meter.batcher.checkpoint_set())\n+ detach(token)\n # Perform post-exporting logic based on batcher configuration\n self.meter.batcher.finished_collection()\n", "issue": "Add suppress_instrumentation flag in context for Metrics\nSimilar to [logic](https://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-sdk/src/opentelemetry/sdk/trace/export/__init__.py#L205) in SpanProcessors, this need to be done in Metrics to avoid duplicated telemetry when using Http ext or other packages relying on this\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport atexit\nimport threading\n\n\nclass PushController(threading.Thread):\n \"\"\"A push based controller, used for exporting.\n\n Uses a worker thread that periodically collects metrics for exporting,\n exports them and performs some post-processing.\n \"\"\"\n\n daemon = True\n\n def __init__(self, meter, exporter, interval, shutdown_on_exit=True):\n super().__init__()\n self.meter = meter\n self.exporter = exporter\n self.interval = interval\n self.finished = threading.Event()\n self._atexit_handler = None\n if shutdown_on_exit:\n self._atexit_handler = atexit.register(self.shutdown)\n self.start()\n\n def run(self):\n while not self.finished.wait(self.interval):\n self.tick()\n\n def shutdown(self):\n self.finished.set()\n self.exporter.shutdown()\n if self._atexit_handler is not None:\n atexit.unregister(self._atexit_handler)\n self._atexit_handler = None\n\n def tick(self):\n # Collect all of the meter's metrics to be exported\n self.meter.collect()\n # Export the given metrics in the batcher\n self.exporter.export(self.meter.batcher.checkpoint_set())\n # Perform post-exporting logic based on batcher configuration\n self.meter.batcher.finished_collection()\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable 
law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport atexit\nimport threading\n\nfrom opentelemetry.context import attach, detach, set_value\n\n\nclass PushController(threading.Thread):\n \"\"\"A push based controller, used for exporting.\n\n Uses a worker thread that periodically collects metrics for exporting,\n exports them and performs some post-processing.\n \"\"\"\n\n daemon = True\n\n def __init__(self, meter, exporter, interval, shutdown_on_exit=True):\n super().__init__()\n self.meter = meter\n self.exporter = exporter\n self.interval = interval\n self.finished = threading.Event()\n self._atexit_handler = None\n if shutdown_on_exit:\n self._atexit_handler = atexit.register(self.shutdown)\n self.start()\n\n def run(self):\n while not self.finished.wait(self.interval):\n self.tick()\n\n def shutdown(self):\n self.finished.set()\n self.exporter.shutdown()\n if self._atexit_handler is not None:\n atexit.unregister(self._atexit_handler)\n self._atexit_handler = None\n\n def tick(self):\n # Collect all of the meter's metrics to be exported\n self.meter.collect()\n token = attach(set_value(\"suppress_instrumentation\", True))\n # Export the given metrics in the batcher\n self.exporter.export(self.meter.batcher.checkpoint_set())\n detach(token)\n # Perform post-exporting logic based on batcher configuration\n self.meter.batcher.finished_collection()\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/metrics/export/controller.py"}]} | 858 | 224 |
gh_patches_debug_8934 | rasdani/github-patches | git_diff | vispy__vispy-1595 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Load STL files into vispy
Hi there, I think I found a bug in vispy/vispy/io/mesh.py in col 42:
mesh = load_stl(fname)
when I try to import a *.stl file by read_mesh(fname), an error occured like this:
File "D:\Python3.5\lib\site-packages\vispy\io\mesh.py", line 43, in read_mesh
mesh = load_stl(fname)
File "D:\Python3.5\lib\site-packages\vispy\io\stl.py", line 43, in load_stl
file_pos = file_obj.tell()
AttributeError: 'str' object has no attribute 'tell'
by change col42 into :mesh = trimesh.load(fname), problem soved!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vispy/io/mesh.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 """ Reading and writing of data like images and meshes.
6 """
7
8 from os import path as op
9
10 from .wavefront import WavefrontReader, WavefrontWriter
11 from .stl import load_stl
12
13
14 def read_mesh(fname):
15 """Read mesh data from file.
16
17 Parameters
18 ----------
19 fname : str
20 File name to read. Format will be inferred from the filename.
21 Currently only '.obj' and '.obj.gz' are supported.
22
23 Returns
24 -------
25 vertices : array
26 Vertices.
27 faces : array | None
28 Triangle face definitions.
29 normals : array
30 Normals for the mesh.
31 texcoords : array | None
32 Texture coordinates.
33 """
34 # Check format
35 fmt = op.splitext(fname)[1].lower()
36 if fmt == '.gz':
37 fmt = op.splitext(op.splitext(fname)[0])[1].lower()
38
39 if fmt in ('.obj'):
40 return WavefrontReader.read(fname)
41 elif fmt in ('.stl'):
42 mesh = load_stl(fname)
43 vertices = mesh.vertices
44 faces = mesh.faces
45 normals = mesh.face_normals
46 texcoords = None
47 return vertices, faces, normals, texcoords
48 elif not format:
49 raise ValueError('read_mesh needs could not determine format.')
50 else:
51 raise ValueError('read_mesh does not understand format %s.' % fmt)
52
53
54 def write_mesh(fname, vertices, faces, normals, texcoords, name='',
55 format='obj', overwrite=False, reshape_faces=True):
56 """ Write mesh data to file.
57
58 Parameters
59 ----------
60 fname : str
61 Filename to write. Must end with ".obj" or ".gz".
62 vertices : array
63 Vertices.
64 faces : array | None
65 Triangle face definitions.
66 normals : array
67 Normals for the mesh.
68 texcoords : array | None
69 Texture coordinates.
70 name : str
71 Name of the object.
72 format : str
73 Currently only "obj" is supported.
74 overwrite : bool
75 If the file exists, overwrite it.
76 reshape_faces : bool
77 Reshape the `faces` array to (Nf, 3). Set to `False`
78 if you need to write a mesh with non triangular faces.
79 """
80 # Check file
81 if op.isfile(fname) and not overwrite:
82 raise IOError('file "%s" exists, use overwrite=True' % fname)
83
84 # Check format
85 if format not in ('obj'):
86 raise ValueError('Only "obj" format writing currently supported')
87 WavefrontWriter.write(fname, vertices, faces,
88 normals, texcoords, name, reshape_faces)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vispy/io/mesh.py b/vispy/io/mesh.py
--- a/vispy/io/mesh.py
+++ b/vispy/io/mesh.py
@@ -39,10 +39,11 @@
if fmt in ('.obj'):
return WavefrontReader.read(fname)
elif fmt in ('.stl'):
- mesh = load_stl(fname)
- vertices = mesh.vertices
- faces = mesh.faces
- normals = mesh.face_normals
+ file_obj = open(fname, mode='rb')
+ mesh = load_stl(file_obj)
+ vertices = mesh['vertices']
+ faces = mesh['faces']
+ normals = mesh['face_normals']
texcoords = None
return vertices, faces, normals, texcoords
elif not format:
| {"golden_diff": "diff --git a/vispy/io/mesh.py b/vispy/io/mesh.py\n--- a/vispy/io/mesh.py\n+++ b/vispy/io/mesh.py\n@@ -39,10 +39,11 @@\n if fmt in ('.obj'):\n return WavefrontReader.read(fname)\n elif fmt in ('.stl'):\n- mesh = load_stl(fname)\n- vertices = mesh.vertices\n- faces = mesh.faces\n- normals = mesh.face_normals\n+ file_obj = open(fname, mode='rb')\n+ mesh = load_stl(file_obj)\n+ vertices = mesh['vertices']\n+ faces = mesh['faces']\n+ normals = mesh['face_normals']\n texcoords = None\n return vertices, faces, normals, texcoords\n elif not format:\n", "issue": "Load STL files into vispy\nHi there, I think I found a bug in vispy/vispy/io/mesh.py in col 42:\r\nmesh = load_stl(fname)\r\nwhen I try to import a *.stl file by read_mesh(fname), an error occured like this: \r\n File \"D:\\Python3.5\\lib\\site-packages\\vispy\\io\\mesh.py\", line 43, in read_mesh\r\n mesh = load_stl(fname)\r\n File \"D:\\Python3.5\\lib\\site-packages\\vispy\\io\\stl.py\", line 43, in load_stl\r\n file_pos = file_obj.tell()\r\nAttributeError: 'str' object has no attribute 'tell'\r\nby change col42 into :mesh = trimesh.load(fname), problem soved!\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\" Reading and writing of data like images and meshes.\n\"\"\"\n\nfrom os import path as op\n\nfrom .wavefront import WavefrontReader, WavefrontWriter\nfrom .stl import load_stl\n\n\ndef read_mesh(fname):\n \"\"\"Read mesh data from file.\n\n Parameters\n ----------\n fname : str\n File name to read. Format will be inferred from the filename.\n Currently only '.obj' and '.obj.gz' are supported.\n\n Returns\n -------\n vertices : array\n Vertices.\n faces : array | None\n Triangle face definitions.\n normals : array\n Normals for the mesh.\n texcoords : array | None\n Texture coordinates.\n \"\"\"\n # Check format\n fmt = op.splitext(fname)[1].lower()\n if fmt == '.gz':\n fmt = op.splitext(op.splitext(fname)[0])[1].lower()\n\n if fmt in ('.obj'):\n return WavefrontReader.read(fname)\n elif fmt in ('.stl'):\n mesh = load_stl(fname)\n vertices = mesh.vertices\n faces = mesh.faces\n normals = mesh.face_normals\n texcoords = None\n return vertices, faces, normals, texcoords\n elif not format:\n raise ValueError('read_mesh needs could not determine format.')\n else:\n raise ValueError('read_mesh does not understand format %s.' % fmt)\n\n\ndef write_mesh(fname, vertices, faces, normals, texcoords, name='',\n format='obj', overwrite=False, reshape_faces=True):\n \"\"\" Write mesh data to file.\n\n Parameters\n ----------\n fname : str\n Filename to write. Must end with \".obj\" or \".gz\".\n vertices : array\n Vertices.\n faces : array | None\n Triangle face definitions.\n normals : array\n Normals for the mesh.\n texcoords : array | None\n Texture coordinates.\n name : str\n Name of the object.\n format : str\n Currently only \"obj\" is supported.\n overwrite : bool\n If the file exists, overwrite it.\n reshape_faces : bool\n Reshape the `faces` array to (Nf, 3). 
Set to `False`\n if you need to write a mesh with non triangular faces.\n \"\"\"\n # Check file\n if op.isfile(fname) and not overwrite:\n raise IOError('file \"%s\" exists, use overwrite=True' % fname)\n\n # Check format\n if format not in ('obj'):\n raise ValueError('Only \"obj\" format writing currently supported')\n WavefrontWriter.write(fname, vertices, faces,\n normals, texcoords, name, reshape_faces)\n", "path": "vispy/io/mesh.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\" Reading and writing of data like images and meshes.\n\"\"\"\n\nfrom os import path as op\n\nfrom .wavefront import WavefrontReader, WavefrontWriter\nfrom .stl import load_stl\n\n\ndef read_mesh(fname):\n \"\"\"Read mesh data from file.\n\n Parameters\n ----------\n fname : str\n File name to read. Format will be inferred from the filename.\n Currently only '.obj' and '.obj.gz' are supported.\n\n Returns\n -------\n vertices : array\n Vertices.\n faces : array | None\n Triangle face definitions.\n normals : array\n Normals for the mesh.\n texcoords : array | None\n Texture coordinates.\n \"\"\"\n # Check format\n fmt = op.splitext(fname)[1].lower()\n if fmt == '.gz':\n fmt = op.splitext(op.splitext(fname)[0])[1].lower()\n\n if fmt in ('.obj'):\n return WavefrontReader.read(fname)\n elif fmt in ('.stl'):\n file_obj = open(fname, mode='rb')\n mesh = load_stl(file_obj)\n vertices = mesh['vertices']\n faces = mesh['faces']\n normals = mesh['face_normals']\n texcoords = None\n return vertices, faces, normals, texcoords\n elif not format:\n raise ValueError('read_mesh needs could not determine format.')\n else:\n raise ValueError('read_mesh does not understand format %s.' % fmt)\n\n\ndef write_mesh(fname, vertices, faces, normals, texcoords, name='',\n format='obj', overwrite=False, reshape_faces=True):\n \"\"\" Write mesh data to file.\n\n Parameters\n ----------\n fname : str\n Filename to write. Must end with \".obj\" or \".gz\".\n vertices : array\n Vertices.\n faces : array | None\n Triangle face definitions.\n normals : array\n Normals for the mesh.\n texcoords : array | None\n Texture coordinates.\n name : str\n Name of the object.\n format : str\n Currently only \"obj\" is supported.\n overwrite : bool\n If the file exists, overwrite it.\n reshape_faces : bool\n Reshape the `faces` array to (Nf, 3). Set to `False`\n if you need to write a mesh with non triangular faces.\n \"\"\"\n # Check file\n if op.isfile(fname) and not overwrite:\n raise IOError('file \"%s\" exists, use overwrite=True' % fname)\n\n # Check format\n if format not in ('obj'):\n raise ValueError('Only \"obj\" format writing currently supported')\n WavefrontWriter.write(fname, vertices, faces,\n normals, texcoords, name, reshape_faces)\n", "path": "vispy/io/mesh.py"}]} | 1,213 | 179 |
gh_patches_debug_34641 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1611 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
List Followers You Know When Looking at User's Follower List
**Is your feature request related to a problem? Please describe.**
When you look at a user's profile, the link to the user's followers will be titled 'X followers you follow', but the followers list is supplied in reverse-chronological order by date followed, with followers in common given no priority.
When trying to decide how to handle a follow request, seeing which people we both know is the most relevant information to me.
**Describe the solution you'd like**
Sort followers that I know (follow) ahead of all other followers when showing a users Followers list (honestly, sorting the Following list this way would be nice too)
**Describe alternatives you've considered**
Have a separate view for 'Followers you know' (and rename the link to the Followers list if it doesn't point to that list)
**Additional context**
Hi mouse
--- END ISSUE ---
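Note (illustrative sketch, not the repository's confirmed fix): one way to get the requested ordering in Django is to annotate each follower with a flag for "the viewing user follows this account" and sort on that flag before the date. The helper name `annotate_if_follows` and the `-created_date` tiebreak below are assumptions for this sketch.
```python
from django.db.models import Count, Q


def annotate_if_follows(request_user, queryset):
    """Order accounts so the ones the viewer follows come first.

    Hypothetical helper: annotates each account with 1 if
    `request_user` is among its followers (i.e. the viewer
    follows that account), else 0, then sorts on that flag.
    """
    if not request_user.is_authenticated:
        return queryset.order_by("-created_date")
    return queryset.annotate(
        request_user_follows=Count(
            "followers", filter=Q(followers=request_user)
        )
    ).order_by("-request_user_follows", "-created_date")
```
Both the `Followers` and `Following` views could then paginate `annotate_if_follows(request.user, user.followers)` (or `user.following`) instead of the plain `order_by("-created_date")` queryset.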
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/user.py`
Content:
```
1 """ non-interactive pages """
2 from django.contrib.auth.decorators import login_required
3 from django.core.paginator import Paginator
4 from django.http import Http404
5 from django.shortcuts import redirect
6 from django.template.response import TemplateResponse
7 from django.utils import timezone
8 from django.views import View
9 from django.views.decorators.http import require_POST
10
11 from bookwyrm import models
12 from bookwyrm.activitypub import ActivitypubResponse
13 from bookwyrm.settings import PAGE_LENGTH
14 from .helpers import get_user_from_username, is_api_request
15
16
17 # pylint: disable=no-self-use
18 class User(View):
19 """user profile page"""
20
21 def get(self, request, username):
22 """profile page for a user"""
23 user = get_user_from_username(request.user, username)
24
25 if is_api_request(request):
26 # we have a json request
27 return ActivitypubResponse(user.to_activity())
28 # otherwise we're at a UI view
29
30 shelf_preview = []
31
32 # only show other shelves that should be visible
33 shelves = user.shelf_set
34 is_self = request.user.id == user.id
35 if not is_self:
36 follower = user.followers.filter(id=request.user.id).exists()
37 if follower:
38 shelves = shelves.filter(privacy__in=["public", "followers"])
39 else:
40 shelves = shelves.filter(privacy="public")
41
42 for user_shelf in shelves.all():
43 if not user_shelf.books.count():
44 continue
45 shelf_preview.append(
46 {
47 "name": user_shelf.name,
48 "local_path": user_shelf.local_path,
49 "books": user_shelf.books.all()[:3],
50 "size": user_shelf.books.count(),
51 }
52 )
53 if len(shelf_preview) > 2:
54 break
55
56 # user's posts
57 activities = (
58 models.Status.privacy_filter(
59 request.user,
60 )
61 .filter(user=user)
62 .select_related(
63 "user",
64 "reply_parent",
65 "review__book",
66 "comment__book",
67 "quotation__book",
68 )
69 .prefetch_related(
70 "mention_books",
71 "mention_users",
72 "attachments",
73 )
74 )
75
76 paginated = Paginator(activities, PAGE_LENGTH)
77 goal = models.AnnualGoal.objects.filter(
78 user=user, year=timezone.now().year
79 ).first()
80 if goal:
81 try:
82 goal.raise_visible_to_user(request.user)
83 except Http404:
84 goal = None
85
86 data = {
87 "user": user,
88 "is_self": is_self,
89 "shelves": shelf_preview,
90 "shelf_count": shelves.count(),
91 "activities": paginated.get_page(request.GET.get("page", 1)),
92 "goal": goal,
93 }
94
95 return TemplateResponse(request, "user/user.html", data)
96
97
98 class Followers(View):
99 """list of followers view"""
100
101 def get(self, request, username):
102 """list of followers"""
103 user = get_user_from_username(request.user, username)
104
105 if is_api_request(request):
106 return ActivitypubResponse(user.to_followers_activity(**request.GET))
107
108 paginated = Paginator(
109 user.followers.order_by("-created_date").all(), PAGE_LENGTH
110 )
111 data = {
112 "user": user,
113 "is_self": request.user.id == user.id,
114 "follow_list": paginated.get_page(request.GET.get("page")),
115 }
116 return TemplateResponse(request, "user/relationships/followers.html", data)
117
118
119 class Following(View):
120 """list of following view"""
121
122 def get(self, request, username):
123 """list of followers"""
124 user = get_user_from_username(request.user, username)
125
126 if is_api_request(request):
127 return ActivitypubResponse(user.to_following_activity(**request.GET))
128
129 paginated = Paginator(
130 user.following.order_by("-created_date").all(), PAGE_LENGTH
131 )
132 data = {
133 "user": user,
134 "is_self": request.user.id == user.id,
135 "follow_list": paginated.get_page(request.GET.get("page")),
136 }
137 return TemplateResponse(request, "user/relationships/following.html", data)
138
139
140 class Groups(View):
141 """list of user's groups view"""
142
143 def get(self, request, username):
144 """list of groups"""
145 user = get_user_from_username(request.user, username)
146
147 paginated = Paginator(
148 models.Group.memberships.filter(user=user).order_by("-created_date"),
149 PAGE_LENGTH,
150 )
151 data = {
152 "user": user,
153 "is_self": request.user.id == user.id,
154 "group_list": paginated.get_page(request.GET.get("page")),
155 }
156 return TemplateResponse(request, "user/groups.html", data)
157
158
159 @require_POST
160 @login_required
161 def hide_suggestions(request):
162 """not everyone wants user suggestions"""
163 request.user.show_suggested_users = False
164 request.user.save(broadcast=False, update_fields=["show_suggested_users"])
165 return redirect(request.headers.get("Referer", "/"))
166
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bookwyrm/views/user.py b/bookwyrm/views/user.py
--- a/bookwyrm/views/user.py
+++ b/bookwyrm/views/user.py
@@ -1,6 +1,7 @@
""" non-interactive pages """
from django.contrib.auth.decorators import login_required
from django.core.paginator import Paginator
+from django.db.models import Q, Count
from django.http import Http404
from django.shortcuts import redirect
from django.template.response import TemplateResponse
@@ -105,9 +106,8 @@
if is_api_request(request):
return ActivitypubResponse(user.to_followers_activity(**request.GET))
- paginated = Paginator(
- user.followers.order_by("-created_date").all(), PAGE_LENGTH
- )
+ followers = annotate_if_follows(request.user, user.followers)
+ paginated = Paginator(followers.all(), PAGE_LENGTH)
data = {
"user": user,
"is_self": request.user.id == user.id,
@@ -126,9 +126,8 @@
if is_api_request(request):
return ActivitypubResponse(user.to_following_activity(**request.GET))
- paginated = Paginator(
- user.following.order_by("-created_date").all(), PAGE_LENGTH
- )
+ following = annotate_if_follows(request.user, user.following)
+ paginated = Paginator(following.all(), PAGE_LENGTH)
data = {
"user": user,
"is_self": request.user.id == user.id,
@@ -137,6 +136,16 @@
return TemplateResponse(request, "user/relationships/following.html", data)
+def annotate_if_follows(user, queryset):
+ """Sort a list of users by if you follow them"""
+ if not user.is_authenticated:
+ return queryset.order_by("-created_date")
+
+ return queryset.annotate(
+ request_user_follows=Count("followers", filter=Q(followers=user))
+ ).order_by("-request_user_follows", "-created_date")
+
+
class Groups(View):
"""list of user's groups view"""
| {"golden_diff": "diff --git a/bookwyrm/views/user.py b/bookwyrm/views/user.py\n--- a/bookwyrm/views/user.py\n+++ b/bookwyrm/views/user.py\n@@ -1,6 +1,7 @@\n \"\"\" non-interactive pages \"\"\"\n from django.contrib.auth.decorators import login_required\n from django.core.paginator import Paginator\n+from django.db.models import Q, Count\n from django.http import Http404\n from django.shortcuts import redirect\n from django.template.response import TemplateResponse\n@@ -105,9 +106,8 @@\n if is_api_request(request):\n return ActivitypubResponse(user.to_followers_activity(**request.GET))\n \n- paginated = Paginator(\n- user.followers.order_by(\"-created_date\").all(), PAGE_LENGTH\n- )\n+ followers = annotate_if_follows(request.user, user.followers)\n+ paginated = Paginator(followers.all(), PAGE_LENGTH)\n data = {\n \"user\": user,\n \"is_self\": request.user.id == user.id,\n@@ -126,9 +126,8 @@\n if is_api_request(request):\n return ActivitypubResponse(user.to_following_activity(**request.GET))\n \n- paginated = Paginator(\n- user.following.order_by(\"-created_date\").all(), PAGE_LENGTH\n- )\n+ following = annotate_if_follows(request.user, user.following)\n+ paginated = Paginator(following.all(), PAGE_LENGTH)\n data = {\n \"user\": user,\n \"is_self\": request.user.id == user.id,\n@@ -137,6 +136,16 @@\n return TemplateResponse(request, \"user/relationships/following.html\", data)\n \n \n+def annotate_if_follows(user, queryset):\n+ \"\"\"Sort a list of users by if you follow them\"\"\"\n+ if not user.is_authenticated:\n+ return queryset.order_by(\"-created_date\")\n+\n+ return queryset.annotate(\n+ request_user_follows=Count(\"followers\", filter=Q(followers=user))\n+ ).order_by(\"-request_user_follows\", \"-created_date\")\n+\n+\n class Groups(View):\n \"\"\"list of user's groups view\"\"\"\n", "issue": "List Followers You Know When Looking at User's Follower List\n**Is your feature request related to a problem? 
Please describe.**\r\nWhen you look at a user's profile, the link to the user's followers will be titled 'X followers you follow', but the followers list is supplied in reverse-chronological order by date followed, with followers in common given no priority.\r\nWhen trying to decide how to handle a follow request, seeing what people we both know is the most relevant information to me.\r\n\r\n**Describe the solution you'd like**\r\nSort followers that I know (follow) ahead of all other followers when showing a users Followers list (honestly, sorting the Following list this way would be nice too)\r\n\r\n**Describe alternatives you've considered**\r\nHave a separate view for 'Followers you know' (and rename the link to the Followers list if it doesn't point to that list)\r\n\r\n**Additional context**\r\nHi mouse\r\n\n", "before_files": [{"content": "\"\"\" non-interactive pages \"\"\"\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.http import Http404\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.views import View\nfrom django.views.decorators.http import require_POST\n\nfrom bookwyrm import models\nfrom bookwyrm.activitypub import ActivitypubResponse\nfrom bookwyrm.settings import PAGE_LENGTH\nfrom .helpers import get_user_from_username, is_api_request\n\n\n# pylint: disable=no-self-use\nclass User(View):\n \"\"\"user profile page\"\"\"\n\n def get(self, request, username):\n \"\"\"profile page for a user\"\"\"\n user = get_user_from_username(request.user, username)\n\n if is_api_request(request):\n # we have a json request\n return ActivitypubResponse(user.to_activity())\n # otherwise we're at a UI view\n\n shelf_preview = []\n\n # only show other shelves that should be visible\n shelves = user.shelf_set\n is_self = request.user.id == user.id\n if not is_self:\n follower = user.followers.filter(id=request.user.id).exists()\n if follower:\n shelves = shelves.filter(privacy__in=[\"public\", \"followers\"])\n else:\n shelves = shelves.filter(privacy=\"public\")\n\n for user_shelf in shelves.all():\n if not user_shelf.books.count():\n continue\n shelf_preview.append(\n {\n \"name\": user_shelf.name,\n \"local_path\": user_shelf.local_path,\n \"books\": user_shelf.books.all()[:3],\n \"size\": user_shelf.books.count(),\n }\n )\n if len(shelf_preview) > 2:\n break\n\n # user's posts\n activities = (\n models.Status.privacy_filter(\n request.user,\n )\n .filter(user=user)\n .select_related(\n \"user\",\n \"reply_parent\",\n \"review__book\",\n \"comment__book\",\n \"quotation__book\",\n )\n .prefetch_related(\n \"mention_books\",\n \"mention_users\",\n \"attachments\",\n )\n )\n\n paginated = Paginator(activities, PAGE_LENGTH)\n goal = models.AnnualGoal.objects.filter(\n user=user, year=timezone.now().year\n ).first()\n if goal:\n try:\n goal.raise_visible_to_user(request.user)\n except Http404:\n goal = None\n\n data = {\n \"user\": user,\n \"is_self\": is_self,\n \"shelves\": shelf_preview,\n \"shelf_count\": shelves.count(),\n \"activities\": paginated.get_page(request.GET.get(\"page\", 1)),\n \"goal\": goal,\n }\n\n return TemplateResponse(request, \"user/user.html\", data)\n\n\nclass Followers(View):\n \"\"\"list of followers view\"\"\"\n\n def get(self, request, username):\n \"\"\"list of followers\"\"\"\n user = get_user_from_username(request.user, username)\n\n if is_api_request(request):\n return 
ActivitypubResponse(user.to_followers_activity(**request.GET))\n\n paginated = Paginator(\n user.followers.order_by(\"-created_date\").all(), PAGE_LENGTH\n )\n data = {\n \"user\": user,\n \"is_self\": request.user.id == user.id,\n \"follow_list\": paginated.get_page(request.GET.get(\"page\")),\n }\n return TemplateResponse(request, \"user/relationships/followers.html\", data)\n\n\nclass Following(View):\n \"\"\"list of following view\"\"\"\n\n def get(self, request, username):\n \"\"\"list of followers\"\"\"\n user = get_user_from_username(request.user, username)\n\n if is_api_request(request):\n return ActivitypubResponse(user.to_following_activity(**request.GET))\n\n paginated = Paginator(\n user.following.order_by(\"-created_date\").all(), PAGE_LENGTH\n )\n data = {\n \"user\": user,\n \"is_self\": request.user.id == user.id,\n \"follow_list\": paginated.get_page(request.GET.get(\"page\")),\n }\n return TemplateResponse(request, \"user/relationships/following.html\", data)\n\n\nclass Groups(View):\n \"\"\"list of user's groups view\"\"\"\n\n def get(self, request, username):\n \"\"\"list of groups\"\"\"\n user = get_user_from_username(request.user, username)\n\n paginated = Paginator(\n models.Group.memberships.filter(user=user).order_by(\"-created_date\"),\n PAGE_LENGTH,\n )\n data = {\n \"user\": user,\n \"is_self\": request.user.id == user.id,\n \"group_list\": paginated.get_page(request.GET.get(\"page\")),\n }\n return TemplateResponse(request, \"user/groups.html\", data)\n\n\n@require_POST\n@login_required\ndef hide_suggestions(request):\n \"\"\"not everyone wants user suggestions\"\"\"\n request.user.show_suggested_users = False\n request.user.save(broadcast=False, update_fields=[\"show_suggested_users\"])\n return redirect(request.headers.get(\"Referer\", \"/\"))\n", "path": "bookwyrm/views/user.py"}], "after_files": [{"content": "\"\"\" non-interactive pages \"\"\"\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.db.models import Q, Count\nfrom django.http import Http404\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.views import View\nfrom django.views.decorators.http import require_POST\n\nfrom bookwyrm import models\nfrom bookwyrm.activitypub import ActivitypubResponse\nfrom bookwyrm.settings import PAGE_LENGTH\nfrom .helpers import get_user_from_username, is_api_request\n\n\n# pylint: disable=no-self-use\nclass User(View):\n \"\"\"user profile page\"\"\"\n\n def get(self, request, username):\n \"\"\"profile page for a user\"\"\"\n user = get_user_from_username(request.user, username)\n\n if is_api_request(request):\n # we have a json request\n return ActivitypubResponse(user.to_activity())\n # otherwise we're at a UI view\n\n shelf_preview = []\n\n # only show other shelves that should be visible\n shelves = user.shelf_set\n is_self = request.user.id == user.id\n if not is_self:\n follower = user.followers.filter(id=request.user.id).exists()\n if follower:\n shelves = shelves.filter(privacy__in=[\"public\", \"followers\"])\n else:\n shelves = shelves.filter(privacy=\"public\")\n\n for user_shelf in shelves.all():\n if not user_shelf.books.count():\n continue\n shelf_preview.append(\n {\n \"name\": user_shelf.name,\n \"local_path\": user_shelf.local_path,\n \"books\": user_shelf.books.all()[:3],\n \"size\": user_shelf.books.count(),\n }\n )\n if len(shelf_preview) > 2:\n break\n\n # user's posts\n activities = (\n 
models.Status.privacy_filter(\n request.user,\n )\n .filter(user=user)\n .select_related(\n \"user\",\n \"reply_parent\",\n \"review__book\",\n \"comment__book\",\n \"quotation__book\",\n )\n .prefetch_related(\n \"mention_books\",\n \"mention_users\",\n \"attachments\",\n )\n )\n\n paginated = Paginator(activities, PAGE_LENGTH)\n goal = models.AnnualGoal.objects.filter(\n user=user, year=timezone.now().year\n ).first()\n if goal:\n try:\n goal.raise_visible_to_user(request.user)\n except Http404:\n goal = None\n\n data = {\n \"user\": user,\n \"is_self\": is_self,\n \"shelves\": shelf_preview,\n \"shelf_count\": shelves.count(),\n \"activities\": paginated.get_page(request.GET.get(\"page\", 1)),\n \"goal\": goal,\n }\n\n return TemplateResponse(request, \"user/user.html\", data)\n\n\nclass Followers(View):\n \"\"\"list of followers view\"\"\"\n\n def get(self, request, username):\n \"\"\"list of followers\"\"\"\n user = get_user_from_username(request.user, username)\n\n if is_api_request(request):\n return ActivitypubResponse(user.to_followers_activity(**request.GET))\n\n followers = annotate_if_follows(request.user, user.followers)\n paginated = Paginator(followers.all(), PAGE_LENGTH)\n data = {\n \"user\": user,\n \"is_self\": request.user.id == user.id,\n \"follow_list\": paginated.get_page(request.GET.get(\"page\")),\n }\n return TemplateResponse(request, \"user/relationships/followers.html\", data)\n\n\nclass Following(View):\n \"\"\"list of following view\"\"\"\n\n def get(self, request, username):\n \"\"\"list of followers\"\"\"\n user = get_user_from_username(request.user, username)\n\n if is_api_request(request):\n return ActivitypubResponse(user.to_following_activity(**request.GET))\n\n following = annotate_if_follows(request.user, user.following)\n paginated = Paginator(following.all(), PAGE_LENGTH)\n data = {\n \"user\": user,\n \"is_self\": request.user.id == user.id,\n \"follow_list\": paginated.get_page(request.GET.get(\"page\")),\n }\n return TemplateResponse(request, \"user/relationships/following.html\", data)\n\n\ndef annotate_if_follows(user, queryset):\n \"\"\"Sort a list of users by if you follow them\"\"\"\n if not user.is_authenticated:\n return queryset.order_by(\"-created_date\")\n\n return queryset.annotate(\n request_user_follows=Count(\"followers\", filter=Q(followers=user))\n ).order_by(\"-request_user_follows\", \"-created_date\")\n\n\nclass Groups(View):\n \"\"\"list of user's groups view\"\"\"\n\n def get(self, request, username):\n \"\"\"list of groups\"\"\"\n user = get_user_from_username(request.user, username)\n\n paginated = Paginator(\n models.Group.memberships.filter(user=user).order_by(\"-created_date\"),\n PAGE_LENGTH,\n )\n data = {\n \"user\": user,\n \"is_self\": request.user.id == user.id,\n \"group_list\": paginated.get_page(request.GET.get(\"page\")),\n }\n return TemplateResponse(request, \"user/groups.html\", data)\n\n\n@require_POST\n@login_required\ndef hide_suggestions(request):\n \"\"\"not everyone wants user suggestions\"\"\"\n request.user.show_suggested_users = False\n request.user.save(broadcast=False, update_fields=[\"show_suggested_users\"])\n return redirect(request.headers.get(\"Referer\", \"/\"))\n", "path": "bookwyrm/views/user.py"}]} | 1,908 | 458 |
gh_patches_debug_13462 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-4197 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Certifi python module missing when deploying cloudtrail lambda functions
The following error happens when the lambda function is triggered
`[ERROR] Runtime.ImportModuleError: Unable to import module 'custodian_policy': No module named 'certifi'`
As a workaround, I had to add the *certifi* package to the policy files.
```
policies:
- name: my-policy
description: my description
resource: my-resource
mode:
type: cloudtrail
packages:
- botocore
- boto3
- urllib3
- certifi
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/actions/webhook.py`
Content:
```
1 # Copyright 2019 Microsoft Corporation
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16
17 import certifi
18 import jmespath
19 import urllib3
20 from six.moves.urllib import parse
21
22 from c7n import utils
23 from .core import EventAction
24
25
26 class Webhook(EventAction):
27 """Calls a webhook with optional parameters and body
28 populated from JMESPath queries.
29
30 .. code-block:: yaml
31
32 policies:
33 - name: call-webhook
34 resource: ec2
35 description: |
36 Call webhook with list of resource groups
37 actions:
38 - type: webhook
39 url: http://foo.com
40 query-params:
41 resource_name: resource.name
42 policy_name: policy.name
43 """
44
45 schema_alias = True
46 schema = utils.type_schema(
47 'webhook',
48 required=['url'],
49 **{
50 'url': {'type': 'string'},
51 'body': {'type': 'string'},
52 'batch': {'type': 'boolean'},
53 'batch-size': {'type': 'number'},
54 'method': {'type': 'string', 'enum': ['PUT', 'POST', 'GET', 'PATCH', 'DELETE']},
55 'query-params': {
56 "type": "object",
57 "additionalProperties": {
58 "type": "string",
59 "description": "query string values"
60 }
61 },
62 'headers': {
63 "type": "object",
64 "additionalProperties": {
65 "type": "string",
66 "description": "header values"
67 }
68 }
69 }
70 )
71
72 def __init__(self, data=None, manager=None, log_dir=None):
73 super(Webhook, self).__init__(data, manager, log_dir)
74 self.http = None
75 self.url = self.data.get('url')
76 self.body = self.data.get('body')
77 self.batch = self.data.get('batch', False)
78 self.batch_size = self.data.get('batch-size', 500)
79 self.query_params = self.data.get('query-params', {})
80 self.headers = self.data.get('headers', {})
81 self.method = self.data.get('method', 'POST')
82 self.lookup_data = {
83 'account_id': self.manager.config.account_id,
84 'region': self.manager.config.region,
85 'execution_id': self.manager.ctx.execution_id,
86 'execution_start': self.manager.ctx.start_time,
87 'policy': self.manager.data
88 }
89
90 def process(self, resources, event=None):
91 self.http = urllib3.PoolManager(
92 cert_reqs='CERT_REQUIRED',
93 ca_certs=certifi.where())
94
95 if self.batch:
96 for chunk in utils.chunks(resources, self.batch_size):
97 resource_data = self.lookup_data
98 resource_data['resources'] = chunk
99 self._process_call(resource_data)
100 else:
101 for r in resources:
102 resource_data = self.lookup_data
103 resource_data['resource'] = r
104 self._process_call(resource_data)
105
106 def _process_call(self, resource):
107 prepared_url = self._build_url(resource)
108 prepared_body = self._build_body(resource)
109 prepared_headers = self._build_headers(resource)
110
111 if prepared_body:
112 prepared_headers['Content-Type'] = 'application/json'
113
114 try:
115 res = self.http.request(
116 method=self.method,
117 url=prepared_url,
118 body=prepared_body,
119 headers=prepared_headers)
120
121 self.log.info("%s got response %s with URL %s" %
122 (self.method, res.status, prepared_url))
123 except urllib3.exceptions.HTTPError as e:
124 self.log.error("Error calling %s. Code: %s" % (prepared_url, e.reason))
125
126 def _build_headers(self, resource):
127 return {k: jmespath.search(v, resource) for k, v in self.headers.items()}
128
129 def _build_url(self, resource):
130 """
131 Compose URL with query string parameters.
132
133 Will not lose existing static parameters in the URL string
134 but does not support 'duplicate' parameter entries
135 """
136
137 if not self.query_params:
138 return self.url
139
140 evaluated_params = {k: jmespath.search(v, resource) for k, v in self.query_params.items()}
141
142 url_parts = list(parse.urlparse(self.url))
143 query = dict(parse.parse_qsl(url_parts[4]))
144 query.update(evaluated_params)
145 url_parts[4] = parse.urlencode(query)
146
147 return parse.urlunparse(url_parts)
148
149 def _build_body(self, resource):
150 """Create a JSON body and dump it to encoded bytes."""
151
152 if not self.body:
153 return None
154
155 return json.dumps(jmespath.search(self.body, resource)).encode('utf-8')
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/c7n/actions/webhook.py b/c7n/actions/webhook.py
--- a/c7n/actions/webhook.py
+++ b/c7n/actions/webhook.py
@@ -14,7 +14,11 @@
import json
-import certifi
+try:
+ import certifi
+except ImportError:
+ certifi = None
+
import jmespath
import urllib3
from six.moves.urllib import parse
@@ -90,7 +94,7 @@
def process(self, resources, event=None):
self.http = urllib3.PoolManager(
cert_reqs='CERT_REQUIRED',
- ca_certs=certifi.where())
+ ca_certs=certifi and certifi.where() or None)
if self.batch:
for chunk in utils.chunks(resources, self.batch_size):
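The guarded import plus `certifi and certifi.where() or None` turns `certifi` into a soft dependency, so the webhook action still imports when the package was not bundled into the Lambda zip. A minimal sketch of the same optional-dependency pattern in isolation (the comment describes the failure mode from the issue):

```python
try:
    import certifi
except ImportError:  # e.g. certifi was not packaged into the Lambda deployment
    certifi = None

# Clearer spelling of the diff's `certifi and certifi.where() or None`:
ca_certs = certifi.where() if certifi is not None else None
```

With `ca_certs=None`, urllib3 should fall back to the platform's default trust store, so HTTPS verification still happens; the conditional only removes the hard import-time dependency.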
| {"golden_diff": "diff --git a/c7n/actions/webhook.py b/c7n/actions/webhook.py\n--- a/c7n/actions/webhook.py\n+++ b/c7n/actions/webhook.py\n@@ -14,7 +14,11 @@\n \n import json\n \n-import certifi\n+try:\n+ import certifi\n+except ImportError:\n+ certifi = None\n+\n import jmespath\n import urllib3\n from six.moves.urllib import parse\n@@ -90,7 +94,7 @@\n def process(self, resources, event=None):\n self.http = urllib3.PoolManager(\n cert_reqs='CERT_REQUIRED',\n- ca_certs=certifi.where())\n+ ca_certs=certifi and certifi.where() or None)\n \n if self.batch:\n for chunk in utils.chunks(resources, self.batch_size):\n", "issue": "Certifi python module missing when deploying cloudtrail lambda functions\nThe following error happens when the lambda function is triggered\r\n\r\n`[ERROR] Runtime.ImportModuleError: Unable to import module 'custodian_policy': No module named 'certifi'`\r\n\r\nAs a workaround, I had to add the *certifi* package to the policy files.\r\n\r\n```\r\npolicies:\r\n - name: my-policy\r\n description: my description\r\n resource: my-resource\r\n mode:\r\n type: cloudtrail\r\n packages:\r\n - botocore\r\n - boto3\r\n - urllib3\r\n - certifi\r\n```\n", "before_files": [{"content": "# Copyright 2019 Microsoft Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport json\n\nimport certifi\nimport jmespath\nimport urllib3\nfrom six.moves.urllib import parse\n\nfrom c7n import utils\nfrom .core import EventAction\n\n\nclass Webhook(EventAction):\n \"\"\"Calls a webhook with optional parameters and body\n populated from JMESPath queries.\n\n .. 
code-block:: yaml\n\n policies:\n - name: call-webhook\n resource: ec2\n description: |\n Call webhook with list of resource groups\n actions:\n - type: webhook\n url: http://foo.com\n query-params:\n resource_name: resource.name\n policy_name: policy.name\n \"\"\"\n\n schema_alias = True\n schema = utils.type_schema(\n 'webhook',\n required=['url'],\n **{\n 'url': {'type': 'string'},\n 'body': {'type': 'string'},\n 'batch': {'type': 'boolean'},\n 'batch-size': {'type': 'number'},\n 'method': {'type': 'string', 'enum': ['PUT', 'POST', 'GET', 'PATCH', 'DELETE']},\n 'query-params': {\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\",\n \"description\": \"query string values\"\n }\n },\n 'headers': {\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\",\n \"description\": \"header values\"\n }\n }\n }\n )\n\n def __init__(self, data=None, manager=None, log_dir=None):\n super(Webhook, self).__init__(data, manager, log_dir)\n self.http = None\n self.url = self.data.get('url')\n self.body = self.data.get('body')\n self.batch = self.data.get('batch', False)\n self.batch_size = self.data.get('batch-size', 500)\n self.query_params = self.data.get('query-params', {})\n self.headers = self.data.get('headers', {})\n self.method = self.data.get('method', 'POST')\n self.lookup_data = {\n 'account_id': self.manager.config.account_id,\n 'region': self.manager.config.region,\n 'execution_id': self.manager.ctx.execution_id,\n 'execution_start': self.manager.ctx.start_time,\n 'policy': self.manager.data\n }\n\n def process(self, resources, event=None):\n self.http = urllib3.PoolManager(\n cert_reqs='CERT_REQUIRED',\n ca_certs=certifi.where())\n\n if self.batch:\n for chunk in utils.chunks(resources, self.batch_size):\n resource_data = self.lookup_data\n resource_data['resources'] = chunk\n self._process_call(resource_data)\n else:\n for r in resources:\n resource_data = self.lookup_data\n resource_data['resource'] = r\n self._process_call(resource_data)\n\n def _process_call(self, resource):\n prepared_url = self._build_url(resource)\n prepared_body = self._build_body(resource)\n prepared_headers = self._build_headers(resource)\n\n if prepared_body:\n prepared_headers['Content-Type'] = 'application/json'\n\n try:\n res = self.http.request(\n method=self.method,\n url=prepared_url,\n body=prepared_body,\n headers=prepared_headers)\n\n self.log.info(\"%s got response %s with URL %s\" %\n (self.method, res.status, prepared_url))\n except urllib3.exceptions.HTTPError as e:\n self.log.error(\"Error calling %s. 
Code: %s\" % (prepared_url, e.reason))\n\n def _build_headers(self, resource):\n return {k: jmespath.search(v, resource) for k, v in self.headers.items()}\n\n def _build_url(self, resource):\n \"\"\"\n Compose URL with query string parameters.\n\n Will not lose existing static parameters in the URL string\n but does not support 'duplicate' parameter entries\n \"\"\"\n\n if not self.query_params:\n return self.url\n\n evaluated_params = {k: jmespath.search(v, resource) for k, v in self.query_params.items()}\n\n url_parts = list(parse.urlparse(self.url))\n query = dict(parse.parse_qsl(url_parts[4]))\n query.update(evaluated_params)\n url_parts[4] = parse.urlencode(query)\n\n return parse.urlunparse(url_parts)\n\n def _build_body(self, resource):\n \"\"\"Create a JSON body and dump it to encoded bytes.\"\"\"\n\n if not self.body:\n return None\n\n return json.dumps(jmespath.search(self.body, resource)).encode('utf-8')\n", "path": "c7n/actions/webhook.py"}], "after_files": [{"content": "# Copyright 2019 Microsoft Corporation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport json\n\ntry:\n import certifi\nexcept ImportError:\n certifi = None\n\nimport jmespath\nimport urllib3\nfrom six.moves.urllib import parse\n\nfrom c7n import utils\nfrom .core import EventAction\n\n\nclass Webhook(EventAction):\n \"\"\"Calls a webhook with optional parameters and body\n populated from JMESPath queries.\n\n .. 
code-block:: yaml\n\n policies:\n - name: call-webhook\n resource: ec2\n description: |\n Call webhook with list of resource groups\n actions:\n - type: webhook\n url: http://foo.com\n query-params:\n resource_name: resource.name\n policy_name: policy.name\n \"\"\"\n\n schema_alias = True\n schema = utils.type_schema(\n 'webhook',\n required=['url'],\n **{\n 'url': {'type': 'string'},\n 'body': {'type': 'string'},\n 'batch': {'type': 'boolean'},\n 'batch-size': {'type': 'number'},\n 'method': {'type': 'string', 'enum': ['PUT', 'POST', 'GET', 'PATCH', 'DELETE']},\n 'query-params': {\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\",\n \"description\": \"query string values\"\n }\n },\n 'headers': {\n \"type\": \"object\",\n \"additionalProperties\": {\n \"type\": \"string\",\n \"description\": \"header values\"\n }\n }\n }\n )\n\n def __init__(self, data=None, manager=None, log_dir=None):\n super(Webhook, self).__init__(data, manager, log_dir)\n self.http = None\n self.url = self.data.get('url')\n self.body = self.data.get('body')\n self.batch = self.data.get('batch', False)\n self.batch_size = self.data.get('batch-size', 500)\n self.query_params = self.data.get('query-params', {})\n self.headers = self.data.get('headers', {})\n self.method = self.data.get('method', 'POST')\n self.lookup_data = {\n 'account_id': self.manager.config.account_id,\n 'region': self.manager.config.region,\n 'execution_id': self.manager.ctx.execution_id,\n 'execution_start': self.manager.ctx.start_time,\n 'policy': self.manager.data\n }\n\n def process(self, resources, event=None):\n self.http = urllib3.PoolManager(\n cert_reqs='CERT_REQUIRED',\n ca_certs=certifi and certifi.where() or None)\n\n if self.batch:\n for chunk in utils.chunks(resources, self.batch_size):\n resource_data = self.lookup_data\n resource_data['resources'] = chunk\n self._process_call(resource_data)\n else:\n for r in resources:\n resource_data = self.lookup_data\n resource_data['resource'] = r\n self._process_call(resource_data)\n\n def _process_call(self, resource):\n prepared_url = self._build_url(resource)\n prepared_body = self._build_body(resource)\n prepared_headers = self._build_headers(resource)\n\n if prepared_body:\n prepared_headers['Content-Type'] = 'application/json'\n\n try:\n res = self.http.request(\n method=self.method,\n url=prepared_url,\n body=prepared_body,\n headers=prepared_headers)\n\n self.log.info(\"%s got response %s with URL %s\" %\n (self.method, res.status, prepared_url))\n except urllib3.exceptions.HTTPError as e:\n self.log.error(\"Error calling %s. 
Code: %s\" % (prepared_url, e.reason))\n\n def _build_headers(self, resource):\n return {k: jmespath.search(v, resource) for k, v in self.headers.items()}\n\n def _build_url(self, resource):\n \"\"\"\n Compose URL with query string parameters.\n\n Will not lose existing static parameters in the URL string\n but does not support 'duplicate' parameter entries\n \"\"\"\n\n if not self.query_params:\n return self.url\n\n evaluated_params = {k: jmespath.search(v, resource) for k, v in self.query_params.items()}\n\n url_parts = list(parse.urlparse(self.url))\n query = dict(parse.parse_qsl(url_parts[4]))\n query.update(evaluated_params)\n url_parts[4] = parse.urlencode(query)\n\n return parse.urlunparse(url_parts)\n\n def _build_body(self, resource):\n \"\"\"Create a JSON body and dump it to encoded bytes.\"\"\"\n\n if not self.body:\n return None\n\n return json.dumps(jmespath.search(self.body, resource)).encode('utf-8')\n", "path": "c7n/actions/webhook.py"}]} | 1,879 | 183 |
gh_patches_debug_12156 | rasdani/github-patches | git_diff | nltk__nltk-3022 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
nltk.chat.chatbots() endless loop
When I type `import nltk` followed by `nltk.chat.chatbots()`, it lists/asks which one I want to talk to, and then endlessly scrolls the following: ` Enter a number in the range 1-5: Error: bad chatbot number`, in both Jupyter and Spyder.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nltk/chat/__init__.py`
Content:
```
1 # Natural Language Toolkit: Chatbots
2 #
3 # Copyright (C) 2001-2022 NLTK Project
4 # Authors: Steven Bird <[email protected]>
5 # URL: <https://www.nltk.org/>
6 # For license information, see LICENSE.TXT
7
8 # Based on an Eliza implementation by Joe Strout <[email protected]>,
9 # Jeff Epler <[email protected]> and Jez Higgins <[email protected]>.
10
11 """
12 A class for simple chatbots. These perform simple pattern matching on sentences
13 typed by users, and respond with automatically generated sentences.
14
15 These chatbots may not work using the windows command line or the
16 windows IDLE GUI.
17 """
18
19 from nltk.chat.eliza import eliza_chat
20 from nltk.chat.iesha import iesha_chat
21 from nltk.chat.rude import rude_chat
22 from nltk.chat.suntsu import suntsu_chat
23 from nltk.chat.util import Chat
24 from nltk.chat.zen import zen_chat
25
26 bots = [
27 (eliza_chat, "Eliza (psycho-babble)"),
28 (iesha_chat, "Iesha (teen anime junky)"),
29 (rude_chat, "Rude (abusive bot)"),
30 (suntsu_chat, "Suntsu (Chinese sayings)"),
31 (zen_chat, "Zen (gems of wisdom)"),
32 ]
33
34
35 def chatbots():
36 import sys
37
38 print("Which chatbot would you like to talk to?")
39 botcount = len(bots)
40 for i in range(botcount):
41 print(" %d: %s" % (i + 1, bots[i][1]))
42 while True:
43 print("\nEnter a number in the range 1-%d: " % botcount, end=" ")
44 choice = sys.stdin.readline().strip()
45 if choice.isdigit() and (int(choice) - 1) in range(botcount):
46 break
47 else:
48 print(" Error: bad chatbot number")
49
50 chatbot = bots[int(choice) - 1][0]
51 chatbot()
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nltk/chat/__init__.py b/nltk/chat/__init__.py
--- a/nltk/chat/__init__.py
+++ b/nltk/chat/__init__.py
@@ -33,15 +33,12 @@
def chatbots():
- import sys
-
print("Which chatbot would you like to talk to?")
botcount = len(bots)
for i in range(botcount):
print(" %d: %s" % (i + 1, bots[i][1]))
while True:
- print("\nEnter a number in the range 1-%d: " % botcount, end=" ")
- choice = sys.stdin.readline().strip()
+ choice = input(f"\nEnter a number in the range 1-{botcount}: ").strip()
if choice.isdigit() and (int(choice) - 1) in range(botcount):
break
else:
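The root cause of the endless loop is that `sys.stdin.readline()` does not behave like interactive terminal input in Jupyter or Spyder: on a non-interactive or exhausted stdin it returns an empty string immediately, the digit check fails, and the `while True` loop prints the error forever. `input()` instead raises `EOFError` when stdin is exhausted, so the worst case is a traceback rather than an endless scroll. A small reproduction sketch:

```python
import io, sys

sys.stdin = io.StringIO("")          # simulate a non-interactive stdin at EOF
print(repr(sys.stdin.readline()))    # '' -- returns instantly; the old loop never blocks

try:
    input("Enter a number: ")
except EOFError:
    print("input() raises EOFError instead of spinning")
```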
| {"golden_diff": "diff --git a/nltk/chat/__init__.py b/nltk/chat/__init__.py\n--- a/nltk/chat/__init__.py\n+++ b/nltk/chat/__init__.py\n@@ -33,15 +33,12 @@\n \n \n def chatbots():\n- import sys\n-\n print(\"Which chatbot would you like to talk to?\")\n botcount = len(bots)\n for i in range(botcount):\n print(\" %d: %s\" % (i + 1, bots[i][1]))\n while True:\n- print(\"\\nEnter a number in the range 1-%d: \" % botcount, end=\" \")\n- choice = sys.stdin.readline().strip()\n+ choice = input(f\"\\nEnter a number in the range 1-{botcount}: \").strip()\n if choice.isdigit() and (int(choice) - 1) in range(botcount):\n break\n else:\n", "issue": "nltk.chat.chatbot() endless loop\nWhen I type `import nltk` followed by `nltk.chat.chatbots()`, it lists/asks which one I want to talk to, and then endlessly scrolls the following: ` Enter a number in the range 1-5: Error: bad chatbot number`, in both Jupyter and Spyder.\n", "before_files": [{"content": "# Natural Language Toolkit: Chatbots\n#\n# Copyright (C) 2001-2022 NLTK Project\n# Authors: Steven Bird <[email protected]>\n# URL: <https://www.nltk.org/>\n# For license information, see LICENSE.TXT\n\n# Based on an Eliza implementation by Joe Strout <[email protected]>,\n# Jeff Epler <[email protected]> and Jez Higgins <[email protected]>.\n\n\"\"\"\nA class for simple chatbots. These perform simple pattern matching on sentences\ntyped by users, and respond with automatically generated sentences.\n\nThese chatbots may not work using the windows command line or the\nwindows IDLE GUI.\n\"\"\"\n\nfrom nltk.chat.eliza import eliza_chat\nfrom nltk.chat.iesha import iesha_chat\nfrom nltk.chat.rude import rude_chat\nfrom nltk.chat.suntsu import suntsu_chat\nfrom nltk.chat.util import Chat\nfrom nltk.chat.zen import zen_chat\n\nbots = [\n (eliza_chat, \"Eliza (psycho-babble)\"),\n (iesha_chat, \"Iesha (teen anime junky)\"),\n (rude_chat, \"Rude (abusive bot)\"),\n (suntsu_chat, \"Suntsu (Chinese sayings)\"),\n (zen_chat, \"Zen (gems of wisdom)\"),\n]\n\n\ndef chatbots():\n import sys\n\n print(\"Which chatbot would you like to talk to?\")\n botcount = len(bots)\n for i in range(botcount):\n print(\" %d: %s\" % (i + 1, bots[i][1]))\n while True:\n print(\"\\nEnter a number in the range 1-%d: \" % botcount, end=\" \")\n choice = sys.stdin.readline().strip()\n if choice.isdigit() and (int(choice) - 1) in range(botcount):\n break\n else:\n print(\" Error: bad chatbot number\")\n\n chatbot = bots[int(choice) - 1][0]\n chatbot()\n", "path": "nltk/chat/__init__.py"}], "after_files": [{"content": "# Natural Language Toolkit: Chatbots\n#\n# Copyright (C) 2001-2022 NLTK Project\n# Authors: Steven Bird <[email protected]>\n# URL: <https://www.nltk.org/>\n# For license information, see LICENSE.TXT\n\n# Based on an Eliza implementation by Joe Strout <[email protected]>,\n# Jeff Epler <[email protected]> and Jez Higgins <[email protected]>.\n\n\"\"\"\nA class for simple chatbots. 
These perform simple pattern matching on sentences\ntyped by users, and respond with automatically generated sentences.\n\nThese chatbots may not work using the windows command line or the\nwindows IDLE GUI.\n\"\"\"\n\nfrom nltk.chat.eliza import eliza_chat\nfrom nltk.chat.iesha import iesha_chat\nfrom nltk.chat.rude import rude_chat\nfrom nltk.chat.suntsu import suntsu_chat\nfrom nltk.chat.util import Chat\nfrom nltk.chat.zen import zen_chat\n\nbots = [\n (eliza_chat, \"Eliza (psycho-babble)\"),\n (iesha_chat, \"Iesha (teen anime junky)\"),\n (rude_chat, \"Rude (abusive bot)\"),\n (suntsu_chat, \"Suntsu (Chinese sayings)\"),\n (zen_chat, \"Zen (gems of wisdom)\"),\n]\n\n\ndef chatbots():\n print(\"Which chatbot would you like to talk to?\")\n botcount = len(bots)\n for i in range(botcount):\n print(\" %d: %s\" % (i + 1, bots[i][1]))\n while True:\n choice = input(f\"\\nEnter a number in the range 1-{botcount}: \").strip()\n if choice.isdigit() and (int(choice) - 1) in range(botcount):\n break\n else:\n print(\" Error: bad chatbot number\")\n\n chatbot = bots[int(choice) - 1][0]\n chatbot()\n", "path": "nltk/chat/__init__.py"}]} | 882 | 200 |
gh_patches_debug_27761 | rasdani/github-patches | git_diff | kserve__kserve-2216 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incompatible versions for google protos required by kserve and dependencies
/kind bug
**What steps did you take and what happened:**
[A clear and concise description of what the bug is.]
When trying to install the kserve module: kserve pins `googleapis-common-protos==1.53.0`, but some dependencies of kserve require `googleapis-common-protos<2.0dev,>=1.56.2`, so the two constraints conflict and kserve cannot be installed.
**What did you expect to happen:**
The version of protos required by kserve should be updated to be compatible with its dependencies
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
**Environment:**
- Istio Version:
- Knative Version:
- KFServing Version:
- Kubeflow version:
- Kfdef:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]
- Minikube version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/kserve/setup.py`
Content:
```
1 # Copyright 2021 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import setuptools
16
17 TESTS_REQUIRES = [
18 'pytest',
19 'pytest-xdist',
20 'pytest-cov',
21 'pytest-asyncio',
22 'pytest-tornasync',
23 'mypy'
24 ]
25
26 with open('requirements.txt') as f:
27 REQUIRES = f.readlines()
28
29 setuptools.setup(
30 name='kserve',
31 version='0.8.0',
32 author="The KServe Authors",
33 author_email='[email protected], [email protected], [email protected]',
34 license="Apache License Version 2.0",
35 url="https://github.com/kserve/kserve/tree/master/python/kserve",
36 description="KServe Python SDK",
37 long_description="Python SDK for KServe Server and Client.",
38 python_requires='>=3.6',
39 packages=[
40 'kserve',
41 'kserve.api',
42 'kserve.constants',
43 'kserve.models',
44 'kserve.handlers',
45 'kserve.utils',
46 ],
47 package_data={'': ['requirements.txt']},
48 include_package_data=True,
49 zip_safe=False,
50 classifiers=[
51 'Intended Audience :: Developers',
52 'Intended Audience :: Education',
53 'Intended Audience :: Science/Research',
54 'Programming Language :: Python :: 3',
55 'Programming Language :: Python :: 3.6',
56 'Programming Language :: Python :: 3.7',
57 "License :: OSI Approved :: Apache Software License",
58 "Operating System :: OS Independent",
59 'Topic :: Scientific/Engineering',
60 'Topic :: Scientific/Engineering :: Artificial Intelligence',
61 'Topic :: Software Development',
62 'Topic :: Software Development :: Libraries',
63 'Topic :: Software Development :: Libraries :: Python Modules',
64 ],
65 install_requires=REQUIRES,
66 tests_require=TESTS_REQUIRES,
67 extras_require={'test': TESTS_REQUIRES}
68 )
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/kserve/setup.py b/python/kserve/setup.py
--- a/python/kserve/setup.py
+++ b/python/kserve/setup.py
@@ -28,14 +28,14 @@
setuptools.setup(
name='kserve',
- version='0.8.0',
+ version='0.9.0rc0',
author="The KServe Authors",
author_email='[email protected], [email protected], [email protected]',
license="Apache License Version 2.0",
url="https://github.com/kserve/kserve/tree/master/python/kserve",
description="KServe Python SDK",
long_description="Python SDK for KServe Server and Client.",
- python_requires='>=3.6',
+ python_requires='>=3.7',
packages=[
'kserve',
'kserve.api',
@@ -52,8 +52,9 @@
'Intended Audience :: Education',
'Intended Audience :: Science/Research',
'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
+ 'Programming Language :: Python :: 3.8',
+ 'Programming Language :: Python :: 3.9',
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
'Topic :: Scientific/Engineering',
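`setup.py` reads `requirements.txt` verbatim, so the released 0.8.0 wheel carries whatever pins that file had at release time, including the `googleapis-common-protos==1.53.0` pin that cannot coexist with a transitive `>=1.56.2,<2.0dev` requirement. A hypothetical check of the clash using the `packaging` library (version numbers are the ones quoted in the issue):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

pinned = SpecifierSet("==1.53.0")           # kserve 0.8.0 requirement
needed = SpecifierSet(">=1.56.2,<2.0dev")   # transitive requirement

candidates = [Version(v) for v in ("1.53.0", "1.56.2", "1.56.4")]
print(any(v in pinned and v in needed for v in candidates))  # False -> ResolutionImpossible
```

Since no version satisfies both specifiers, pip's resolver has to refuse the install; the fix is a new release with a relaxed pin (hence the `0.9.0rc0` bump) rather than a code change.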
| {"golden_diff": "diff --git a/python/kserve/setup.py b/python/kserve/setup.py\n--- a/python/kserve/setup.py\n+++ b/python/kserve/setup.py\n@@ -28,14 +28,14 @@\n \n setuptools.setup(\n name='kserve',\n- version='0.8.0',\n+ version='0.9.0rc0',\n author=\"The KServe Authors\",\n author_email='[email protected], [email protected], [email protected]',\n license=\"Apache License Version 2.0\",\n url=\"https://github.com/kserve/kserve/tree/master/python/kserve\",\n description=\"KServe Python SDK\",\n long_description=\"Python SDK for KServe Server and Client.\",\n- python_requires='>=3.6',\n+ python_requires='>=3.7',\n packages=[\n 'kserve',\n 'kserve.api',\n@@ -52,8 +52,9 @@\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3',\n- 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n+ 'Programming Language :: Python :: 3.8',\n+ 'Programming Language :: Python :: 3.9',\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n 'Topic :: Scientific/Engineering',\n", "issue": "Incompatible versions for google protos required by kserve and dependencies\n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\n[A clear and concise description of what the bug is.]\r\nTrying to install the kserve module, kserve requires `googleapis-common-protos==1.53.0`, but some dependencies of kserve require `googleapis-common-protos<2.0dev,>=1.56.2`, and hence kserve cannot be installed.\r\n\r\n\r\n**What did you expect to happen:**\r\nThe version of protos required by kserve should be updated to be compatible with its dependencies\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n\r\n**Environment:**\r\n\r\n- Istio Version:\r\n- Knative Version:\r\n- KFServing Version:\r\n- Kubeflow version:\r\n- Kfdef:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]\r\n- Minikube version:\r\n- Kubernetes version: (use `kubectl version`):\r\n- OS (e.g. 
from `/etc/os-release`):\r\n\n", "before_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport setuptools\n\nTESTS_REQUIRES = [\n 'pytest',\n 'pytest-xdist',\n 'pytest-cov',\n 'pytest-asyncio',\n 'pytest-tornasync',\n 'mypy'\n]\n\nwith open('requirements.txt') as f:\n REQUIRES = f.readlines()\n\nsetuptools.setup(\n name='kserve',\n version='0.8.0',\n author=\"The KServe Authors\",\n author_email='[email protected], [email protected], [email protected]',\n license=\"Apache License Version 2.0\",\n url=\"https://github.com/kserve/kserve/tree/master/python/kserve\",\n description=\"KServe Python SDK\",\n long_description=\"Python SDK for KServe Server and Client.\",\n python_requires='>=3.6',\n packages=[\n 'kserve',\n 'kserve.api',\n 'kserve.constants',\n 'kserve.models',\n 'kserve.handlers',\n 'kserve.utils',\n ],\n package_data={'': ['requirements.txt']},\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRES,\n extras_require={'test': TESTS_REQUIRES}\n)\n", "path": "python/kserve/setup.py"}], "after_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport setuptools\n\nTESTS_REQUIRES = [\n 'pytest',\n 'pytest-xdist',\n 'pytest-cov',\n 'pytest-asyncio',\n 'pytest-tornasync',\n 'mypy'\n]\n\nwith open('requirements.txt') as f:\n REQUIRES = f.readlines()\n\nsetuptools.setup(\n name='kserve',\n version='0.9.0rc0',\n author=\"The KServe Authors\",\n author_email='[email protected], [email protected], [email protected]',\n license=\"Apache License Version 2.0\",\n url=\"https://github.com/kserve/kserve/tree/master/python/kserve\",\n description=\"KServe Python SDK\",\n long_description=\"Python SDK for KServe Server and Client.\",\n python_requires='>=3.7',\n packages=[\n 'kserve',\n 'kserve.api',\n 'kserve.constants',\n 
'kserve.models',\n 'kserve.handlers',\n 'kserve.utils',\n ],\n package_data={'': ['requirements.txt']},\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRES,\n extras_require={'test': TESTS_REQUIRES}\n)\n", "path": "python/kserve/setup.py"}]} | 1,162 | 316 |
gh_patches_debug_47853 | rasdani/github-patches | git_diff | searx__searx-3473 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No Bing results for Searx (but for SearxNG)
The public Searx instances don't seem to return any results for Bing, but SearxNG instances do. I think the situation has been the same for days or even weeks.
I tried out several searx and searxng instances: https://searx.space/
### Searx
Example Searx instance: https://searx.roflcopter.fr/ (1.1.0-53-c647b55e)
[Bing search in roflcopter.fr](https://searx.roflcopter.fr/search?q=%21bi%20foo&categories=none&language=en-US) gave "Sorry! we didn't find any results."
### SearxNG
Example SearxNG instance: https://northboot.xyz/ (2023.2.4+7320b0c7)
[Bing search in northboot.xyz](https://northboot.xyz/search?q=%21bi+foo&category_general=1&language=en-US&time_range=&safesearch=1&theme=simple) gave a pageful of results.
### Suggestion
Might the fix be found in the [SearxNG commit list](https://github.com/searxng/searxng/commits/master/searx/engines/bing.py)? There are several Bing-related commits there that are not in the [Searx commit list](https://github.com/searx/searx/commits/master/searx/engines/bing.py)
There is a big diff between https://raw.githubusercontent.com/searxng/searxng/master/searx/engines/bing.py and https://raw.githubusercontent.com/searx/searx/master/searx/engines/bing.py
--- END ISSUE ---
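To check the suggestion directly, one can pull both engine files and diff them locally. A small sketch using only the standard library (the URLs are the two quoted in the issue):

```python
import difflib
from urllib.request import urlopen

OLD = "https://raw.githubusercontent.com/searx/searx/master/searx/engines/bing.py"
NEW = "https://raw.githubusercontent.com/searxng/searxng/master/searx/engines/bing.py"

old = urlopen(OLD).read().decode("utf-8").splitlines(keepends=True)
new = urlopen(NEW).read().decode("utf-8").splitlines(keepends=True)
print("".join(difflib.unified_diff(old, new, fromfile="searx/bing.py", tofile="searxng/bing.py")))
```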
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/bing.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 """
3 Bing (Web)
4 """
5
6 import re
7 from urllib.parse import urlencode
8 from lxml import html
9 from searx import logger
10 from searx.utils import eval_xpath, extract_text, match_language
11
12 logger = logger.getChild('bing engine')
13
14 # about
15 about = {
16 "website": 'https://www.bing.com',
17 "wikidata_id": 'Q182496',
18 "official_api_documentation": 'https://www.microsoft.com/en-us/bing/apis/bing-web-search-api',
19 "use_official_api": False,
20 "require_api_key": False,
21 "results": 'HTML',
22 }
23
24 # engine dependent config
25 categories = ['general']
26 paging = True
27 supported_languages_url = 'https://www.bing.com/account/general'
28 language_aliases = {'zh-CN': 'zh-CHS', 'zh-TW': 'zh-CHT', 'zh-HK': 'zh-CHT'}
29
30 # search-url
31 base_url = 'https://www.bing.com/'
32 search_string = 'search?{query}&first={offset}'
33
34
35 def _get_offset_from_pageno(pageno):
36 return (pageno - 1) * 10 + 1
37
38
39 # do search-request
40 def request(query, params):
41 offset = _get_offset_from_pageno(params.get('pageno', 0))
42
43 if params['language'] == 'all':
44 lang = 'EN'
45 else:
46 lang = match_language(params['language'], supported_languages, language_aliases)
47
48 query = 'language:{} {}'.format(lang.split('-')[0].upper(), query)
49
50 search_path = search_string.format(
51 query=urlencode({'q': query}),
52 offset=offset)
53
54 params['url'] = base_url + search_path
55 params['headers']['User-Agent'] = ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
56 '(KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36')
57
58 return params
59
60
61 # get response from search-request
62 def response(resp):
63 results = []
64 result_len = 0
65
66 dom = html.fromstring(resp.text)
67 # parse results
68 for result in eval_xpath(dom, '//div[@class="sa_cc"]'):
69 link = eval_xpath(result, './/h3/a')[0]
70 url = link.attrib.get('href')
71 pretty_url = extract_text(eval_xpath(result, './/cite'))
72 title = extract_text(link)
73 content = extract_text(eval_xpath(result, './/p'))
74
75 # append result
76 results.append({'url': url,
77 'pretty_url': pretty_url,
78 'title': title,
79 'content': content})
80
81 # parse results again if nothing is found yet
82 for result in eval_xpath(dom, '//li[@class="b_algo"]'):
83 link = eval_xpath(result, './/h2/a')[0]
84 url = link.attrib.get('href')
85 title = extract_text(link)
86 content = extract_text(eval_xpath(result, './/p'))
87
88 # append result
89 results.append({'url': url,
90 'title': title,
91 'content': content})
92
93 try:
94 result_len_container = "".join(eval_xpath(dom, '//span[@class="sb_count"]//text()'))
95 if "-" in result_len_container:
96 # Remove the part "from-to" for paginated request ...
97 result_len_container = result_len_container[result_len_container.find("-") * 2 + 2:]
98
99 result_len_container = re.sub('[^0-9]', '', result_len_container)
100 if len(result_len_container) > 0:
101 result_len = int(result_len_container)
102 except Exception as e:
103 logger.debug('result error :\n%s', e)
104
105 if result_len and _get_offset_from_pageno(resp.search_params.get("pageno", 0)) > result_len:
106 return []
107
108 results.append({'number_of_results': result_len})
109 return results
110
111
112 # get supported languages from their site
113 def _fetch_supported_languages(resp):
114 lang_tags = set()
115
116 setmkt = re.compile('setmkt=([^&]*)')
117 dom = html.fromstring(resp.text)
118 lang_links = eval_xpath(dom, "//li/a[contains(@href, 'setmkt')]")
119
120 for a in lang_links:
121 href = eval_xpath(a, './@href')[0]
122 match = setmkt.search(href)
123 l_tag = match.groups()[0]
124 _lang, _nation = l_tag.split('-', 1)
125 l_tag = _lang.lower() + '-' + _nation.upper()
126 lang_tags.add(l_tag)
127
128 return list(lang_tags)
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/engines/bing.py b/searx/engines/bing.py
--- a/searx/engines/bing.py
+++ b/searx/engines/bing.py
@@ -52,8 +52,7 @@
offset=offset)
params['url'] = base_url + search_path
- params['headers']['User-Agent'] = ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
- '(KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36')
+ params['headers']['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
return params
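
A quick, hedged sanity check of the patched `request()` — this assumes the `searx` package is importable, and passes `language: 'all'` so the engine's `supported_languages` table (normally filled in by the engine loader) is never consulted:

```python
# Sketch only: confirms the hard-coded User-Agent is gone and the new
# Accept header is set, then prints the generated search URL.
from searx.engines import bing

params = {'language': 'all', 'pageno': 1, 'headers': {}}
params = bing.request('foo', params)

assert 'User-Agent' not in params['headers']
assert params['headers']['Accept'].startswith('text/html')
print(params['url'])
# https://www.bing.com/search?q=language%3AEN+foo&first=1
```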
| {"golden_diff": "diff --git a/searx/engines/bing.py b/searx/engines/bing.py\n--- a/searx/engines/bing.py\n+++ b/searx/engines/bing.py\n@@ -52,8 +52,7 @@\n offset=offset)\n \n params['url'] = base_url + search_path\n- params['headers']['User-Agent'] = ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '\n- '(KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36')\n+ params['headers']['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'\n \n return params\n", "issue": "No Bing results for Searx (but for SearxNG)\nThe public instances Searx instances don't seem return any results for Bing but SearxNG instances do. I think the situation has been the same for days or even weeks.\r\n\r\nI tried out several searx and searxng instances: https://searx.space/\r\n\r\n### Searx\r\n\r\nExample Searx instance: https://searx.roflcopter.fr/ (1.1.0-53-c647b55e)\r\n[Bing search in roflcopter.fr](https://searx.roflcopter.fr/search?q=%21bi%20foo&categories=none&language=en-US) gave \"Sorry! we didn't find any results.\"\r\n\r\n### SearxNG\r\n\r\nExample SearxNG instances: https://northboot.xyz/ (2023.2.4+7320b0c7)\r\n[Bing search in northboot.xyz](https://northboot.xyz/search?q=%21bi+foo&category_general=1&language=en-US&time_range=&safesearch=1&theme=simple) gave a pageful of results.\r\n\r\n### Suggestion\r\n\r\nMight the fix be included in the [SearxNG commit list](https://github.com/searxng/searxng/commits/master/searx/engines/bing.py)? There are several references to Bing that are not included in [Searx commit list](https://github.com/searx/searx/commits/master/searx/engines/bing.py)\r\nThere is a big diff between https://raw.githubusercontent.com/searxng/searxng/master/searx/engines/bing.py and https://raw.githubusercontent.com/searx/searx/master/searx/engines/bing.py\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n\"\"\"\n Bing (Web)\n\"\"\"\n\nimport re\nfrom urllib.parse import urlencode\nfrom lxml import html\nfrom searx import logger\nfrom searx.utils import eval_xpath, extract_text, match_language\n\nlogger = logger.getChild('bing engine')\n\n# about\nabout = {\n \"website\": 'https://www.bing.com',\n \"wikidata_id\": 'Q182496',\n \"official_api_documentation\": 'https://www.microsoft.com/en-us/bing/apis/bing-web-search-api',\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\n# engine dependent config\ncategories = ['general']\npaging = True\nsupported_languages_url = 'https://www.bing.com/account/general'\nlanguage_aliases = {'zh-CN': 'zh-CHS', 'zh-TW': 'zh-CHT', 'zh-HK': 'zh-CHT'}\n\n# search-url\nbase_url = 'https://www.bing.com/'\nsearch_string = 'search?{query}&first={offset}'\n\n\ndef _get_offset_from_pageno(pageno):\n return (pageno - 1) * 10 + 1\n\n\n# do search-request\ndef request(query, params):\n offset = _get_offset_from_pageno(params.get('pageno', 0))\n\n if params['language'] == 'all':\n lang = 'EN'\n else:\n lang = match_language(params['language'], supported_languages, language_aliases)\n\n query = 'language:{} {}'.format(lang.split('-')[0].upper(), query)\n\n search_path = search_string.format(\n query=urlencode({'q': query}),\n offset=offset)\n\n params['url'] = base_url + search_path\n params['headers']['User-Agent'] = ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '\n '(KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36')\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n result_len = 0\n\n dom = 
html.fromstring(resp.text)\n # parse results\n for result in eval_xpath(dom, '//div[@class=\"sa_cc\"]'):\n link = eval_xpath(result, './/h3/a')[0]\n url = link.attrib.get('href')\n pretty_url = extract_text(eval_xpath(result, './/cite'))\n title = extract_text(link)\n content = extract_text(eval_xpath(result, './/p'))\n\n # append result\n results.append({'url': url,\n 'pretty_url': pretty_url,\n 'title': title,\n 'content': content})\n\n # parse results again if nothing is found yet\n for result in eval_xpath(dom, '//li[@class=\"b_algo\"]'):\n link = eval_xpath(result, './/h2/a')[0]\n url = link.attrib.get('href')\n title = extract_text(link)\n content = extract_text(eval_xpath(result, './/p'))\n\n # append result\n results.append({'url': url,\n 'title': title,\n 'content': content})\n\n try:\n result_len_container = \"\".join(eval_xpath(dom, '//span[@class=\"sb_count\"]//text()'))\n if \"-\" in result_len_container:\n # Remove the part \"from-to\" for paginated request ...\n result_len_container = result_len_container[result_len_container.find(\"-\") * 2 + 2:]\n\n result_len_container = re.sub('[^0-9]', '', result_len_container)\n if len(result_len_container) > 0:\n result_len = int(result_len_container)\n except Exception as e:\n logger.debug('result error :\\n%s', e)\n\n if result_len and _get_offset_from_pageno(resp.search_params.get(\"pageno\", 0)) > result_len:\n return []\n\n results.append({'number_of_results': result_len})\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n lang_tags = set()\n\n setmkt = re.compile('setmkt=([^&]*)')\n dom = html.fromstring(resp.text)\n lang_links = eval_xpath(dom, \"//li/a[contains(@href, 'setmkt')]\")\n\n for a in lang_links:\n href = eval_xpath(a, './@href')[0]\n match = setmkt.search(href)\n l_tag = match.groups()[0]\n _lang, _nation = l_tag.split('-', 1)\n l_tag = _lang.lower() + '-' + _nation.upper()\n lang_tags.add(l_tag)\n\n return list(lang_tags)\n", "path": "searx/engines/bing.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n\"\"\"\n Bing (Web)\n\"\"\"\n\nimport re\nfrom urllib.parse import urlencode\nfrom lxml import html\nfrom searx import logger\nfrom searx.utils import eval_xpath, extract_text, match_language\n\nlogger = logger.getChild('bing engine')\n\n# about\nabout = {\n \"website\": 'https://www.bing.com',\n \"wikidata_id\": 'Q182496',\n \"official_api_documentation\": 'https://www.microsoft.com/en-us/bing/apis/bing-web-search-api',\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\n# engine dependent config\ncategories = ['general']\npaging = True\nsupported_languages_url = 'https://www.bing.com/account/general'\nlanguage_aliases = {'zh-CN': 'zh-CHS', 'zh-TW': 'zh-CHT', 'zh-HK': 'zh-CHT'}\n\n# search-url\nbase_url = 'https://www.bing.com/'\nsearch_string = 'search?{query}&first={offset}'\n\n\ndef _get_offset_from_pageno(pageno):\n return (pageno - 1) * 10 + 1\n\n\n# do search-request\ndef request(query, params):\n offset = _get_offset_from_pageno(params.get('pageno', 0))\n\n if params['language'] == 'all':\n lang = 'EN'\n else:\n lang = match_language(params['language'], supported_languages, language_aliases)\n\n query = 'language:{} {}'.format(lang.split('-')[0].upper(), query)\n\n search_path = search_string.format(\n query=urlencode({'q': query}),\n offset=offset)\n\n params['url'] = base_url + search_path\n params['headers']['Accept'] = 
'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n result_len = 0\n\n dom = html.fromstring(resp.text)\n # parse results\n for result in eval_xpath(dom, '//div[@class=\"sa_cc\"]'):\n link = eval_xpath(result, './/h3/a')[0]\n url = link.attrib.get('href')\n pretty_url = extract_text(eval_xpath(result, './/cite'))\n title = extract_text(link)\n content = extract_text(eval_xpath(result, './/p'))\n\n # append result\n results.append({'url': url,\n 'pretty_url': pretty_url,\n 'title': title,\n 'content': content})\n\n # parse results again if nothing is found yet\n for result in eval_xpath(dom, '//li[@class=\"b_algo\"]'):\n link = eval_xpath(result, './/h2/a')[0]\n url = link.attrib.get('href')\n title = extract_text(link)\n content = extract_text(eval_xpath(result, './/p'))\n\n # append result\n results.append({'url': url,\n 'title': title,\n 'content': content})\n\n try:\n result_len_container = \"\".join(eval_xpath(dom, '//span[@class=\"sb_count\"]//text()'))\n if \"-\" in result_len_container:\n # Remove the part \"from-to\" for paginated request ...\n result_len_container = result_len_container[result_len_container.find(\"-\") * 2 + 2:]\n\n result_len_container = re.sub('[^0-9]', '', result_len_container)\n if len(result_len_container) > 0:\n result_len = int(result_len_container)\n except Exception as e:\n logger.debug('result error :\\n%s', e)\n\n if result_len and _get_offset_from_pageno(resp.search_params.get(\"pageno\", 0)) > result_len:\n return []\n\n results.append({'number_of_results': result_len})\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n lang_tags = set()\n\n setmkt = re.compile('setmkt=([^&]*)')\n dom = html.fromstring(resp.text)\n lang_links = eval_xpath(dom, \"//li/a[contains(@href, 'setmkt')]\")\n\n for a in lang_links:\n href = eval_xpath(a, './@href')[0]\n match = setmkt.search(href)\n l_tag = match.groups()[0]\n _lang, _nation = l_tag.split('-', 1)\n l_tag = _lang.lower() + '-' + _nation.upper()\n lang_tags.add(l_tag)\n\n return list(lang_tags)\n", "path": "searx/engines/bing.py"}]} | 1,978 | 186 |
gh_patches_debug_23345 | rasdani/github-patches | git_diff | elastic__apm-agent-python-958 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Expand k8s pod ID discovery regex
Implementing elastic/apm#344
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/utils/cgroup.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 import os
32 import re
33
34 CGROUP_PATH = "/proc/self/cgroup"
35
36 SYSTEMD_SCOPE_SUFFIX = ".scope"
37
38 kubepods_regexp = re.compile(
39 r"(?:^/kubepods[\S]*/pod([^/]+)$)|(?:^/kubepods\.slice/kubepods-[^/]+\.slice/kubepods-[^/]+-pod([^/]+)\.slice$)"
40 )
41
42 container_id_regexp = re.compile(
43 "^(?:[0-9a-f]{64}|[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4,})$", re.IGNORECASE
44 )
45
46
47 def get_cgroup_container_metadata():
48 """
49 Reads docker/kubernetes metadata (container id, pod id) from /proc/self/cgroup
50
51 The result is a nested dictionary with the detected IDs, e.g.
52
53 {
54 "container": {"id": "2227daf62df6694645fee5df53c1f91271546a9560e8600a525690ae252b7f63"},
55 "pod": {"uid": "90d81341_92de_11e7_8cf2_507b9d4141fa"}
56 }
57
58 :return: a dictionary with the detected ids or {}
59 """
60 if not os.path.exists(CGROUP_PATH):
61 return {}
62 with open(CGROUP_PATH) as f:
63 return parse_cgroups(f) or {}
64
65
66 def parse_cgroups(filehandle):
67 """
68 Reads lines from a file handle and tries to parse docker container IDs and kubernetes Pod IDs.
69
70 See tests.utils.docker_tests.test_cgroup_parsing for a set of test cases
71
72 :param filehandle:
73 :return: nested dictionary or None
74 """
75 for line in filehandle:
76 parts = line.strip().split(":")
77 if len(parts) != 3:
78 continue
79 cgroup_path = parts[2]
80
81 # Depending on the filesystem driver used for cgroup
82 # management, the paths in /proc/pid/cgroup will have
83 # one of the following formats in a Docker container:
84 #
85 # systemd: /system.slice/docker-<container-ID>.scope
86 # cgroupfs: /docker/<container-ID>
87 #
88 # In a Kubernetes pod, the cgroup path will look like:
89 #
90 # systemd:/kubepods.slice/kubepods-<QoS-class>.slice/kubepods-<QoS-class>-pod<pod-UID>.slice/<container-iD>.scope
91 # cgroupfs:/kubepods/<QoS-class>/pod<pod-UID>/<container-iD>
92
93 directory, container_id = os.path.split(cgroup_path)
94 if container_id.endswith(SYSTEMD_SCOPE_SUFFIX):
95 container_id = container_id[: -len(SYSTEMD_SCOPE_SUFFIX)]
96 if "-" in container_id:
97 container_id = container_id.split("-", 1)[1]
98 kubepods_match = kubepods_regexp.match(directory)
99 if kubepods_match:
100 pod_id = kubepods_match.group(1)
101 if not pod_id:
102 pod_id = kubepods_match.group(2)
103 if pod_id:
104 pod_id = pod_id.replace("_", "-")
105 return {"container": {"id": container_id}, "kubernetes": {"pod": {"uid": pod_id}}}
106 elif container_id_regexp.match(container_id):
107 return {"container": {"id": container_id}}
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticapm/utils/cgroup.py b/elasticapm/utils/cgroup.py
--- a/elasticapm/utils/cgroup.py
+++ b/elasticapm/utils/cgroup.py
@@ -36,7 +36,7 @@
SYSTEMD_SCOPE_SUFFIX = ".scope"
kubepods_regexp = re.compile(
- r"(?:^/kubepods[\S]*/pod([^/]+)$)|(?:^/kubepods\.slice/kubepods-[^/]+\.slice/kubepods-[^/]+-pod([^/]+)\.slice$)"
+ r"(?:^/kubepods[\S]*/pod([^/]+)$)|(?:^/kubepods\.slice/(kubepods-[^/]+\.slice/)?kubepods[^/]*-pod([^/]+)\.slice$)"
)
container_id_regexp = re.compile(
@@ -97,9 +97,9 @@
container_id = container_id.split("-", 1)[1]
kubepods_match = kubepods_regexp.match(directory)
if kubepods_match:
- pod_id = kubepods_match.group(1)
+ pod_id = kubepods_match.group(1) # if first part of kubepods_regexp matched
if not pod_id:
- pod_id = kubepods_match.group(2)
+ pod_id = kubepods_match.group(3) # if second part of kubepods_regexp matched
if pod_id:
pod_id = pod_id.replace("_", "-")
return {"container": {"id": container_id}, "kubernetes": {"pod": {"uid": pod_id}}}
| {"golden_diff": "diff --git a/elasticapm/utils/cgroup.py b/elasticapm/utils/cgroup.py\n--- a/elasticapm/utils/cgroup.py\n+++ b/elasticapm/utils/cgroup.py\n@@ -36,7 +36,7 @@\n SYSTEMD_SCOPE_SUFFIX = \".scope\"\n \n kubepods_regexp = re.compile(\n- r\"(?:^/kubepods[\\S]*/pod([^/]+)$)|(?:^/kubepods\\.slice/kubepods-[^/]+\\.slice/kubepods-[^/]+-pod([^/]+)\\.slice$)\"\n+ r\"(?:^/kubepods[\\S]*/pod([^/]+)$)|(?:^/kubepods\\.slice/(kubepods-[^/]+\\.slice/)?kubepods[^/]*-pod([^/]+)\\.slice$)\"\n )\n \n container_id_regexp = re.compile(\n@@ -97,9 +97,9 @@\n container_id = container_id.split(\"-\", 1)[1]\n kubepods_match = kubepods_regexp.match(directory)\n if kubepods_match:\n- pod_id = kubepods_match.group(1)\n+ pod_id = kubepods_match.group(1) # if first part of kubepods_regexp matched\n if not pod_id:\n- pod_id = kubepods_match.group(2)\n+ pod_id = kubepods_match.group(3) # if second part of kubepods_regexp matched\n if pod_id:\n pod_id = pod_id.replace(\"_\", \"-\")\n return {\"container\": {\"id\": container_id}, \"kubernetes\": {\"pod\": {\"uid\": pod_id}}}\n", "issue": "Expand k8s pod ID discovery regex\nImplementing elastic/apm#344\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport os\nimport re\n\nCGROUP_PATH = \"/proc/self/cgroup\"\n\nSYSTEMD_SCOPE_SUFFIX = \".scope\"\n\nkubepods_regexp = re.compile(\n r\"(?:^/kubepods[\\S]*/pod([^/]+)$)|(?:^/kubepods\\.slice/kubepods-[^/]+\\.slice/kubepods-[^/]+-pod([^/]+)\\.slice$)\"\n)\n\ncontainer_id_regexp = re.compile(\n \"^(?:[0-9a-f]{64}|[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4,})$\", re.IGNORECASE\n)\n\n\ndef get_cgroup_container_metadata():\n \"\"\"\n Reads docker/kubernetes metadata (container id, pod id) from /proc/self/cgroup\n\n The result is a nested dictionary with the detected IDs, e.g.\n\n {\n \"container\": {\"id\": \"2227daf62df6694645fee5df53c1f91271546a9560e8600a525690ae252b7f63\"},\n \"pod\": {\"uid\": \"90d81341_92de_11e7_8cf2_507b9d4141fa\"}\n }\n\n :return: a dictionary with the detected ids or {}\n \"\"\"\n if not os.path.exists(CGROUP_PATH):\n return {}\n with open(CGROUP_PATH) as f:\n return parse_cgroups(f) or {}\n\n\ndef parse_cgroups(filehandle):\n \"\"\"\n Reads lines from a file handle and tries to parse docker container IDs and kubernetes Pod IDs.\n\n See tests.utils.docker_tests.test_cgroup_parsing for a set of test cases\n\n :param filehandle:\n :return: nested dictionary or None\n \"\"\"\n for line in filehandle:\n parts = line.strip().split(\":\")\n if len(parts) != 3:\n continue\n cgroup_path = parts[2]\n\n # Depending on the filesystem driver used for cgroup\n # management, the paths in /proc/pid/cgroup will have\n # one of the following formats in a Docker container:\n #\n # systemd: /system.slice/docker-<container-ID>.scope\n # cgroupfs: /docker/<container-ID>\n #\n # In a Kubernetes pod, the cgroup path will look like:\n #\n # systemd:/kubepods.slice/kubepods-<QoS-class>.slice/kubepods-<QoS-class>-pod<pod-UID>.slice/<container-iD>.scope\n # cgroupfs:/kubepods/<QoS-class>/pod<pod-UID>/<container-iD>\n\n directory, container_id = os.path.split(cgroup_path)\n if container_id.endswith(SYSTEMD_SCOPE_SUFFIX):\n container_id = container_id[: -len(SYSTEMD_SCOPE_SUFFIX)]\n if \"-\" in container_id:\n container_id = container_id.split(\"-\", 1)[1]\n kubepods_match = kubepods_regexp.match(directory)\n if kubepods_match:\n pod_id = kubepods_match.group(1)\n if not pod_id:\n pod_id = kubepods_match.group(2)\n if pod_id:\n pod_id = pod_id.replace(\"_\", \"-\")\n return {\"container\": {\"id\": container_id}, \"kubernetes\": {\"pod\": {\"uid\": pod_id}}}\n elif container_id_regexp.match(container_id):\n return {\"container\": {\"id\": container_id}}\n", "path": "elasticapm/utils/cgroup.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the 
above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport os\nimport re\n\nCGROUP_PATH = \"/proc/self/cgroup\"\n\nSYSTEMD_SCOPE_SUFFIX = \".scope\"\n\nkubepods_regexp = re.compile(\n r\"(?:^/kubepods[\\S]*/pod([^/]+)$)|(?:^/kubepods\\.slice/(kubepods-[^/]+\\.slice/)?kubepods[^/]*-pod([^/]+)\\.slice$)\"\n)\n\ncontainer_id_regexp = re.compile(\n \"^(?:[0-9a-f]{64}|[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4,})$\", re.IGNORECASE\n)\n\n\ndef get_cgroup_container_metadata():\n \"\"\"\n Reads docker/kubernetes metadata (container id, pod id) from /proc/self/cgroup\n\n The result is a nested dictionary with the detected IDs, e.g.\n\n {\n \"container\": {\"id\": \"2227daf62df6694645fee5df53c1f91271546a9560e8600a525690ae252b7f63\"},\n \"pod\": {\"uid\": \"90d81341_92de_11e7_8cf2_507b9d4141fa\"}\n }\n\n :return: a dictionary with the detected ids or {}\n \"\"\"\n if not os.path.exists(CGROUP_PATH):\n return {}\n with open(CGROUP_PATH) as f:\n return parse_cgroups(f) or {}\n\n\ndef parse_cgroups(filehandle):\n \"\"\"\n Reads lines from a file handle and tries to parse docker container IDs and kubernetes Pod IDs.\n\n See tests.utils.docker_tests.test_cgroup_parsing for a set of test cases\n\n :param filehandle:\n :return: nested dictionary or None\n \"\"\"\n for line in filehandle:\n parts = line.strip().split(\":\")\n if len(parts) != 3:\n continue\n cgroup_path = parts[2]\n\n # Depending on the filesystem driver used for cgroup\n # management, the paths in /proc/pid/cgroup will have\n # one of the following formats in a Docker container:\n #\n # systemd: /system.slice/docker-<container-ID>.scope\n # cgroupfs: /docker/<container-ID>\n #\n # In a Kubernetes pod, the cgroup path will look like:\n #\n # systemd:/kubepods.slice/kubepods-<QoS-class>.slice/kubepods-<QoS-class>-pod<pod-UID>.slice/<container-iD>.scope\n # cgroupfs:/kubepods/<QoS-class>/pod<pod-UID>/<container-iD>\n\n directory, container_id = os.path.split(cgroup_path)\n if container_id.endswith(SYSTEMD_SCOPE_SUFFIX):\n container_id = container_id[: -len(SYSTEMD_SCOPE_SUFFIX)]\n if \"-\" in container_id:\n container_id = container_id.split(\"-\", 1)[1]\n kubepods_match = kubepods_regexp.match(directory)\n if kubepods_match:\n pod_id = kubepods_match.group(1) # if first part of kubepods_regexp matched\n if not pod_id:\n pod_id = kubepods_match.group(3) # if second part of kubepods_regexp matched\n if pod_id:\n pod_id = pod_id.replace(\"_\", 
\"-\")\n return {\"container\": {\"id\": container_id}, \"kubernetes\": {\"pod\": {\"uid\": pod_id}}}\n elif container_id_regexp.match(container_id):\n return {\"container\": {\"id\": container_id}}\n", "path": "elasticapm/utils/cgroup.py"}]} | 1,656 | 383 |
gh_patches_debug_13515 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3267 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add openAPI Specification for UI-related databases endpoint
Generate an openAPI spec for the `databases` endpoint corresponding to the UI
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `config/settings/openapi.py`
Content:
```
1 def custom_preprocessing_hook(endpoints):
2 filtered = []
3 for (path, path_regex, method, callback) in endpoints:
4 # Remove all but DRF API endpoints
5 if path.startswith("/api/db/v0/databases/") or path.startswith("/api/db/v0/data_files/") or path.startswith("/api/db/v0/schemas/") or path.startswith("/api/db/v0/tables/"):
6 filtered.append((path, path_regex, method, callback))
7 return filtered
8
9
10 def remove_url_prefix_hook(result, **kwargs):
11 # Remove namespace and version URL prefix from the operation Id of the generated API schema
12 for path, path_info in result['paths'].items():
13 for method, operation in path_info.items():
14 operation_id = operation.get('operationId')
15 if operation_id:
16 if path.startswith('/api/db/v0/'):
17 operation['operationId'] = operation_id.replace('db_v0_', '')
18 elif path.startswith('/api/ui/v0/'):
19 operation['operationId'] = operation_id.replace('ui_v0_', '')
20
21 return result
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/config/settings/openapi.py b/config/settings/openapi.py
--- a/config/settings/openapi.py
+++ b/config/settings/openapi.py
@@ -1,9 +1,14 @@
def custom_preprocessing_hook(endpoints):
- filtered = []
- for (path, path_regex, method, callback) in endpoints:
- # Remove all but DRF API endpoints
- if path.startswith("/api/db/v0/databases/") or path.startswith("/api/db/v0/data_files/") or path.startswith("/api/db/v0/schemas/") or path.startswith("/api/db/v0/tables/"):
- filtered.append((path, path_regex, method, callback))
+ prefixes = [
+ "/api/db/v0/databases/",
+ "/api/db/v0/data_files/",
+ "/api/db/v0/schemas/",
+ "/api/db/v0/tables/",
+ "/api/db/v0/links/",
+ "/api/db/v0/queries/",
+ "/api/ui/v0/databases/"
+ ]
+ filtered = [(path, path_regex, method, callback) for path, path_regex, method, callback in endpoints if any(path.startswith(prefix) for prefix in prefixes)]
return filtered
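
A hypothetical smoke test for the rewritten hook — the endpoint tuples are made up, and the import path assumes the `config.settings` package resolves as a normal Python package:

```python
from config.settings.openapi import custom_preprocessing_hook

endpoints = [
    ("/api/ui/v0/databases/", r"^/api/ui/v0/databases/$", "GET", None),
    ("/api/ui/v0/users/", r"^/api/ui/v0/users/$", "GET", None),
]
# Only the whitelisted /api/ui/v0/databases/ endpoint survives:
print(custom_preprocessing_hook(endpoints))
# [('/api/ui/v0/databases/', '^/api/ui/v0/databases/$', 'GET', None)]
```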
| {"golden_diff": "diff --git a/config/settings/openapi.py b/config/settings/openapi.py\n--- a/config/settings/openapi.py\n+++ b/config/settings/openapi.py\n@@ -1,9 +1,14 @@\n def custom_preprocessing_hook(endpoints):\n- filtered = []\n- for (path, path_regex, method, callback) in endpoints:\n- # Remove all but DRF API endpoints\n- if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\") or path.startswith(\"/api/db/v0/schemas/\") or path.startswith(\"/api/db/v0/tables/\"):\n- filtered.append((path, path_regex, method, callback))\n+ prefixes = [\n+ \"/api/db/v0/databases/\",\n+ \"/api/db/v0/data_files/\",\n+ \"/api/db/v0/schemas/\",\n+ \"/api/db/v0/tables/\",\n+ \"/api/db/v0/links/\",\n+ \"/api/db/v0/queries/\",\n+ \"/api/ui/v0/databases/\"\n+ ]\n+ filtered = [(path, path_regex, method, callback) for path, path_regex, method, callback in endpoints if any(path.startswith(prefix) for prefix in prefixes)]\n return filtered\n", "issue": "Add openAPI Specification for UI related databases endpoint\nGenerate openAPI spec for `databases` endpoint corresponding to UI\n", "before_files": [{"content": "def custom_preprocessing_hook(endpoints):\n filtered = []\n for (path, path_regex, method, callback) in endpoints:\n # Remove all but DRF API endpoints\n if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\") or path.startswith(\"/api/db/v0/schemas/\") or path.startswith(\"/api/db/v0/tables/\"):\n filtered.append((path, path_regex, method, callback))\n return filtered\n\n\ndef remove_url_prefix_hook(result, **kwargs):\n # Remove namespace and version URL prefix from the operation Id of the generated API schema\n for path, path_info in result['paths'].items():\n for method, operation in path_info.items():\n operation_id = operation.get('operationId')\n if operation_id:\n if path.startswith('/api/db/v0/'):\n operation['operationId'] = operation_id.replace('db_v0_', '')\n elif path.startswith('/api/ui/v0/'):\n operation['operationId'] = operation_id.replace('ui_v0_', '')\n\n return result\n", "path": "config/settings/openapi.py"}], "after_files": [{"content": "def custom_preprocessing_hook(endpoints):\n prefixes = [\n \"/api/db/v0/databases/\",\n \"/api/db/v0/data_files/\",\n \"/api/db/v0/schemas/\",\n \"/api/db/v0/tables/\",\n \"/api/db/v0/links/\",\n \"/api/db/v0/queries/\",\n \"/api/ui/v0/databases/\"\n ]\n filtered = [(path, path_regex, method, callback) for path, path_regex, method, callback in endpoints if any(path.startswith(prefix) for prefix in prefixes)]\n return filtered\n\n\ndef remove_url_prefix_hook(result, **kwargs):\n # Remove namespace and version URL prefix from the operation Id of the generated API schema\n for path, path_info in result['paths'].items():\n for method, operation in path_info.items():\n operation_id = operation.get('operationId')\n if operation_id:\n if path.startswith('/api/db/v0/'):\n operation['operationId'] = operation_id.replace('db_v0_', '')\n elif path.startswith('/api/ui/v0/'):\n operation['operationId'] = operation_id.replace('ui_v0_', '')\n\n return result\n", "path": "config/settings/openapi.py"}]} | 545 | 262 |
gh_patches_debug_26362 | rasdani/github-patches | git_diff | watchdogpolska__feder-329 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Autocomplete for JST (territorial units) in MonitoringFilter
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `feder/monitorings/models.py`
Content:
```
1 from itertools import groupby
2
3 import reversion
4 from autoslug.fields import AutoSlugField
5 from django.conf import settings
6 from django.contrib.auth import get_user_model
7 from django.core.urlresolvers import reverse
8 from django.db import models
9 from django.utils.translation import ugettext_lazy as _
10 from guardian.models import GroupObjectPermissionBase, UserObjectPermissionBase
11 from model_utils.models import TimeStampedModel
12
13 from .validators import validate_template_syntax
14
15 _('Monitorings index')
16 _('Can add Monitoring')
17 _('Can change Monitoring')
18 _('Can delete Monitoring')
19
20 NOTIFY_HELP = _("Notify about new alerts person who can view alerts")
21
22
23 class MonitoringQuerySet(models.QuerySet):
24 def with_case_count(self):
25 return self.annotate(case_count=models.Count('case'))
26
27
28 @reversion.register()
29 class Monitoring(TimeStampedModel):
30 perm_model = 'monitoringuserobjectpermission'
31 name = models.CharField(verbose_name=_("Name"), max_length=50)
32 slug = AutoSlugField(populate_from='name', verbose_name=_("Slug"), unique=True)
33 user = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name=_("User"))
34 description = models.TextField(verbose_name=_("Description"), blank=True)
35 subject = models.CharField(verbose_name=_("Subject"), max_length=80)
36 template = models.TextField(verbose_name=_("Template"),
37 help_text=_("Use {{EMAIL}} for insert reply address"),
38 validators=[validate_template_syntax])
39 email_footer = models.TextField(default='',
40 verbose_name=_("Email footer"),
41 help_text=_("Footer for sent mail and replies"))
42 notify_alert = models.BooleanField(default=True,
43 verbose_name=_("Notify about alerts"),
44 help_text=NOTIFY_HELP)
45 objects = MonitoringQuerySet.as_manager()
46
47 class Meta:
48 verbose_name = _("Monitoring")
49 verbose_name_plural = _("Monitoring")
50 ordering = ['created', ]
51 permissions = (
52 ('add_questionary', _('Can add questionary')),
53 ('change_questionary', _('Can change questionary')),
54 ('delete_questionary', _('Can delete questionary')),
55 ('add_case', _('Can add case')),
56 ('change_case', _('Can change case')),
57 ('delete_case', _('Can delete case')),
58 ('add_task', _('Can add task')),
59 ('change_task', _('Can change task')),
60 ('delete_task', _('Can delete task')),
61 ('add_letter', _('Can add letter')),
62 ('reply', _('Can reply')),
63 ('add_draft', _('Add reply draft')),
64 ('change_letter', _('Can change task')),
65 ('delete_letter', _('Can delete letter')),
66 ('view_alert', _('Can view alert')),
67 ('change_alert', _('Can change alert')),
68 ('delete_alert', _('Can delete alert')),
69 ('manage_perm', _('Can manage perms')),
70 ('select_survey', _('Can select answer')),
71 ('view_log', _('Can view logs')),
72 )
73
74 def __unicode__(self):
75 return self.name
76
77 def get_users_with_perm(self, perm=None):
78 qs = get_user_model().objects.filter(**{self.perm_model + '__content_object': self})
79 if perm:
80 qs = qs.filter(**{self.perm_model + '__permission__codename': perm})
81 return qs.distinct().all()
82
83 def get_absolute_url(self):
84 return reverse('monitorings:details', kwargs={'slug': self.slug})
85
86 def permission_map(self):
87 dataset = (self.monitoringuserobjectpermission_set.select_related('permission', 'user').
88 order_by('permission').all())
89 user_list = {x.user for x in dataset}
90
91 def index_generate():
92 grouped = groupby(dataset, lambda x: x.permission)
93 for perm, users in grouped:
94 user_perm_list = [x.user for x in users]
95 yield perm, [(perm, (user in user_perm_list)) for user in user_list]
96
97 return user_list, index_generate()
98
99
100 class MonitoringUserObjectPermission(UserObjectPermissionBase):
101 content_object = models.ForeignKey(Monitoring)
102
103
104 class MonitoringGroupObjectPermission(GroupObjectPermissionBase):
105 content_object = models.ForeignKey(Monitoring)
106
```
Path: `feder/monitorings/filters.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import django_filters
3 from dal import autocomplete
4 from django.contrib.auth import get_user_model
5 from django.db.models import Count
6 from django.utils.translation import ugettext_lazy as _
7
8 from .models import Monitoring
9
10
11 class MonitoringFilter(django_filters.FilterSet):
12 created = django_filters.DateRangeFilter(label=_("Creation date"))
13
14 def __init__(self, *args, **kwargs):
15 super(MonitoringFilter, self).__init__(*args, **kwargs)
16 self.filters['name'].lookup_expr = 'icontains'
17 self.filters['name'].label = _("Name")
18 # Limit users select to which have any cases
19 qs = (get_user_model().objects.
20 annotate(case_count=Count('case')).
21 filter(case_count__gt=0).all())
22 self.filters['user'].extra['queryset'] = qs
23 self.filters['user'].widget = autocomplete.ModelSelect2(url='users:autocomplete')
24
25 class Meta:
26 model = Monitoring
27 fields = ['name', 'user', 'created']
28 order_by = ['created', '-created', '-case_count']
29 order_by = [
30 ('created', _('Creation date (ascending)')),
31 ('-created', _('Creation date (descending)')),
32 ('case_count', _('Cases count (ascending)')),
33 ('-case_count', _('Cases count (descending)')),
34 ]
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/feder/monitorings/filters.py b/feder/monitorings/filters.py
--- a/feder/monitorings/filters.py
+++ b/feder/monitorings/filters.py
@@ -6,10 +6,22 @@
from django.utils.translation import ugettext_lazy as _
from .models import Monitoring
+from teryt_tree.dal_ext.filters import VoivodeshipFilter, CountyFilter, CommunityFilter
class MonitoringFilter(django_filters.FilterSet):
created = django_filters.DateRangeFilter(label=_("Creation date"))
+ voivodeship = VoivodeshipFilter(
+ widget=autocomplete.ModelSelect2(url='teryt:voivodeship-autocomplete')
+ )
+ county = CountyFilter(
+ widget=autocomplete.ModelSelect2(url='teryt:county-autocomplete',
+ forward=['voivodeship'])
+ )
+ community = CommunityFilter(
+ widget=autocomplete.ModelSelect2(url='teryt:community-autocomplete',
+ forward=['county'])
+ )
def __init__(self, *args, **kwargs):
super(MonitoringFilter, self).__init__(*args, **kwargs)
diff --git a/feder/monitorings/models.py b/feder/monitorings/models.py
--- a/feder/monitorings/models.py
+++ b/feder/monitorings/models.py
@@ -24,6 +24,9 @@
def with_case_count(self):
return self.annotate(case_count=models.Count('case'))
+ def area(self, jst):
+ return self.filter(case__institution__jst__tree_id=jst.tree_id,
+ case__institution__jst__lft__range=(jst.lft, jst.rght))
@reversion.register()
class Monitoring(TimeStampedModel):
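
A hedged sketch of the new pieces in use; `JednostkaAdministracyjna` is the django-teryt-tree model behind the `teryt:*-autocomplete` URLs, and the primary key below is a made-up voivodeship code:

```python
from teryt_tree.models import JednostkaAdministracyjna
from feder.monitorings.models import Monitoring

jst = JednostkaAdministracyjna.objects.get(pk='02')  # hypothetical unit
qs = Monitoring.objects.area(jst)
# area() narrows by the unit's MPTT tree bounds, i.e. roughly:
#   WHERE jst.tree_id = <jst.tree_id>
#     AND jst.lft BETWEEN <jst.lft> AND <jst.rght>
```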
| {"golden_diff": "diff --git a/feder/monitorings/filters.py b/feder/monitorings/filters.py\n--- a/feder/monitorings/filters.py\n+++ b/feder/monitorings/filters.py\n@@ -6,10 +6,22 @@\n from django.utils.translation import ugettext_lazy as _\n \n from .models import Monitoring\n+from teryt_tree.dal_ext.filters import VoivodeshipFilter, CountyFilter, CommunityFilter\n \n \n class MonitoringFilter(django_filters.FilterSet):\n created = django_filters.DateRangeFilter(label=_(\"Creation date\"))\n+ voivodeship = VoivodeshipFilter(\n+ widget=autocomplete.ModelSelect2(url='teryt:voivodeship-autocomplete')\n+ )\n+ county = CountyFilter(\n+ widget=autocomplete.ModelSelect2(url='teryt:county-autocomplete',\n+ forward=['voivodeship'])\n+ )\n+ community = CommunityFilter(\n+ widget=autocomplete.ModelSelect2(url='teryt:community-autocomplete',\n+ forward=['county'])\n+ )\n \n def __init__(self, *args, **kwargs):\n super(MonitoringFilter, self).__init__(*args, **kwargs)\ndiff --git a/feder/monitorings/models.py b/feder/monitorings/models.py\n--- a/feder/monitorings/models.py\n+++ b/feder/monitorings/models.py\n@@ -24,6 +24,9 @@\n def with_case_count(self):\n return self.annotate(case_count=models.Count('case'))\n \n+ def area(self, jst):\n+ return self.filter(case__institution__jst__tree_id=jst.tree_id,\n+ case__institution__jst__lft__range=(jst.lft, jst.rght))\n \n @reversion.register()\n class Monitoring(TimeStampedModel):\n", "issue": "Autocomplete dla JST w MonitoringFilter\n\n", "before_files": [{"content": "from itertools import groupby\n\nimport reversion\nfrom autoslug.fields import AutoSlugField\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.core.urlresolvers import reverse\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom guardian.models import GroupObjectPermissionBase, UserObjectPermissionBase\nfrom model_utils.models import TimeStampedModel\n\nfrom .validators import validate_template_syntax\n\n_('Monitorings index')\n_('Can add Monitoring')\n_('Can change Monitoring')\n_('Can delete Monitoring')\n\nNOTIFY_HELP = _(\"Notify about new alerts person who can view alerts\")\n\n\nclass MonitoringQuerySet(models.QuerySet):\n def with_case_count(self):\n return self.annotate(case_count=models.Count('case'))\n\n\[email protected]()\nclass Monitoring(TimeStampedModel):\n perm_model = 'monitoringuserobjectpermission'\n name = models.CharField(verbose_name=_(\"Name\"), max_length=50)\n slug = AutoSlugField(populate_from='name', verbose_name=_(\"Slug\"), unique=True)\n user = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name=_(\"User\"))\n description = models.TextField(verbose_name=_(\"Description\"), blank=True)\n subject = models.CharField(verbose_name=_(\"Subject\"), max_length=80)\n template = models.TextField(verbose_name=_(\"Template\"),\n help_text=_(\"Use {{EMAIL}} for insert reply address\"),\n validators=[validate_template_syntax])\n email_footer = models.TextField(default='',\n verbose_name=_(\"Email footer\"),\n help_text=_(\"Footer for sent mail and replies\"))\n notify_alert = models.BooleanField(default=True,\n verbose_name=_(\"Notify about alerts\"),\n help_text=NOTIFY_HELP)\n objects = MonitoringQuerySet.as_manager()\n\n class Meta:\n verbose_name = _(\"Monitoring\")\n verbose_name_plural = _(\"Monitoring\")\n ordering = ['created', ]\n permissions = (\n ('add_questionary', _('Can add questionary')),\n ('change_questionary', _('Can change questionary')),\n ('delete_questionary', _('Can 
delete questionary')),\n ('add_case', _('Can add case')),\n ('change_case', _('Can change case')),\n ('delete_case', _('Can delete case')),\n ('add_task', _('Can add task')),\n ('change_task', _('Can change task')),\n ('delete_task', _('Can delete task')),\n ('add_letter', _('Can add letter')),\n ('reply', _('Can reply')),\n ('add_draft', _('Add reply draft')),\n ('change_letter', _('Can change task')),\n ('delete_letter', _('Can delete letter')),\n ('view_alert', _('Can view alert')),\n ('change_alert', _('Can change alert')),\n ('delete_alert', _('Can delete alert')),\n ('manage_perm', _('Can manage perms')),\n ('select_survey', _('Can select answer')),\n ('view_log', _('Can view logs')),\n )\n\n def __unicode__(self):\n return self.name\n\n def get_users_with_perm(self, perm=None):\n qs = get_user_model().objects.filter(**{self.perm_model + '__content_object': self})\n if perm:\n qs = qs.filter(**{self.perm_model + '__permission__codename': perm})\n return qs.distinct().all()\n\n def get_absolute_url(self):\n return reverse('monitorings:details', kwargs={'slug': self.slug})\n\n def permission_map(self):\n dataset = (self.monitoringuserobjectpermission_set.select_related('permission', 'user').\n order_by('permission').all())\n user_list = {x.user for x in dataset}\n\n def index_generate():\n grouped = groupby(dataset, lambda x: x.permission)\n for perm, users in grouped:\n user_perm_list = [x.user for x in users]\n yield perm, [(perm, (user in user_perm_list)) for user in user_list]\n\n return user_list, index_generate()\n\n\nclass MonitoringUserObjectPermission(UserObjectPermissionBase):\n content_object = models.ForeignKey(Monitoring)\n\n\nclass MonitoringGroupObjectPermission(GroupObjectPermissionBase):\n content_object = models.ForeignKey(Monitoring)\n", "path": "feder/monitorings/models.py"}, {"content": "# -*- coding: utf-8 -*-\nimport django_filters\nfrom dal import autocomplete\nfrom django.contrib.auth import get_user_model\nfrom django.db.models import Count\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom .models import Monitoring\n\n\nclass MonitoringFilter(django_filters.FilterSet):\n created = django_filters.DateRangeFilter(label=_(\"Creation date\"))\n\n def __init__(self, *args, **kwargs):\n super(MonitoringFilter, self).__init__(*args, **kwargs)\n self.filters['name'].lookup_expr = 'icontains'\n self.filters['name'].label = _(\"Name\")\n # Limit users select to which have any cases\n qs = (get_user_model().objects.\n annotate(case_count=Count('case')).\n filter(case_count__gt=0).all())\n self.filters['user'].extra['queryset'] = qs\n self.filters['user'].widget = autocomplete.ModelSelect2(url='users:autocomplete')\n\n class Meta:\n model = Monitoring\n fields = ['name', 'user', 'created']\n order_by = ['created', '-created', '-case_count']\n order_by = [\n ('created', _('Creation date (ascending)')),\n ('-created', _('Creation date (descending)')),\n ('case_count', _('Cases count (ascending)')),\n ('-case_count', _('Cases count (descending)')),\n ]\n", "path": "feder/monitorings/filters.py"}], "after_files": [{"content": "from itertools import groupby\n\nimport reversion\nfrom autoslug.fields import AutoSlugField\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.core.urlresolvers import reverse\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom guardian.models import GroupObjectPermissionBase, UserObjectPermissionBase\nfrom model_utils.models import 
TimeStampedModel\n\nfrom .validators import validate_template_syntax\n\n_('Monitorings index')\n_('Can add Monitoring')\n_('Can change Monitoring')\n_('Can delete Monitoring')\n\nNOTIFY_HELP = _(\"Notify about new alerts person who can view alerts\")\n\n\nclass MonitoringQuerySet(models.QuerySet):\n def with_case_count(self):\n return self.annotate(case_count=models.Count('case'))\n\n def area(self, jst):\n return self.filter(case__institution__jst__tree_id=jst.tree_id,\n case__institution__jst__lft__range=(jst.lft, jst.rght))\n\[email protected]()\nclass Monitoring(TimeStampedModel):\n perm_model = 'monitoringuserobjectpermission'\n name = models.CharField(verbose_name=_(\"Name\"), max_length=50)\n slug = AutoSlugField(populate_from='name', verbose_name=_(\"Slug\"), unique=True)\n user = models.ForeignKey(settings.AUTH_USER_MODEL, verbose_name=_(\"User\"))\n description = models.TextField(verbose_name=_(\"Description\"), blank=True)\n subject = models.CharField(verbose_name=_(\"Subject\"), max_length=80)\n template = models.TextField(verbose_name=_(\"Template\"),\n help_text=_(\"Use {{EMAIL}} for insert reply address\"),\n validators=[validate_template_syntax])\n email_footer = models.TextField(default='',\n verbose_name=_(\"Email footer\"),\n help_text=_(\"Footer for sent mail and replies\"))\n notify_alert = models.BooleanField(default=True,\n verbose_name=_(\"Notify about alerts\"),\n help_text=NOTIFY_HELP)\n objects = MonitoringQuerySet.as_manager()\n\n class Meta:\n verbose_name = _(\"Monitoring\")\n verbose_name_plural = _(\"Monitoring\")\n ordering = ['created', ]\n permissions = (\n ('add_questionary', _('Can add questionary')),\n ('change_questionary', _('Can change questionary')),\n ('delete_questionary', _('Can delete questionary')),\n ('add_case', _('Can add case')),\n ('change_case', _('Can change case')),\n ('delete_case', _('Can delete case')),\n ('add_task', _('Can add task')),\n ('change_task', _('Can change task')),\n ('delete_task', _('Can delete task')),\n ('add_letter', _('Can add letter')),\n ('reply', _('Can reply')),\n ('add_draft', _('Add reply draft')),\n ('change_letter', _('Can change task')),\n ('delete_letter', _('Can delete letter')),\n ('view_alert', _('Can view alert')),\n ('change_alert', _('Can change alert')),\n ('delete_alert', _('Can delete alert')),\n ('manage_perm', _('Can manage perms')),\n ('select_survey', _('Can select answer')),\n ('view_log', _('Can view logs')),\n )\n\n def __unicode__(self):\n return self.name\n\n def get_users_with_perm(self, perm=None):\n qs = get_user_model().objects.filter(**{self.perm_model + '__content_object': self})\n if perm:\n qs = qs.filter(**{self.perm_model + '__permission__codename': perm})\n return qs.distinct().all()\n\n def get_absolute_url(self):\n return reverse('monitorings:details', kwargs={'slug': self.slug})\n\n def permission_map(self):\n dataset = (self.monitoringuserobjectpermission_set.select_related('permission', 'user').\n order_by('permission').all())\n user_list = {x.user for x in dataset}\n\n def index_generate():\n grouped = groupby(dataset, lambda x: x.permission)\n for perm, users in grouped:\n user_perm_list = [x.user for x in users]\n yield perm, [(perm, (user in user_perm_list)) for user in user_list]\n\n return user_list, index_generate()\n\n\nclass MonitoringUserObjectPermission(UserObjectPermissionBase):\n content_object = models.ForeignKey(Monitoring)\n\n\nclass MonitoringGroupObjectPermission(GroupObjectPermissionBase):\n content_object = models.ForeignKey(Monitoring)\n", "path": 
"feder/monitorings/models.py"}, {"content": "# -*- coding: utf-8 -*-\nimport django_filters\nfrom dal import autocomplete\nfrom django.contrib.auth import get_user_model\nfrom django.db.models import Count\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom .models import Monitoring\nfrom teryt_tree.dal_ext.filters import VoivodeshipFilter, CountyFilter, CommunityFilter\n\n\nclass MonitoringFilter(django_filters.FilterSet):\n created = django_filters.DateRangeFilter(label=_(\"Creation date\"))\n voivodeship = VoivodeshipFilter(\n widget=autocomplete.ModelSelect2(url='teryt:voivodeship-autocomplete')\n )\n county = CountyFilter(\n widget=autocomplete.ModelSelect2(url='teryt:county-autocomplete',\n forward=['voivodeship'])\n )\n community = CommunityFilter(\n widget=autocomplete.ModelSelect2(url='teryt:community-autocomplete',\n forward=['county'])\n )\n\n def __init__(self, *args, **kwargs):\n super(MonitoringFilter, self).__init__(*args, **kwargs)\n self.filters['name'].lookup_expr = 'icontains'\n self.filters['name'].label = _(\"Name\")\n # Limit users select to which have any cases\n qs = (get_user_model().objects.\n annotate(case_count=Count('case')).\n filter(case_count__gt=0).all())\n self.filters['user'].extra['queryset'] = qs\n self.filters['user'].widget = autocomplete.ModelSelect2(url='users:autocomplete')\n\n class Meta:\n model = Monitoring\n fields = ['name', 'user', 'created']\n order_by = ['created', '-created', '-case_count']\n order_by = [\n ('created', _('Creation date (ascending)')),\n ('-created', _('Creation date (descending)')),\n ('case_count', _('Cases count (ascending)')),\n ('-case_count', _('Cases count (descending)')),\n ]\n", "path": "feder/monitorings/filters.py"}]} | 1,699 | 389 |
gh_patches_debug_29515 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-1512 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Resources from third party add-ons are not being included in compiled plone-legacy bundle
It seems JS resources registered in Plone 5 using the old approach (`jsregistry.xml`) are not included in the final compilation: I installed an add-on and, even though I can see the JS resources listed in `default.js`, the source code is not present.
If I enable development mode, then I can see the source code included in `plone-legacy-compiled.js` and it's executed normally.
--- END ISSUE ---
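A minimal sketch of the suspected fix direction — checking persistent `++plone++` overrides before falling back to normal traversal. The `isFile`/`readFile` directory API and the `++plone++` prefix handling are assumptions based on `plone.resource` conventions, not a confirmed implementation:

```python
def resolve_resource(context, path, overrides):
    """Return resource contents, preferring a persistent override if present.

    `overrides` is assumed to behave like an IResourceDirectory
    (isFile()/readFile()); plain traversal is the fallback, as in
    combine.py's get_resource().
    """
    prefix = '++plone++'
    if path.startswith(prefix):
        filepath = path[len(prefix):]
        if overrides.isFile(filepath):
            # A customized copy in the override directory wins.
            return overrides.readFile(filepath)
    return context.unrestrictedTraverse(path)()
```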
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Products/CMFPlone/resources/browser/combine.py`
Content:
```
1 import re
2 from zExceptions import NotFound
3 from Acquisition import aq_base
4 from datetime import datetime
5 from plone.registry.interfaces import IRegistry
6 from plone.resource.file import FilesystemFile
7 from plone.resource.interfaces import IResourceDirectory
8 from Products.CMFPlone.interfaces import IBundleRegistry
9 from Products.CMFPlone.interfaces.resources import (
10 OVERRIDE_RESOURCE_DIRECTORY_NAME,
11 )
12 from StringIO import StringIO
13 from zope.component import getUtility
14 from zope.component import queryUtility
15
16 PRODUCTION_RESOURCE_DIRECTORY = "production"
17
18
19 def get_production_resource_directory():
20 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
21 if persistent_directory is None:
22 return ''
23 container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
24 try:
25 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
26 except NotFound:
27 return "%s/++unique++1" % PRODUCTION_RESOURCE_DIRECTORY
28 timestamp = production_folder.readFile('timestamp.txt')
29 return "%s/++unique++%s" % (
30 PRODUCTION_RESOURCE_DIRECTORY, timestamp)
31
32
33 def get_resource(context, path):
34 resource = context.unrestrictedTraverse(path)
35 if isinstance(resource, FilesystemFile):
36 (directory, sep, filename) = path.rpartition('/')
37 return context.unrestrictedTraverse(directory).readFile(filename)
38 else:
39 if hasattr(aq_base(resource), 'GET'):
40 # for FileResource
41 return resource.GET()
42 else:
43 # any BrowserView
44 return resource()
45
46
47 def write_js(context, folder, meta_bundle):
48 registry = getUtility(IRegistry)
49 resources = []
50
51 # default resources
52 if meta_bundle == 'default' and registry.records.get(
53 'plone.resources/jquery.js'
54 ):
55 resources.append(get_resource(context,
56 registry.records['plone.resources/jquery.js'].value))
57 resources.append(get_resource(context,
58 registry.records['plone.resources.requirejs'].value))
59 resources.append(get_resource(context,
60 registry.records['plone.resources.configjs'].value))
61
62 # bundles
63 bundles = registry.collectionOfInterface(
64 IBundleRegistry, prefix="plone.bundles", check=False)
65 for bundle in bundles.values():
66 if bundle.merge_with == meta_bundle:
67 resources.append(get_resource(context, bundle.jscompilation))
68
69 fi = StringIO()
70 for script in resources:
71 fi.write(script + '\n')
72 folder.writeFile(meta_bundle + ".js", fi)
73
74
75 def write_css(context, folder, meta_bundle):
76 registry = getUtility(IRegistry)
77 resources = []
78
79 bundles = registry.collectionOfInterface(
80 IBundleRegistry, prefix="plone.bundles", check=False)
81 for bundle in bundles.values():
82 if bundle.merge_with == meta_bundle:
83 css = get_resource(context, bundle.csscompilation)
84 # Preserve relative urls:
85 # we prefix with '../'' any url not starting with '/'
86 # or http: or data:
87 css = re.sub(
88 r"""(url\(['"]?(?!['"]?([a-z]+:|\/)))""",
89 r'\1../',
90 css)
91 resources.append(css)
92
93 fi = StringIO()
94 for script in resources:
95 fi.write(script + '\n')
96 folder.writeFile(meta_bundle + ".css", fi)
97
98
99 def combine_bundles(context):
100 persistent_directory = queryUtility(IResourceDirectory, name="persistent")
101 if persistent_directory is None:
102 return
103 if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:
104 persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)
105 container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
106 if PRODUCTION_RESOURCE_DIRECTORY not in container:
107 container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)
108 production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
109
110 # store timestamp
111 fi = StringIO()
112 fi.write(datetime.now().isoformat())
113 production_folder.writeFile("timestamp.txt", fi)
114
115 # generate new combined bundles
116 write_js(context, production_folder, 'default')
117 write_js(context, production_folder, 'logged-in')
118 write_css(context, production_folder, 'default')
119 write_css(context, production_folder, 'logged-in')
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py
--- a/Products/CMFPlone/resources/browser/combine.py
+++ b/Products/CMFPlone/resources/browser/combine.py
@@ -31,6 +31,14 @@
def get_resource(context, path):
+ if path.startswith('++plone++'):
+ # ++plone++ resources can be customized, we return their override
+ # value if any
+ overrides = get_override_directory(context)
+ filepath = path[9:]
+ if overrides.isFile(filepath):
+ return overrides.readFile(filepath)
+
resource = context.unrestrictedTraverse(path)
if isinstance(resource, FilesystemFile):
(directory, sep, filename) = path.rpartition('/')
@@ -96,13 +104,17 @@
folder.writeFile(meta_bundle + ".css", fi)
-def combine_bundles(context):
+def get_override_directory(context):
persistent_directory = queryUtility(IResourceDirectory, name="persistent")
if persistent_directory is None:
return
if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:
persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)
- container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
+ return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]
+
+
+def combine_bundles(context):
+ container = get_override_directory(context)
if PRODUCTION_RESOURCE_DIRECTORY not in container:
container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)
production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]
| {"golden_diff": "diff --git a/Products/CMFPlone/resources/browser/combine.py b/Products/CMFPlone/resources/browser/combine.py\n--- a/Products/CMFPlone/resources/browser/combine.py\n+++ b/Products/CMFPlone/resources/browser/combine.py\n@@ -31,6 +31,14 @@\n \n \n def get_resource(context, path):\n+ if path.startswith('++plone++'):\n+ # ++plone++ resources can be customized, we return their override\n+ # value if any\n+ overrides = get_override_directory(context)\n+ filepath = path[9:]\n+ if overrides.isFile(filepath):\n+ return overrides.readFile(filepath)\n+\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n@@ -96,13 +104,17 @@\n folder.writeFile(meta_bundle + \".css\", fi)\n \n \n-def combine_bundles(context):\n+def get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n- container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n+ return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n+\n+\n+def combine_bundles(context):\n+ container = get_override_directory(context)\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n", "issue": "Resources from third party add-ons are not being included in compiled plone-legacy bundle\nSeems JS resources registered in Plone 5 using old approach (`jsregistry.xml`) are not included in the final compilation: I installed an add-on and, even as I can see the JS resources listed in `default.js`, the source code is not present.\n\nIf I enable development mode, then I can see the source code included in `plone-legacy-compiled.js` and it's executed normally.\n\n", "before_files": [{"content": "import re\nfrom zExceptions import NotFound\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom Products.CMFPlone.interfaces.resources import (\n OVERRIDE_RESOURCE_DIRECTORY_NAME,\n)\nfrom StringIO import StringIO\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nPRODUCTION_RESOURCE_DIRECTORY = \"production\"\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return \"%s/++unique++%s\" % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n else:\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n return resource.GET()\n else:\n # any BrowserView\n return resource()\n\n\ndef write_js(context, folder, 
meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n # default resources\n if meta_bundle == 'default' and registry.records.get(\n 'plone.resources/jquery.js'\n ):\n resources.append(get_resource(context,\n registry.records['plone.resources/jquery.js'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.requirejs'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.configjs'].value))\n\n # bundles\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n resources.append(get_resource(context, bundle.jscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".js\", fi)\n\n\ndef write_css(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n css = get_resource(context, bundle.csscompilation)\n # Preserve relative urls:\n # we prefix with '../'' any url not starting with '/'\n # or http: or data:\n css = re.sub(\n r\"\"\"(url\\(['\"]?(?!['\"]?([a-z]+:|\\/)))\"\"\",\n r'\\1../',\n css)\n resources.append(css)\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".css\", fi)\n\n\ndef combine_bundles(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n production_folder.writeFile(\"timestamp.txt\", fi)\n\n # generate new combined bundles\n write_js(context, production_folder, 'default')\n write_js(context, production_folder, 'logged-in')\n write_css(context, production_folder, 'default')\n write_css(context, production_folder, 'logged-in')\n", "path": "Products/CMFPlone/resources/browser/combine.py"}], "after_files": [{"content": "import re\nfrom zExceptions import NotFound\nfrom Acquisition import aq_base\nfrom datetime import datetime\nfrom plone.registry.interfaces import IRegistry\nfrom plone.resource.file import FilesystemFile\nfrom plone.resource.interfaces import IResourceDirectory\nfrom Products.CMFPlone.interfaces import IBundleRegistry\nfrom Products.CMFPlone.interfaces.resources import (\n OVERRIDE_RESOURCE_DIRECTORY_NAME,\n)\nfrom StringIO import StringIO\nfrom zope.component import getUtility\nfrom zope.component import queryUtility\n\nPRODUCTION_RESOURCE_DIRECTORY = \"production\"\n\n\ndef get_production_resource_directory():\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return ''\n container = persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n try:\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n except NotFound:\n return \"%s/++unique++1\" % PRODUCTION_RESOURCE_DIRECTORY\n timestamp = production_folder.readFile('timestamp.txt')\n return 
\"%s/++unique++%s\" % (\n PRODUCTION_RESOURCE_DIRECTORY, timestamp)\n\n\ndef get_resource(context, path):\n if path.startswith('++plone++'):\n # ++plone++ resources can be customized, we return their override\n # value if any\n overrides = get_override_directory(context)\n filepath = path[9:]\n if overrides.isFile(filepath):\n return overrides.readFile(filepath)\n\n resource = context.unrestrictedTraverse(path)\n if isinstance(resource, FilesystemFile):\n (directory, sep, filename) = path.rpartition('/')\n return context.unrestrictedTraverse(directory).readFile(filename)\n else:\n if hasattr(aq_base(resource), 'GET'):\n # for FileResource\n return resource.GET()\n else:\n # any BrowserView\n return resource()\n\n\ndef write_js(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n # default resources\n if meta_bundle == 'default' and registry.records.get(\n 'plone.resources/jquery.js'\n ):\n resources.append(get_resource(context,\n registry.records['plone.resources/jquery.js'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.requirejs'].value))\n resources.append(get_resource(context,\n registry.records['plone.resources.configjs'].value))\n\n # bundles\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n resources.append(get_resource(context, bundle.jscompilation))\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".js\", fi)\n\n\ndef write_css(context, folder, meta_bundle):\n registry = getUtility(IRegistry)\n resources = []\n\n bundles = registry.collectionOfInterface(\n IBundleRegistry, prefix=\"plone.bundles\", check=False)\n for bundle in bundles.values():\n if bundle.merge_with == meta_bundle:\n css = get_resource(context, bundle.csscompilation)\n # Preserve relative urls:\n # we prefix with '../'' any url not starting with '/'\n # or http: or data:\n css = re.sub(\n r\"\"\"(url\\(['\"]?(?!['\"]?([a-z]+:|\\/)))\"\"\",\n r'\\1../',\n css)\n resources.append(css)\n\n fi = StringIO()\n for script in resources:\n fi.write(script + '\\n')\n folder.writeFile(meta_bundle + \".css\", fi)\n\n\ndef get_override_directory(context):\n persistent_directory = queryUtility(IResourceDirectory, name=\"persistent\")\n if persistent_directory is None:\n return\n if OVERRIDE_RESOURCE_DIRECTORY_NAME not in persistent_directory:\n persistent_directory.makeDirectory(OVERRIDE_RESOURCE_DIRECTORY_NAME)\n return persistent_directory[OVERRIDE_RESOURCE_DIRECTORY_NAME]\n\n\ndef combine_bundles(context):\n container = get_override_directory(context)\n if PRODUCTION_RESOURCE_DIRECTORY not in container:\n container.makeDirectory(PRODUCTION_RESOURCE_DIRECTORY)\n production_folder = container[PRODUCTION_RESOURCE_DIRECTORY]\n\n # store timestamp\n fi = StringIO()\n fi.write(datetime.now().isoformat())\n production_folder.writeFile(\"timestamp.txt\", fi)\n\n # generate new combined bundles\n write_js(context, production_folder, 'default')\n write_js(context, production_folder, 'logged-in')\n write_css(context, production_folder, 'default')\n write_css(context, production_folder, 'logged-in')\n", "path": "Products/CMFPlone/resources/browser/combine.py"}]} | 1,482 | 339 |
gh_patches_debug_24757 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-7635 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CT-2479] replace all instances of set-output and node16
Details in https://github.com/dbt-labs/actions/issues/39.
### Acceptance Criteria
- [ ] Verified there are no workflows to update
_or_
- [ ] removed all uses of `set-output` - either directly or by updating any marketplace actions we reference
- [ ] removed all references to node16 - either directly or by updating any marketplace actions we reference
- [ ] backport changes
--- END ISSUE ---
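For reference, the two output mechanisms differ as sketched below; the exact `GITHUB_OUTPUT` handling is an assumption based on GitHub's documented replacement for the deprecated workflow command, not code from this repository:

```python
import os

def set_step_output(name: str, value: str) -> None:
    """Emit a step output via the GITHUB_OUTPUT file (the supported path).

    Deprecated equivalent that this issue asks to remove:
        print(f"::set-output name={name}::{value}")
    """
    github_output = os.environ["GITHUB_OUTPUT"]  # file path set by the runner
    with open(github_output, "a", encoding="utf-8") as fh:
        fh.write(f"{name}={value}\n")  # one name=value pair per line
```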
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `.github/actions/latest-wrangler/main.py`
Content:
```
1 import os
2 import sys
3 import requests
4 from distutils.util import strtobool
5 from typing import Union
6 from packaging.version import parse, Version
7
8 if __name__ == "__main__":
9
10 # get inputs
11 package = os.environ["INPUT_PACKAGE"]
12 new_version = parse(os.environ["INPUT_NEW_VERSION"])
13 gh_token = os.environ["INPUT_GH_TOKEN"]
14 halt_on_missing = strtobool(os.environ.get("INPUT_HALT_ON_MISSING", "False"))
15
16 # get package metadata from github
17 package_request = requests.get(
18 f"https://api.github.com/orgs/dbt-labs/packages/container/{package}/versions",
19 auth=("", gh_token),
20 )
21 package_meta = package_request.json()
22
23 # Log info if we don't get a 200
24 if package_request.status_code != 200:
25 print(f"Call to GH API failed: {package_request.status_code} {package_meta['message']}")
26
27 # Make an early exit if there is no matching package in github
28 if package_request.status_code == 404:
29 if halt_on_missing:
30 sys.exit(1)
31 else:
32 # everything is the latest if the package doesn't exist
33 print(f"::set-output name=latest::{True}")
34 print(f"::set-output name=minor_latest::{True}")
35 sys.exit(0)
36
37 # TODO: verify package meta is "correct"
38 # https://github.com/dbt-labs/dbt-core/issues/4640
39
40 # map versions and tags
41 version_tag_map = {
42 version["id"]: version["metadata"]["container"]["tags"] for version in package_meta
43 }
44
45 # is pre-release
46 pre_rel = True if any(x in str(new_version) for x in ["a", "b", "rc"]) else False
47
48 # semver of current latest
49 for version, tags in version_tag_map.items():
50 if "latest" in tags:
51 # N.B. This seems counterintuitive, but we expect any version tagged
52 # 'latest' to have exactly three associated tags:
53 # latest, major.minor.latest, and major.minor.patch.
54 # Subtracting everything that contains the string 'latest' gets us
55 # the major.minor.patch which is what's needed for comparison.
56 current_latest = parse([tag for tag in tags if "latest" not in tag][0])
57 else:
58 current_latest = False
59
60 # semver of current_minor_latest
61 for version, tags in version_tag_map.items():
62 if f"{new_version.major}.{new_version.minor}.latest" in tags:
63 # Similar to above, only now we expect exactly two tags:
64 # major.minor.patch and major.minor.latest
65 current_minor_latest = parse([tag for tag in tags if "latest" not in tag][0])
66 else:
67 current_minor_latest = False
68
69 def is_latest(
70 pre_rel: bool, new_version: Version, remote_latest: Union[bool, Version]
71 ) -> bool:
72 """Determine if a given contaier should be tagged 'latest' based on:
73 - it's pre-release status
74 - it's version
75 - the version of a previously identified container tagged 'latest'
76
77 :param pre_rel: Wether or not the version of the new container is a pre-release
78 :param new_version: The version of the new container
79 :param remote_latest: The version of the previously identified container that's
80 already tagged latest or False
81 """
82 # is a pre-release = not latest
83 if pre_rel:
84 return False
85 # + no latest tag found = is latest
86 if not remote_latest:
87 return True
88 # + if remote version is lower than current = is latest, else not latest
89 return True if remote_latest <= new_version else False
90
91 latest = is_latest(pre_rel, new_version, current_latest)
92 minor_latest = is_latest(pre_rel, new_version, current_minor_latest)
93
94 print(f"::set-output name=latest::{latest}")
95 print(f"::set-output name=minor_latest::{minor_latest}")
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/.github/actions/latest-wrangler/main.py b/.github/actions/latest-wrangler/main.py
--- a/.github/actions/latest-wrangler/main.py
+++ b/.github/actions/latest-wrangler/main.py
@@ -28,11 +28,12 @@
if package_request.status_code == 404:
if halt_on_missing:
sys.exit(1)
- else:
- # everything is the latest if the package doesn't exist
- print(f"::set-output name=latest::{True}")
- print(f"::set-output name=minor_latest::{True}")
- sys.exit(0)
+ # everything is the latest if the package doesn't exist
+ github_output = os.environ.get("GITHUB_OUTPUT")
+ with open(github_output, "at", encoding="utf-8") as gh_output:
+ gh_output.write("latest=True")
+ gh_output.write("minor_latest=True")
+ sys.exit(0)
# TODO: verify package meta is "correct"
# https://github.com/dbt-labs/dbt-core/issues/4640
@@ -91,5 +92,7 @@
latest = is_latest(pre_rel, new_version, current_latest)
minor_latest = is_latest(pre_rel, new_version, current_minor_latest)
- print(f"::set-output name=latest::{latest}")
- print(f"::set-output name=minor_latest::{minor_latest}")
+ github_output = os.environ.get("GITHUB_OUTPUT")
+ with open(github_output, "at", encoding="utf-8") as gh_output:
+ gh_output.write(f"latest={latest}")
+ gh_output.write(f"minor_latest={minor_latest}")
| {"golden_diff": "diff --git a/.github/actions/latest-wrangler/main.py b/.github/actions/latest-wrangler/main.py\n--- a/.github/actions/latest-wrangler/main.py\n+++ b/.github/actions/latest-wrangler/main.py\n@@ -28,11 +28,12 @@\n if package_request.status_code == 404:\n if halt_on_missing:\n sys.exit(1)\n- else:\n- # everything is the latest if the package doesn't exist\n- print(f\"::set-output name=latest::{True}\")\n- print(f\"::set-output name=minor_latest::{True}\")\n- sys.exit(0)\n+ # everything is the latest if the package doesn't exist\n+ github_output = os.environ.get(\"GITHUB_OUTPUT\")\n+ with open(github_output, \"at\", encoding=\"utf-8\") as gh_output:\n+ gh_output.write(\"latest=True\")\n+ gh_output.write(\"minor_latest=True\")\n+ sys.exit(0)\n \n # TODO: verify package meta is \"correct\"\n # https://github.com/dbt-labs/dbt-core/issues/4640\n@@ -91,5 +92,7 @@\n latest = is_latest(pre_rel, new_version, current_latest)\n minor_latest = is_latest(pre_rel, new_version, current_minor_latest)\n \n- print(f\"::set-output name=latest::{latest}\")\n- print(f\"::set-output name=minor_latest::{minor_latest}\")\n+ github_output = os.environ.get(\"GITHUB_OUTPUT\")\n+ with open(github_output, \"at\", encoding=\"utf-8\") as gh_output:\n+ gh_output.write(f\"latest={latest}\")\n+ gh_output.write(f\"minor_latest={minor_latest}\")\n", "issue": "[CT-2479] replace all instances of set-output and node16\nDetails in https://github.com/dbt-labs/actions/issues/39.\r\n\r\n### Acceptance Criteria\r\n- [ ] Verified there are no workflows to update\r\n_or_\r\n- [ ] removed all uses of `set-output` - either directly or up updating any marketplace actions we reference\r\n- [ ] removed all references to node16 - either directly or up updating any marketplace actions we reference\r\n- [ ] backport changes\n", "before_files": [{"content": "import os\nimport sys\nimport requests\nfrom distutils.util import strtobool\nfrom typing import Union\nfrom packaging.version import parse, Version\n\nif __name__ == \"__main__\":\n\n # get inputs\n package = os.environ[\"INPUT_PACKAGE\"]\n new_version = parse(os.environ[\"INPUT_NEW_VERSION\"])\n gh_token = os.environ[\"INPUT_GH_TOKEN\"]\n halt_on_missing = strtobool(os.environ.get(\"INPUT_HALT_ON_MISSING\", \"False\"))\n\n # get package metadata from github\n package_request = requests.get(\n f\"https://api.github.com/orgs/dbt-labs/packages/container/{package}/versions\",\n auth=(\"\", gh_token),\n )\n package_meta = package_request.json()\n\n # Log info if we don't get a 200\n if package_request.status_code != 200:\n print(f\"Call to GH API failed: {package_request.status_code} {package_meta['message']}\")\n\n # Make an early exit if there is no matching package in github\n if package_request.status_code == 404:\n if halt_on_missing:\n sys.exit(1)\n else:\n # everything is the latest if the package doesn't exist\n print(f\"::set-output name=latest::{True}\")\n print(f\"::set-output name=minor_latest::{True}\")\n sys.exit(0)\n\n # TODO: verify package meta is \"correct\"\n # https://github.com/dbt-labs/dbt-core/issues/4640\n\n # map versions and tags\n version_tag_map = {\n version[\"id\"]: version[\"metadata\"][\"container\"][\"tags\"] for version in package_meta\n }\n\n # is pre-release\n pre_rel = True if any(x in str(new_version) for x in [\"a\", \"b\", \"rc\"]) else False\n\n # semver of current latest\n for version, tags in version_tag_map.items():\n if \"latest\" in tags:\n # N.B. 
This seems counterintuitive, but we expect any version tagged\n # 'latest' to have exactly three associated tags:\n # latest, major.minor.latest, and major.minor.patch.\n # Subtracting everything that contains the string 'latest' gets us\n # the major.minor.patch which is what's needed for comparison.\n current_latest = parse([tag for tag in tags if \"latest\" not in tag][0])\n else:\n current_latest = False\n\n # semver of current_minor_latest\n for version, tags in version_tag_map.items():\n if f\"{new_version.major}.{new_version.minor}.latest\" in tags:\n # Similar to above, only now we expect exactly two tags:\n # major.minor.patch and major.minor.latest\n current_minor_latest = parse([tag for tag in tags if \"latest\" not in tag][0])\n else:\n current_minor_latest = False\n\n def is_latest(\n pre_rel: bool, new_version: Version, remote_latest: Union[bool, Version]\n ) -> bool:\n \"\"\"Determine if a given contaier should be tagged 'latest' based on:\n - it's pre-release status\n - it's version\n - the version of a previously identified container tagged 'latest'\n\n :param pre_rel: Wether or not the version of the new container is a pre-release\n :param new_version: The version of the new container\n :param remote_latest: The version of the previously identified container that's\n already tagged latest or False\n \"\"\"\n # is a pre-release = not latest\n if pre_rel:\n return False\n # + no latest tag found = is latest\n if not remote_latest:\n return True\n # + if remote version is lower than current = is latest, else not latest\n return True if remote_latest <= new_version else False\n\n latest = is_latest(pre_rel, new_version, current_latest)\n minor_latest = is_latest(pre_rel, new_version, current_minor_latest)\n\n print(f\"::set-output name=latest::{latest}\")\n print(f\"::set-output name=minor_latest::{minor_latest}\")\n", "path": ".github/actions/latest-wrangler/main.py"}], "after_files": [{"content": "import os\nimport sys\nimport requests\nfrom distutils.util import strtobool\nfrom typing import Union\nfrom packaging.version import parse, Version\n\nif __name__ == \"__main__\":\n\n # get inputs\n package = os.environ[\"INPUT_PACKAGE\"]\n new_version = parse(os.environ[\"INPUT_NEW_VERSION\"])\n gh_token = os.environ[\"INPUT_GH_TOKEN\"]\n halt_on_missing = strtobool(os.environ.get(\"INPUT_HALT_ON_MISSING\", \"False\"))\n\n # get package metadata from github\n package_request = requests.get(\n f\"https://api.github.com/orgs/dbt-labs/packages/container/{package}/versions\",\n auth=(\"\", gh_token),\n )\n package_meta = package_request.json()\n\n # Log info if we don't get a 200\n if package_request.status_code != 200:\n print(f\"Call to GH API failed: {package_request.status_code} {package_meta['message']}\")\n\n # Make an early exit if there is no matching package in github\n if package_request.status_code == 404:\n if halt_on_missing:\n sys.exit(1)\n # everything is the latest if the package doesn't exist\n github_output = os.environ.get(\"GITHUB_OUTPUT\")\n with open(github_output, \"at\", encoding=\"utf-8\") as gh_output:\n gh_output.write(\"latest=True\")\n gh_output.write(\"minor_latest=True\")\n sys.exit(0)\n\n # TODO: verify package meta is \"correct\"\n # https://github.com/dbt-labs/dbt-core/issues/4640\n\n # map versions and tags\n version_tag_map = {\n version[\"id\"]: version[\"metadata\"][\"container\"][\"tags\"] for version in package_meta\n }\n\n # is pre-release\n pre_rel = True if any(x in str(new_version) for x in [\"a\", \"b\", \"rc\"]) else False\n\n # semver 
of current latest\n for version, tags in version_tag_map.items():\n if \"latest\" in tags:\n # N.B. This seems counterintuitive, but we expect any version tagged\n # 'latest' to have exactly three associated tags:\n # latest, major.minor.latest, and major.minor.patch.\n # Subtracting everything that contains the string 'latest' gets us\n # the major.minor.patch which is what's needed for comparison.\n current_latest = parse([tag for tag in tags if \"latest\" not in tag][0])\n else:\n current_latest = False\n\n # semver of current_minor_latest\n for version, tags in version_tag_map.items():\n if f\"{new_version.major}.{new_version.minor}.latest\" in tags:\n # Similar to above, only now we expect exactly two tags:\n # major.minor.patch and major.minor.latest\n current_minor_latest = parse([tag for tag in tags if \"latest\" not in tag][0])\n else:\n current_minor_latest = False\n\n def is_latest(\n pre_rel: bool, new_version: Version, remote_latest: Union[bool, Version]\n ) -> bool:\n \"\"\"Determine if a given contaier should be tagged 'latest' based on:\n - it's pre-release status\n - it's version\n - the version of a previously identified container tagged 'latest'\n\n :param pre_rel: Wether or not the version of the new container is a pre-release\n :param new_version: The version of the new container\n :param remote_latest: The version of the previously identified container that's\n already tagged latest or False\n \"\"\"\n # is a pre-release = not latest\n if pre_rel:\n return False\n # + no latest tag found = is latest\n if not remote_latest:\n return True\n # + if remote version is lower than current = is latest, else not latest\n return True if remote_latest <= new_version else False\n\n latest = is_latest(pre_rel, new_version, current_latest)\n minor_latest = is_latest(pre_rel, new_version, current_minor_latest)\n\n github_output = os.environ.get(\"GITHUB_OUTPUT\")\n with open(github_output, \"at\", encoding=\"utf-8\") as gh_output:\n gh_output.write(f\"latest={latest}\")\n gh_output.write(f\"minor_latest={minor_latest}\")\n", "path": ".github/actions/latest-wrangler/main.py"}]} | 1,441 | 377 |
gh_patches_debug_16178 | rasdani/github-patches | git_diff | safe-global__safe-config-service-58 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change default renderer to JSONRenderer
- The renderer in production should show the JSON payload without a browsable API
--- END ISSUE ---
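For orientation, Django REST Framework chooses renderers from the `DEFAULT_RENDERER_CLASSES` setting; a minimal sketch of restricting production to plain JSON (setting name per DRF's documented API, placement in `settings.py` assumed):

```python
# settings.py (sketch): serve raw JSON only, dropping the
# BrowsableAPIRenderer that DRF enables by default.
REST_FRAMEWORK = {
    "DEFAULT_RENDERER_CLASSES": [
        "rest_framework.renderers.JSONRenderer",
    ],
}
```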
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/config/settings.py`
Content:
```
1 """
2 Django settings for safe_client_config_service project.
3
4 Generated by 'django-admin startproject' using Django 3.2.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.2/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/3.2/ref/settings/
11 """
12 import os
13 from distutils.util import strtobool
14 from pathlib import Path
15
16 # Build paths inside the project like this: BASE_DIR / 'subdir'.
17 BASE_DIR = Path(__file__).resolve().parent.parent
18
19 # Quick-start development settings - unsuitable for production
20 # See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/
21
22 # SECURITY WARNING: keep the secret key used in production secret!
23 SECRET_KEY = os.getenv("SECRET_KEY", None)
24
25 # SECURITY WARNING: don't run with debug turned on in production!
26 DEBUG = bool(strtobool(os.getenv("DEBUG", "false")))
27
28 # https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS
29 allowed_hosts = os.getenv("DJANGO_ALLOWED_HOSTS", ".localhost,127.0.0.1,[::1]")
30 ALLOWED_HOSTS = list(map(str.strip, allowed_hosts.split(",")))
31
32 # Application definition
33
34 INSTALLED_APPS = [
35 "safe_apps.apps.AppsConfig",
36 "django.contrib.admin",
37 "django.contrib.auth",
38 "django.contrib.contenttypes",
39 "django.contrib.sessions",
40 "django.contrib.messages",
41 "django.contrib.staticfiles",
42 "rest_framework",
43 ]
44
45 MIDDLEWARE = [
46 "config.middleware.LoggingMiddleware",
47 "django.middleware.security.SecurityMiddleware",
48 "django.contrib.sessions.middleware.SessionMiddleware",
49 "django.middleware.common.CommonMiddleware",
50 "django.middleware.csrf.CsrfViewMiddleware",
51 "django.contrib.auth.middleware.AuthenticationMiddleware",
52 "django.contrib.messages.middleware.MessageMiddleware",
53 "django.middleware.clickjacking.XFrameOptionsMiddleware",
54 ]
55
56 CACHES = {
57 "default": {
58 "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
59 },
60 "safe-apps": {
61 "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
62 },
63 }
64
65 LOGGING = {
66 "version": 1,
67 "disable_existing_loggers": False,
68 "formatters": {
69 "short": {"format": "%(asctime)s %(message)s"},
70 "verbose": {
71 "format": "%(asctime)s [%(levelname)s] [%(processName)s] %(message)s"
72 },
73 },
74 "handlers": {
75 "console": {
76 "class": "logging.StreamHandler",
77 "formatter": "verbose",
78 },
79 "console_short": {
80 "class": "logging.StreamHandler",
81 "formatter": "short",
82 },
83 },
84 "root": {
85 "handlers": ["console"],
86 "level": os.getenv("ROOT_LOG_LEVEL", "INFO"),
87 },
88 "loggers": {
89 "LoggingMiddleware": {
90 "handlers": ["console_short"],
91 "level": "INFO",
92 "propagate": False,
93 },
94 },
95 }
96
97 ROOT_URLCONF = "config.urls"
98
99 TEMPLATES = [
100 {
101 "BACKEND": "django.template.backends.django.DjangoTemplates",
102 "DIRS": [],
103 "APP_DIRS": True,
104 "OPTIONS": {
105 "context_processors": [
106 "django.template.context_processors.debug",
107 "django.template.context_processors.request",
108 "django.contrib.auth.context_processors.auth",
109 "django.contrib.messages.context_processors.messages",
110 ],
111 },
112 },
113 ]
114
115 WSGI_APPLICATION = "config.wsgi.application"
116
117 # Database
118 # https://docs.djangoproject.com/en/3.2/ref/settings/#databases
119
120 DATABASES = {
121 "default": {
122 "ENGINE": "django.db.backends.postgresql",
123 "NAME": os.getenv("POSTGRES_NAME", "postgres"),
124 "USER": os.getenv("POSTGRES_USER", "postgres"),
125 "PASSWORD": os.getenv("POSTGRES_PASSWORD", "postgres"),
126 "HOST": os.getenv("POSTGRES_HOST", "db"),
127 "PORT": os.getenv("POSTGRES_PORT", "5432"),
128 }
129 }
130
131 # Password validation
132 # https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators
133
134 AUTH_PASSWORD_VALIDATORS = [
135 {
136 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
137 },
138 {
139 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
140 },
141 {
142 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
143 },
144 {
145 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
146 },
147 ]
148
149 # Internationalization
150 # https://docs.djangoproject.com/en/3.2/topics/i18n/
151
152 LANGUAGE_CODE = "en-us"
153
154 TIME_ZONE = "UTC"
155
156 USE_I18N = True
157
158 USE_L10N = True
159
160 USE_TZ = True
161
162 # Static files (CSS, JavaScript, Images)
163 # https://docs.djangoproject.com/en/3.2/howto/static-files/
164
165 STATIC_URL = "/static/"
166
167 # Default primary key field type
168 # https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field
169
170 DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
171
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/config/settings.py b/src/config/settings.py
--- a/src/config/settings.py
+++ b/src/config/settings.py
@@ -27,10 +27,21 @@
# https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS
allowed_hosts = os.getenv("DJANGO_ALLOWED_HOSTS", ".localhost,127.0.0.1,[::1]")
-ALLOWED_HOSTS = list(map(str.strip, allowed_hosts.split(",")))
+ALLOWED_HOSTS = [allowed_host.strip() for allowed_host in allowed_hosts.split(",")]
# Application definition
+default_renderer_classes = os.getenv(
+ "REST_DEFAULT_RENDERER_CLASSES", "rest_framework.renderers.JSONRenderer"
+)
+REST_FRAMEWORK = {
+ # https://www.django-rest-framework.org/api-guide/renderers/
+ "DEFAULT_RENDERER_CLASSES": [
+ default_renderer_class.strip()
+ for default_renderer_class in default_renderer_classes.split(",")
+ ]
+}
+
INSTALLED_APPS = [
"safe_apps.apps.AppsConfig",
"django.contrib.admin",
| {"golden_diff": "diff --git a/src/config/settings.py b/src/config/settings.py\n--- a/src/config/settings.py\n+++ b/src/config/settings.py\n@@ -27,10 +27,21 @@\n \n # https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS\n allowed_hosts = os.getenv(\"DJANGO_ALLOWED_HOSTS\", \".localhost,127.0.0.1,[::1]\")\n-ALLOWED_HOSTS = list(map(str.strip, allowed_hosts.split(\",\")))\n+ALLOWED_HOSTS = [allowed_host.strip() for allowed_host in allowed_hosts.split(\",\")]\n \n # Application definition\n \n+default_renderer_classes = os.getenv(\n+ \"REST_DEFAULT_RENDERER_CLASSES\", \"rest_framework.renderers.JSONRenderer\"\n+)\n+REST_FRAMEWORK = {\n+ # https://www.django-rest-framework.org/api-guide/renderers/\n+ \"DEFAULT_RENDERER_CLASSES\": [\n+ default_renderer_class.strip()\n+ for default_renderer_class in default_renderer_classes.split(\",\")\n+ ]\n+}\n+\n INSTALLED_APPS = [\n \"safe_apps.apps.AppsConfig\",\n \"django.contrib.admin\",\n", "issue": "Change default renderer to JSONRenderer\n- The renderer in production should show the json payload without a browsable API\n", "before_files": [{"content": "\"\"\"\nDjango settings for safe_client_config_service project.\n\nGenerated by 'django-admin startproject' using Django 3.2.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.2/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.2/ref/settings/\n\"\"\"\nimport os\nfrom distutils.util import strtobool\nfrom pathlib import Path\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.getenv(\"SECRET_KEY\", None)\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = bool(strtobool(os.getenv(\"DEBUG\", \"false\")))\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS\nallowed_hosts = os.getenv(\"DJANGO_ALLOWED_HOSTS\", \".localhost,127.0.0.1,[::1]\")\nALLOWED_HOSTS = list(map(str.strip, allowed_hosts.split(\",\")))\n\n# Application definition\n\nINSTALLED_APPS = [\n \"safe_apps.apps.AppsConfig\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n]\n\nMIDDLEWARE = [\n \"config.middleware.LoggingMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nCACHES = {\n \"default\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n \"safe-apps\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n}\n\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"short\": {\"format\": \"%(asctime)s %(message)s\"},\n \"verbose\": {\n \"format\": \"%(asctime)s [%(levelname)s] [%(processName)s] %(message)s\"\n },\n },\n \"handlers\": {\n \"console\": {\n \"class\": 
\"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n },\n \"console_short\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"short\",\n },\n },\n \"root\": {\n \"handlers\": [\"console\"],\n \"level\": os.getenv(\"ROOT_LOG_LEVEL\", \"INFO\"),\n },\n \"loggers\": {\n \"LoggingMiddleware\": {\n \"handlers\": [\"console_short\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n },\n}\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": os.getenv(\"POSTGRES_NAME\", \"postgres\"),\n \"USER\": os.getenv(\"POSTGRES_USER\", \"postgres\"),\n \"PASSWORD\": os.getenv(\"POSTGRES_PASSWORD\", \"postgres\"),\n \"HOST\": os.getenv(\"POSTGRES_HOST\", \"db\"),\n \"PORT\": os.getenv(\"POSTGRES_PORT\", \"5432\"),\n }\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.2/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.2/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\n# Default primary key field type\n# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field\n\nDEFAULT_AUTO_FIELD = \"django.db.models.BigAutoField\"\n", "path": "src/config/settings.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for safe_client_config_service project.\n\nGenerated by 'django-admin startproject' using Django 3.2.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.2/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.2/ref/settings/\n\"\"\"\nimport os\nfrom distutils.util import strtobool\nfrom pathlib import Path\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.getenv(\"SECRET_KEY\", None)\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = bool(strtobool(os.getenv(\"DEBUG\", \"false\")))\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#std:setting-ALLOWED_HOSTS\nallowed_hosts = os.getenv(\"DJANGO_ALLOWED_HOSTS\", \".localhost,127.0.0.1,[::1]\")\nALLOWED_HOSTS = [allowed_host.strip() for allowed_host in 
allowed_hosts.split(\",\")]\n\n# Application definition\n\ndefault_renderer_classes = os.getenv(\n \"REST_DEFAULT_RENDERER_CLASSES\", \"rest_framework.renderers.JSONRenderer\"\n)\nREST_FRAMEWORK = {\n # https://www.django-rest-framework.org/api-guide/renderers/\n \"DEFAULT_RENDERER_CLASSES\": [\n default_renderer_class.strip()\n for default_renderer_class in default_renderer_classes.split(\",\")\n ]\n}\n\nINSTALLED_APPS = [\n \"safe_apps.apps.AppsConfig\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n]\n\nMIDDLEWARE = [\n \"config.middleware.LoggingMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nCACHES = {\n \"default\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n \"safe-apps\": {\n \"BACKEND\": \"django.core.cache.backends.locmem.LocMemCache\",\n },\n}\n\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"short\": {\"format\": \"%(asctime)s %(message)s\"},\n \"verbose\": {\n \"format\": \"%(asctime)s [%(levelname)s] [%(processName)s] %(message)s\"\n },\n },\n \"handlers\": {\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"verbose\",\n },\n \"console_short\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"short\",\n },\n },\n \"root\": {\n \"handlers\": [\"console\"],\n \"level\": os.getenv(\"ROOT_LOG_LEVEL\", \"INFO\"),\n },\n \"loggers\": {\n \"LoggingMiddleware\": {\n \"handlers\": [\"console_short\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n },\n}\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": os.getenv(\"POSTGRES_NAME\", \"postgres\"),\n \"USER\": os.getenv(\"POSTGRES_USER\", \"postgres\"),\n \"PASSWORD\": os.getenv(\"POSTGRES_PASSWORD\", \"postgres\"),\n \"HOST\": os.getenv(\"POSTGRES_HOST\", \"db\"),\n \"PORT\": os.getenv(\"POSTGRES_PORT\", \"5432\"),\n }\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n# Internationalization\n# 
https://docs.djangoproject.com/en/3.2/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.2/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\n# Default primary key field type\n# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field\n\nDEFAULT_AUTO_FIELD = \"django.db.models.BigAutoField\"\n", "path": "src/config/settings.py"}]} | 1,826 | 236 |
gh_patches_debug_53536 | rasdani/github-patches | git_diff | quantumlib__Cirq-2374 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cirq should ship a py.typed file
[PEP 561](https://www.python.org/dev/peps/pep-0561/) says that any packages that ship with type information should have a py.typed file in their package. Otherwise, type checkers like mypy can't find Cirq. (FWIW I just did `touch ~/.virtualenvs/.../cirq/py.typed`, and then mypy type-checks the file correctly.)
Other than that, Cirq seems pretty awesome so far :ok_hand:.
--- END ISSUE ---
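For context, PEP 561 only requires that an (empty) `py.typed` marker exist inside the package and be shipped with it; a minimal packaging sketch, with names illustrative rather than taken from Cirq's actual `setup.py`:

```python
# setup.py (sketch): ship the PEP 561 marker so type checkers pick up
# the package's inline annotations. The marker itself is an empty file
# created with e.g. `touch cirq/py.typed`.
from setuptools import setup

setup(
    name="cirq",
    packages=["cirq"],
    package_data={"cirq": ["py.typed"]},  # include the marker in sdist/wheel
)
```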
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17 from setuptools import find_packages, setup
18
19 # This reads the __version__ variable from cirq/_version.py
20 __version__ = ''
21 exec(open('cirq/_version.py').read())
22
23 name = 'cirq'
24
25 description = ('A framework for creating, editing, and invoking '
26 'Noisy Intermediate Scale Quantum (NISQ) circuits.')
27
28 # README file as long_description.
29 long_description = io.open('README.rst', encoding='utf-8').read()
30
31 # If CIRQ_DEV_VERSION is set then we use cirq-dev as the name of the package
32 # and update the version to this value.
33 if 'CIRQ_DEV_VERSION' in os.environ:
34 name = 'cirq-dev'
35 __version__ = os.environ['CIRQ_DEV_VERSION']
36 long_description = (
37 "**This is a development version of Cirq and may be "
38 "unstable.**\n\n**For the latest stable release of Cirq "
39 "see**\n`here <https://pypi.org/project/cirq>`__.\n\n" +
40 long_description)
41
42 # Read in requirements
43 requirements = open('requirements.txt').readlines()
44 requirements = [r.strip() for r in requirements]
45 contrib_requirements = open('cirq/contrib/contrib-requirements.txt').readlines()
46 contrib_requirements = [r.strip() for r in contrib_requirements]
47 dev_requirements = open('dev_tools/conf/pip-list-dev-tools.txt').readlines()
48 dev_requirements = [r.strip() for r in dev_requirements]
49
50 cirq_packages = ['cirq'] + [
51 'cirq.' + package for package in find_packages(where='cirq')
52 ]
53
54 # Sanity check
55 assert __version__, 'Version string cannot be empty'
56
57 setup(name=name,
58 version=__version__,
59 url='http://github.com/quantumlib/cirq',
60 author='The Cirq Developers',
61 author_email='[email protected]',
62 python_requires=('>=3.6.0'),
63 install_requires=requirements,
64 extras_require={
65 'contrib': contrib_requirements,
66 'dev_env': dev_requirements + contrib_requirements,
67 },
68 license='Apache 2',
69 description=description,
70 long_description=long_description,
71 packages=cirq_packages,
72 package_data={
73 'cirq.api.google.v1': ['*.proto'],
74 'cirq.api.google.v2': ['*.proto'],
75 'cirq.google.api.v1': ['*.proto'],
76 'cirq.google.api.v2': ['*.proto'],
77 })
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -70,6 +70,7 @@
long_description=long_description,
packages=cirq_packages,
package_data={
+ 'cirq': ['py.typed'],
'cirq.api.google.v1': ['*.proto'],
'cirq.api.google.v2': ['*.proto'],
'cirq.google.api.v1': ['*.proto'],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -70,6 +70,7 @@\n long_description=long_description,\n packages=cirq_packages,\n package_data={\n+ 'cirq': ['py.typed'],\n 'cirq.api.google.v1': ['*.proto'],\n 'cirq.api.google.v2': ['*.proto'],\n 'cirq.google.api.v1': ['*.proto'],\n", "issue": "Cirq should ship a py.typed file\n[PEP 561](https://www.python.org/dev/peps/pep-0561/) says that any packages that ship with type information should have a py.typed file in their package. Otherwise, type checkers like mypy can't find Cirq. (FWIW I just did `touch ~/.virtualenvs/.../cirq/py.typed`, and then mypy type-checks the file correctly.)\r\n\r\nOther than that, Cirq seems pretty awesome so far :ok_hand:.\nCirq should ship a py.typed file\n[PEP 561](https://www.python.org/dev/peps/pep-0561/) says that any packages that ship with type information should have a py.typed file in their package. Otherwise, type checkers like mypy can't find Cirq. (FWIW I just did `touch ~/.virtualenvs/.../cirq/py.typed`, and then mypy type-checks the file correctly.)\r\n\r\nOther than that, Cirq seems pretty awesome so far :ok_hand:.\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\nfrom setuptools import find_packages, setup\n\n# This reads the __version__ variable from cirq/_version.py\n__version__ = ''\nexec(open('cirq/_version.py').read())\n\nname = 'cirq'\n\ndescription = ('A framework for creating, editing, and invoking '\n 'Noisy Intermediate Scale Quantum (NISQ) circuits.')\n\n# README file as long_description.\nlong_description = io.open('README.rst', encoding='utf-8').read()\n\n# If CIRQ_DEV_VERSION is set then we use cirq-dev as the name of the package\n# and update the version to this value.\nif 'CIRQ_DEV_VERSION' in os.environ:\n name = 'cirq-dev'\n __version__ = os.environ['CIRQ_DEV_VERSION']\n long_description = (\n \"**This is a development version of Cirq and may be \"\n \"unstable.**\\n\\n**For the latest stable release of Cirq \"\n \"see**\\n`here <https://pypi.org/project/cirq>`__.\\n\\n\" +\n long_description)\n\n# Read in requirements\nrequirements = open('requirements.txt').readlines()\nrequirements = [r.strip() for r in requirements]\ncontrib_requirements = open('cirq/contrib/contrib-requirements.txt').readlines()\ncontrib_requirements = [r.strip() for r in contrib_requirements]\ndev_requirements = open('dev_tools/conf/pip-list-dev-tools.txt').readlines()\ndev_requirements = [r.strip() for r in dev_requirements]\n\ncirq_packages = ['cirq'] + [\n 'cirq.' 
+ package for package in find_packages(where='cirq')\n]\n\n# Sanity check\nassert __version__, 'Version string cannot be empty'\n\nsetup(name=name,\n version=__version__,\n url='http://github.com/quantumlib/cirq',\n author='The Cirq Developers',\n author_email='[email protected]',\n python_requires=('>=3.6.0'),\n install_requires=requirements,\n extras_require={\n 'contrib': contrib_requirements,\n 'dev_env': dev_requirements + contrib_requirements,\n },\n license='Apache 2',\n description=description,\n long_description=long_description,\n packages=cirq_packages,\n package_data={\n 'cirq.api.google.v1': ['*.proto'],\n 'cirq.api.google.v2': ['*.proto'],\n 'cirq.google.api.v1': ['*.proto'],\n 'cirq.google.api.v2': ['*.proto'],\n })\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\nfrom setuptools import find_packages, setup\n\n# This reads the __version__ variable from cirq/_version.py\n__version__ = ''\nexec(open('cirq/_version.py').read())\n\nname = 'cirq'\n\ndescription = ('A framework for creating, editing, and invoking '\n 'Noisy Intermediate Scale Quantum (NISQ) circuits.')\n\n# README file as long_description.\nlong_description = io.open('README.rst', encoding='utf-8').read()\n\n# If CIRQ_DEV_VERSION is set then we use cirq-dev as the name of the package\n# and update the version to this value.\nif 'CIRQ_DEV_VERSION' in os.environ:\n name = 'cirq-dev'\n __version__ = os.environ['CIRQ_DEV_VERSION']\n long_description = (\n \"**This is a development version of Cirq and may be \"\n \"unstable.**\\n\\n**For the latest stable release of Cirq \"\n \"see**\\n`here <https://pypi.org/project/cirq>`__.\\n\\n\" +\n long_description)\n\n# Read in requirements\nrequirements = open('requirements.txt').readlines()\nrequirements = [r.strip() for r in requirements]\ncontrib_requirements = open('cirq/contrib/contrib-requirements.txt').readlines()\ncontrib_requirements = [r.strip() for r in contrib_requirements]\ndev_requirements = open('dev_tools/conf/pip-list-dev-tools.txt').readlines()\ndev_requirements = [r.strip() for r in dev_requirements]\n\ncirq_packages = ['cirq'] + [\n 'cirq.' + package for package in find_packages(where='cirq')\n]\n\n# Sanity check\nassert __version__, 'Version string cannot be empty'\n\nsetup(name=name,\n version=__version__,\n url='http://github.com/quantumlib/cirq',\n author='The Cirq Developers',\n author_email='[email protected]',\n python_requires=('>=3.6.0'),\n install_requires=requirements,\n extras_require={\n 'contrib': contrib_requirements,\n 'dev_env': dev_requirements + contrib_requirements,\n },\n license='Apache 2',\n description=description,\n long_description=long_description,\n packages=cirq_packages,\n package_data={\n 'cirq': ['py.typed'],\n 'cirq.api.google.v1': ['*.proto'],\n 'cirq.api.google.v2': ['*.proto'],\n 'cirq.google.api.v1': ['*.proto'],\n 'cirq.google.api.v2': ['*.proto'],\n })\n", "path": "setup.py"}]} | 1,312 | 99 |
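
The fix in the row above is the standard PEP 561 packaging pattern: an empty `py.typed` marker file shipped via `package_data`. A minimal standalone sketch of that pattern, using a hypothetical package name `mypkg` (not Cirq's actual build):

```python
# Minimal PEP 561 sketch (hypothetical package "mypkg").
# An empty mypkg/py.typed file must exist on disk; listing it in
# package_data ships it with the wheel so type checkers such as mypy
# will read the package's inline annotations.
from setuptools import find_packages, setup

setup(
    name='mypkg',
    version='0.1.0',
    packages=find_packages(),
    package_data={'mypkg': ['py.typed']},  # the PEP 561 marker
    zip_safe=False,  # mypy cannot read py.typed out of zipped eggs
)
```
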
gh_patches_debug_8566 | rasdani/github-patches | git_diff | great-expectations__great_expectations-1229 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Great Expectations is still marked as Python 2 compatible
It looks like running `pip install great_expectations==0.9.7` in a Python 2 environment starts working, before failing when pulling `marshmallow`. This is expected since this PR: https://github.com/great-expectations/great_expectations/pull/1187 but on PyPI, GE is still marked as Python 2 compatible because of the `setup.py` file.
I'm opening a PR that fixes this in a sec, but feel free to close if I'm missing something! :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2 import versioneer
3
4 # Parse requirements.txt
5 with open('requirements.txt') as f:
6 required = f.read().splitlines()
7
8 #try:
9 # import pypandoc
10 # long_description = pypandoc.convert_file('README.md', 'rst')
11 #except (IOError, ImportError):
12 long_description = 'Always know what to expect from your data. (See https://github.com/great-expectations/great_expectations for full description).'
13
14 config = {
15 'description': 'Always know what to expect from your data.',
16 'author': 'The Great Expectations Team',
17 'url': 'https://github.com/great-expectations/great_expectations',
18 'author_email': '[email protected]',
19 'version': versioneer.get_version(),
20 'cmdclass': versioneer.get_cmdclass(),
21 'install_requires': required,
22 'extras_require': {
23 'spark': ['pyspark>=2.3.2'],
24 'sqlalchemy': ['sqlalchemy>=1.2'],
25 'airflow': ['apache-airflow[s3]>=1.9.0', 'boto3>=1.7.3']
26 },
27 'packages': find_packages(exclude=['docs', 'tests', 'examples']),
28 'entry_points': {
29 'console_scripts': ['great_expectations=great_expectations.cli:main']
30 },
31 'name': 'great_expectations',
32 'long_description': long_description,
33 'license': 'Apache-2.0',
34 'keywords': 'data science testing pipeline data quality dataquality validation datavalidation',
35 'include_package_data': True,
36 'classifiers': [
37 'Development Status :: 4 - Beta',
38 'Intended Audience :: Developers',
39 'Intended Audience :: Science/Research',
40 'Intended Audience :: Other Audience',
41 'Topic :: Scientific/Engineering',
42 'Topic :: Software Development',
43 'Topic :: Software Development :: Testing',
44 'License :: OSI Approved :: Apache Software License',
45 'Programming Language :: Python :: 2',
46 'Programming Language :: Python :: 2.7',
47 'Programming Language :: Python :: 3',
48 'Programming Language :: Python :: 3.6',
49 'Programming Language :: Python :: 3.7',
50 ]
51 }
52
53 setup(**config)
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -42,8 +42,6 @@
'Topic :: Software Development',
'Topic :: Software Development :: Testing',
'License :: OSI Approved :: Apache Software License',
- 'Programming Language :: Python :: 2',
- 'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -42,8 +42,6 @@\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Testing',\n 'License :: OSI Approved :: Apache Software License',\n- 'Programming Language :: Python :: 2',\n- 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n", "issue": "Great Expectations is still marked as Python 2 compatible\nIt looks like running `pip install great_expectations==0.9.7` in a Python 2 environment starts working, before failing when pulling `marshmallow`. This is expected since this PR: https://github.com/great-expectations/great_expectations/pull/1187 but on PyPI, GE is still marked as Python 2 compatible because of the `setup.py` file.\r\n\r\nI'm opening a PR that fixes this in a sec, but feel free to close if I'm missing something! :)\n", "before_files": [{"content": "from setuptools import setup, find_packages\nimport versioneer\n\n# Parse requirements.txt\nwith open('requirements.txt') as f:\n required = f.read().splitlines()\n\n#try:\n# import pypandoc\n# long_description = pypandoc.convert_file('README.md', 'rst')\n#except (IOError, ImportError):\nlong_description = 'Always know what to expect from your data. (See https://github.com/great-expectations/great_expectations for full description).'\n\nconfig = {\n 'description': 'Always know what to expect from your data.',\n 'author': 'The Great Expectations Team',\n 'url': 'https://github.com/great-expectations/great_expectations',\n 'author_email': '[email protected]',\n 'version': versioneer.get_version(),\n 'cmdclass': versioneer.get_cmdclass(),\n 'install_requires': required,\n 'extras_require': {\n 'spark': ['pyspark>=2.3.2'],\n 'sqlalchemy': ['sqlalchemy>=1.2'],\n 'airflow': ['apache-airflow[s3]>=1.9.0', 'boto3>=1.7.3']\n },\n 'packages': find_packages(exclude=['docs', 'tests', 'examples']),\n 'entry_points': {\n 'console_scripts': ['great_expectations=great_expectations.cli:main']\n },\n 'name': 'great_expectations',\n 'long_description': long_description,\n 'license': 'Apache-2.0',\n 'keywords': 'data science testing pipeline data quality dataquality validation datavalidation',\n 'include_package_data': True,\n 'classifiers': [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Other Audience',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Testing',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ]\n}\n\nsetup(**config)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\nimport versioneer\n\n# Parse requirements.txt\nwith open('requirements.txt') as f:\n required = f.read().splitlines()\n\n#try:\n# import pypandoc\n# long_description = pypandoc.convert_file('README.md', 'rst')\n#except (IOError, ImportError):\nlong_description = 'Always know what to expect from your data. 
(See https://github.com/great-expectations/great_expectations for full description).'\n\nconfig = {\n 'description': 'Always know what to expect from your data.',\n 'author': 'The Great Expectations Team',\n 'url': 'https://github.com/great-expectations/great_expectations',\n 'author_email': '[email protected]',\n 'version': versioneer.get_version(),\n 'cmdclass': versioneer.get_cmdclass(),\n 'install_requires': required,\n 'extras_require': {\n 'spark': ['pyspark>=2.3.2'],\n 'sqlalchemy': ['sqlalchemy>=1.2'],\n 'airflow': ['apache-airflow[s3]>=1.9.0', 'boto3>=1.7.3']\n },\n 'packages': find_packages(exclude=['docs', 'tests', 'examples']),\n 'entry_points': {\n 'console_scripts': ['great_expectations=great_expectations.cli:main']\n },\n 'name': 'great_expectations',\n 'long_description': long_description,\n 'license': 'Apache-2.0',\n 'keywords': 'data science testing pipeline data quality dataquality validation datavalidation',\n 'include_package_data': True,\n 'classifiers': [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Other Audience',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Testing',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ]\n}\n\nsetup(**config)\n", "path": "setup.py"}]} | 975 | 117 |
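
Worth noting alongside this row: trove classifiers are advisory metadata, so dropping the Python 2 entries only fixes the PyPI display. What actually makes pip refuse a Python 2 install is `python_requires`, sketched below for illustration; it is an assumption here, not part of the patch above:

```python
# Sketch only: python_requires is the enforcement mechanism, while
# classifiers are display metadata. Hypothetical package name.
from setuptools import setup

setup(
    name='example-pkg',            # hypothetical
    version='0.9.7',
    python_requires='>=3.6',       # pip 9+ rejects installs on Python 2
    classifiers=[
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
    ],
)
```
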
gh_patches_debug_6345 | rasdani/github-patches | git_diff | carpentries__amy-743 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Escaped tags in every revision page

Escaped tags in every revision page

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `workshops/templatetags/diff.py`
Content:
```
1 from django import template
2
3 from reversion.helpers import generate_patch_html
4
5 register = template.Library()
6
7
8 @register.simple_tag
9 def semantic_diff(left, right, field):
10 return generate_patch_html(left, right, field, cleanup='semantic')
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/workshops/templatetags/diff.py b/workshops/templatetags/diff.py
--- a/workshops/templatetags/diff.py
+++ b/workshops/templatetags/diff.py
@@ -1,4 +1,5 @@
from django import template
+from django.utils.safestring import mark_safe
from reversion.helpers import generate_patch_html
@@ -7,4 +8,4 @@
@register.simple_tag
def semantic_diff(left, right, field):
- return generate_patch_html(left, right, field, cleanup='semantic')
+ return mark_safe(generate_patch_html(left, right, field, cleanup='semantic'))
| {"golden_diff": "diff --git a/workshops/templatetags/diff.py b/workshops/templatetags/diff.py\n--- a/workshops/templatetags/diff.py\n+++ b/workshops/templatetags/diff.py\n@@ -1,4 +1,5 @@\n from django import template\n+from django.utils.safestring import mark_safe\n \n from reversion.helpers import generate_patch_html\n \n@@ -7,4 +8,4 @@\n \n @register.simple_tag\n def semantic_diff(left, right, field):\n- return generate_patch_html(left, right, field, cleanup='semantic')\n+ return mark_safe(generate_patch_html(left, right, field, cleanup='semantic'))\n", "issue": "Escaped tags in every revision page\n\n\nEscaped tags in every revision page\n\n\n", "before_files": [{"content": "from django import template\n\nfrom reversion.helpers import generate_patch_html\n\nregister = template.Library()\n\n\[email protected]_tag\ndef semantic_diff(left, right, field):\n return generate_patch_html(left, right, field, cleanup='semantic')\n", "path": "workshops/templatetags/diff.py"}], "after_files": [{"content": "from django import template\nfrom django.utils.safestring import mark_safe\n\nfrom reversion.helpers import generate_patch_html\n\nregister = template.Library()\n\n\[email protected]_tag\ndef semantic_diff(left, right, field):\n return mark_safe(generate_patch_html(left, right, field, cleanup='semantic'))\n", "path": "workshops/templatetags/diff.py"}]} | 507 | 147 |
gh_patches_debug_5199 | rasdani/github-patches | git_diff | PrefectHQ__prefect-1165 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot raise a skip signal with a result
I am filing an issue by suggestion of @cicdw after a conversation on gitter.
I came up with the following use case: a task that raises a skip signal with a result because its logic has detected that there is no work to do and the result is already calculated somewhere. I could just return it, but it would be useful for me to know that the _heavy_ part of the task did not actually execute.
An example of the use case would be:
```python
from prefect import task, Flow
from prefect.engine import signals
@task
def test_skipped():
raise signals.SKIP('skipping', result=5)
f = Flow("test", tasks=[test_skipped])
flow_state = f.run()
```
which fails because of how the `PrefectStateSignal` constructor handles its initialization:
```
Traceback (most recent call last):
File ".../prefect/engine/signals.py", line 27, in __init__
result=self, message=message, *args, **kwargs
TypeError: type object got multiple values for keyword argument 'result'
```
Chris suggested the following workaround, which works correctly, but still pointed out that the case above should work.
```python
from prefect import task, Flow
from prefect.engine.runner import ENDRUN
from prefect.engine.state import Skipped
@task
def test_skipped():
skip = Skipped("skipping", result=5)
raise ENDRUN(state=skip)
f = Flow("test", tasks=[test_skipped])
flow_state = f.run()
flow_state.result[test_skipped].result # 5
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/engine/signals.py`
Content:
```
1 """
2 These Exceptions, when raised, are used to signal state changes when tasks or flows are running. Signals
3 are used in TaskRunners and FlowRunners as a way of communicating the changes in states.
4 """
5
6 from prefect.engine import state
7 from prefect.utilities.exceptions import PrefectError
8
9
10 class PrefectStateSignal(PrefectError):
11 """
12 Create a new PrefectStateSignal object.
13
14 Args:
15 - message (Any, optional): Defaults to `None`. A message about the signal.
16 - *args (Any, optional): additional arguments to pass to this Signal's
17 associated state constructor
18 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
19 associated state constructor
20 """
21
22 _state_cls = state.State
23
24 def __init__(self, message: str = None, *args, **kwargs): # type: ignore
25 super().__init__(message) # type: ignore
26 self.state = self._state_cls( # type: ignore
27 result=self, message=message, *args, **kwargs
28 )
29
30
31 class FAIL(PrefectStateSignal):
32 """
33 Indicates that a task failed.
34
35 Args:
36 - message (Any, optional): Defaults to `None`. A message about the signal.
37 - *args (Any, optional): additional arguments to pass to this Signal's
38 associated state constructor
39 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
40 associated state constructor
41 """
42
43 _state_cls = state.Failed
44
45
46 class TRIGGERFAIL(FAIL):
47 """
48 Indicates that a task trigger failed.
49
50 Args:
51 - message (Any, optional): Defaults to `None`. A message about the signal.
52 - *args (Any, optional): additional arguments to pass to this Signal's
53 associated state constructor
54 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
55 associated state constructor
56 """
57
58 _state_cls = state.TriggerFailed
59
60
61 class SUCCESS(PrefectStateSignal):
62 """
63 Indicates that a task succeeded.
64
65 Args:
66 - message (Any, optional): Defaults to `None`. A message about the signal.
67 - *args (Any, optional): additional arguments to pass to this Signal's
68 associated state constructor
69 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
70 associated state constructor
71 """
72
73 _state_cls = state.Success
74
75
76 class RETRY(PrefectStateSignal):
77 """
78 Used to indicate that a task should be retried.
79
80 Args:
81 - message (Any, optional): Defaults to `None`. A message about the signal.
82 - *args (Any, optional): additional arguments to pass to this Signal's
83 associated state constructor
84 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
85 associated state constructor
86 """
87
88 _state_cls = state.Retrying
89
90
91 class SKIP(PrefectStateSignal):
92 """
93 Indicates that a task was skipped. By default, downstream tasks will
94 act as if skipped tasks succeeded.
95
96 Args:
97 - message (Any, optional): Defaults to `None`. A message about the signal.
98 - *args (Any, optional): additional arguments to pass to this Signal's
99 associated state constructor
100 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
101 associated state constructor
102 """
103
104 _state_cls = state.Skipped
105
106
107 class PAUSE(PrefectStateSignal):
108 """
109 Indicates that a task should not run and wait for manual execution.
110
111 Args:
112 - message (Any, optional): Defaults to `None`. A message about the signal.
113 - *args (Any, optional): additional arguments to pass to this Signal's
114 associated state constructor
115 - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's
116 associated state constructor
117 """
118
119 _state_cls = state.Paused
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/prefect/engine/signals.py b/src/prefect/engine/signals.py
--- a/src/prefect/engine/signals.py
+++ b/src/prefect/engine/signals.py
@@ -23,8 +23,9 @@
def __init__(self, message: str = None, *args, **kwargs): # type: ignore
super().__init__(message) # type: ignore
+ kwargs.setdefault("result", self)
self.state = self._state_cls( # type: ignore
- result=self, message=message, *args, **kwargs
+ message=message, *args, **kwargs
)
| {"golden_diff": "diff --git a/src/prefect/engine/signals.py b/src/prefect/engine/signals.py\n--- a/src/prefect/engine/signals.py\n+++ b/src/prefect/engine/signals.py\n@@ -23,8 +23,9 @@\n \n def __init__(self, message: str = None, *args, **kwargs): # type: ignore\n super().__init__(message) # type: ignore\n+ kwargs.setdefault(\"result\", self)\n self.state = self._state_cls( # type: ignore\n- result=self, message=message, *args, **kwargs\n+ message=message, *args, **kwargs\n )\n", "issue": "Cannot raise a skip signal with a result\nI am filing an issue by suggestion of @cicdw after a conversation on gitter.\r\nI came up with the following use case: a task that raises a skip signal with a result because its logic has detected that there is no work to do and the result is already calculated somewhere. I could just return it, but it would be useful for me to know that the _heavy_ part of the task did not actually execute.\r\n\r\nAn example of the use case would be:\r\n\r\n```python\r\nfrom prefect import task, Flow\r\nfrom prefect.engine import signals\r\n\r\n@task\r\ndef test_skipped():\r\n raise signals.SKIP('skipping', result=5)\r\n\r\nf = Flow(\"test\", tasks=[test_skipped])\r\nflow_state = f.run()\r\n```\r\n\r\nwhich fails because of how the `PrefectStateSignal` constructor handles its initialization:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \".../prefect/engine/signals.py\", line 27, in __init__\r\n result=self, message=message, *args, **kwargs\r\nTypeError: type object got multiple values for keyword argument 'result'\r\n```\r\n\r\nChris suggested the following workaround, which works correctly, but still pointed out that the case above should work.\r\n\r\n```python\r\nfrom prefect import task, Flow\r\nfrom prefect.engine.runner import ENDRUN\r\nfrom prefect.engine.state import Skipped\r\n\r\n@task\r\ndef test_skipped():\r\n skip = Skipped(\"skipping\", result=5)\r\n raise ENDRUN(state=skip)\r\n\r\nf = Flow(\"test\", tasks=[test_skipped])\r\nflow_state = f.run()\r\n\r\nflow_state.result[test_skipped].result # 5\r\n```\n", "before_files": [{"content": "\"\"\"\nThese Exceptions, when raised, are used to signal state changes when tasks or flows are running. Signals\nare used in TaskRunners and FlowRunners as a way of communicating the changes in states.\n\"\"\"\n\nfrom prefect.engine import state\nfrom prefect.utilities.exceptions import PrefectError\n\n\nclass PrefectStateSignal(PrefectError):\n \"\"\"\n Create a new PrefectStateSignal object.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.State\n\n def __init__(self, message: str = None, *args, **kwargs): # type: ignore\n super().__init__(message) # type: ignore\n self.state = self._state_cls( # type: ignore\n result=self, message=message, *args, **kwargs\n )\n\n\nclass FAIL(PrefectStateSignal):\n \"\"\"\n Indicates that a task failed.\n\n Args:\n - message (Any, optional): Defaults to `None`. 
A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Failed\n\n\nclass TRIGGERFAIL(FAIL):\n \"\"\"\n Indicates that a task trigger failed.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.TriggerFailed\n\n\nclass SUCCESS(PrefectStateSignal):\n \"\"\"\n Indicates that a task succeeded.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Success\n\n\nclass RETRY(PrefectStateSignal):\n \"\"\"\n Used to indicate that a task should be retried.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Retrying\n\n\nclass SKIP(PrefectStateSignal):\n \"\"\"\n Indicates that a task was skipped. By default, downstream tasks will\n act as if skipped tasks succeeded.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Skipped\n\n\nclass PAUSE(PrefectStateSignal):\n \"\"\"\n Indicates that a task should not run and wait for manual execution.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Paused\n", "path": "src/prefect/engine/signals.py"}], "after_files": [{"content": "\"\"\"\nThese Exceptions, when raised, are used to signal state changes when tasks or flows are running. Signals\nare used in TaskRunners and FlowRunners as a way of communicating the changes in states.\n\"\"\"\n\nfrom prefect.engine import state\nfrom prefect.utilities.exceptions import PrefectError\n\n\nclass PrefectStateSignal(PrefectError):\n \"\"\"\n Create a new PrefectStateSignal object.\n\n Args:\n - message (Any, optional): Defaults to `None`. 
A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.State\n\n def __init__(self, message: str = None, *args, **kwargs): # type: ignore\n super().__init__(message) # type: ignore\n kwargs.setdefault(\"result\", self)\n self.state = self._state_cls( # type: ignore\n message=message, *args, **kwargs\n )\n\n\nclass FAIL(PrefectStateSignal):\n \"\"\"\n Indicates that a task failed.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Failed\n\n\nclass TRIGGERFAIL(FAIL):\n \"\"\"\n Indicates that a task trigger failed.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.TriggerFailed\n\n\nclass SUCCESS(PrefectStateSignal):\n \"\"\"\n Indicates that a task succeeded.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Success\n\n\nclass RETRY(PrefectStateSignal):\n \"\"\"\n Used to indicate that a task should be retried.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Retrying\n\n\nclass SKIP(PrefectStateSignal):\n \"\"\"\n Indicates that a task was skipped. By default, downstream tasks will\n act as if skipped tasks succeeded.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Skipped\n\n\nclass PAUSE(PrefectStateSignal):\n \"\"\"\n Indicates that a task should not run and wait for manual execution.\n\n Args:\n - message (Any, optional): Defaults to `None`. A message about the signal.\n - *args (Any, optional): additional arguments to pass to this Signal's\n associated state constructor\n - **kwargs (Any, optional): additional keyword arguments to pass to this Signal's\n associated state constructor\n \"\"\"\n\n _state_cls = state.Paused\n", "path": "src/prefect/engine/signals.py"}]} | 1,733 | 146 |
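
The crash in this row is plain-Python behavior: passing `result=` explicitly while `**kwargs` also carries `result` raises `TypeError`. The golden diff's `kwargs.setdefault` lets a caller-supplied value win. A self-contained reproduction of both behaviors, with no prefect dependency (names here are illustrative stand-ins):

```python
# Sketch of the duplicate-keyword failure and the setdefault fix.
class State:
    def __init__(self, message=None, result=None):
        self.message, self.result = message, result

def make_state_buggy(message=None, **kwargs):
    # mirrors the old code: 'result' passed twice when caller supplies it
    return State(result='signal-self', message=message, **kwargs)

def make_state_fixed(message=None, **kwargs):
    kwargs.setdefault('result', 'signal-self')  # caller's result wins
    return State(message=message, **kwargs)

try:
    make_state_buggy('skipping', result=5)
except TypeError as exc:
    print(exc)  # ... got multiple values for keyword argument 'result'

print(make_state_fixed('skipping', result=5).result)  # -> 5
```
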
gh_patches_debug_15255 | rasdani/github-patches | git_diff | chainer__chainer-1421 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SerialIterator's shuffle does not work under certain batch sizes
When we give `shuffle=True` to `SerialIterator`, re-shuffling after an epoch is skipped if `len(dataset)` is divisible by `batch_size`.

https://github.com/pfnet/chainer/blob/master/chainer/iterators/serial_iterator.py#L65
Variable `_order` is never re-shuffled if `rest` > 0 (i.e., `len(dataset)` is divisible by `batch_size`).
(If it is okay, I'm interested in working on this issue at the development meeting tomorrow.)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/iterators/serial_iterator.py`
Content:
```
1 from __future__ import division
2
3 import numpy
4
5 from chainer.dataset import iterator
6
7
8 class SerialIterator(iterator.Iterator):
9
10 """Dataset iterator that serially reads the examples.
11
12 This is a simple implementation of :class:`~chainer.dataset.Iterator`
13 that just visits each example in either the order of indexes or a shuffled
14 order.
15
16 To avoid unintentional performance degradation, the ``shuffle`` option is
17 set to ``True`` by default. For validation, it is better to set it to
18 ``False`` when the underlying dataset supports fast slicing. If the
19 order of examples has an important meaning and the updater depends on the
20 original order, this option should be set to ``False``.
21
22 Args:
23 dataset: Dataset to iterate.
24 batch_size (int): Number of examples within each batch.
25 repeat (bool): If ``True``, it infinitely loops over the dataset.
26 Otherwise, it stops iteration at the end of the first epoch.
27 shuffle (bool): If ``True``, the order of examples is shuffled at the
28 beginning of each epoch. Otherwise, examples are extracted in the
29 order of indexes.
30
31 """
32 def __init__(self, dataset, batch_size, repeat=True, shuffle=True):
33 self.dataset = dataset
34 self.batch_size = batch_size
35 self._repeat = repeat
36 if shuffle:
37 self._order = numpy.random.permutation(len(dataset))
38 else:
39 self._order = None
40
41 self.current_position = 0
42 self.epoch = 0
43 self.is_new_epoch = False
44
45 def __next__(self):
46 if not self._repeat and self.epoch > 0:
47 raise StopIteration
48
49 i = self.current_position
50 i_end = i + self.batch_size
51 N = len(self.dataset)
52
53 if self._order is None:
54 batch = self.dataset[i:i_end]
55 else:
56 batch = [self.dataset[index] for index in self._order[i:i_end]]
57
58 if i_end >= N:
59 if self._repeat:
60 rest = i_end - N
61 if rest > 0:
62 if self._order is None:
63 batch += list(self.dataset[:rest])
64 else:
65 numpy.random.shuffle(self._order)
66 batch += [self.dataset[index]
67 for index in self._order[:rest]]
68 self.current_position = rest
69 else:
70 self.current_position = N
71
72 self.epoch += 1
73 self.is_new_epoch = True
74 else:
75 self.is_new_epoch = False
76 self.current_position = i_end
77
78 return batch
79
80 next = __next__
81
82 @property
83 def epoch_detail(self):
84 return self.epoch + self.current_position / len(self.dataset)
85
86 def serialize(self, serializer):
87 self.current_position = serializer('current_position',
88 self.current_position)
89 self.epoch = serializer('epoch', self.epoch)
90 self.is_new_epoch = serializer('is_new_epoch', self.is_new_epoch)
91 if self._order is not None:
92 serializer('_order', self._order)
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/iterators/serial_iterator.py b/chainer/iterators/serial_iterator.py
--- a/chainer/iterators/serial_iterator.py
+++ b/chainer/iterators/serial_iterator.py
@@ -58,11 +58,12 @@
if i_end >= N:
if self._repeat:
rest = i_end - N
+ if self._order is not None:
+ numpy.random.shuffle(self._order)
if rest > 0:
if self._order is None:
batch += list(self.dataset[:rest])
else:
- numpy.random.shuffle(self._order)
batch += [self.dataset[index]
for index in self._order[:rest]]
self.current_position = rest
| {"golden_diff": "diff --git a/chainer/iterators/serial_iterator.py b/chainer/iterators/serial_iterator.py\n--- a/chainer/iterators/serial_iterator.py\n+++ b/chainer/iterators/serial_iterator.py\n@@ -58,11 +58,12 @@\n if i_end >= N:\n if self._repeat:\n rest = i_end - N\n+ if self._order is not None:\n+ numpy.random.shuffle(self._order)\n if rest > 0:\n if self._order is None:\n batch += list(self.dataset[:rest])\n else:\n- numpy.random.shuffle(self._order)\n batch += [self.dataset[index]\n for index in self._order[:rest]]\n self.current_position = rest\n", "issue": "SerialIterator's shuffle does not work under certain batch sizes\nWhen we give `shuffle=True` to `SerialIterator`, re-shuffling after an epoch is skipped if `len(dataset)` is divisible by `batch_size`.\n\n\n\nhttps://github.com/pfnet/chainer/blob/master/chainer/iterators/serial_iterator.py#L65\n\nVariable `_order` is never re-shuffled if `rest` > 0 (i.e., `len(dataset)` is divisible by `batch_size`).\n\n(If it is okay, I'm interested in working on this issue at the development meeting tomorrow.)\n\n", "before_files": [{"content": "from __future__ import division\n\nimport numpy\n\nfrom chainer.dataset import iterator\n\n\nclass SerialIterator(iterator.Iterator):\n\n \"\"\"Dataset iterator that serially reads the examples.\n\n This is a simple implementation of :class:`~chainer.dataset.Iterator`\n that just visits each example in either the order of indexes or a shuffled\n order.\n\n To avoid unintentional performance degradation, the ``shuffle`` option is\n set to ``True`` by default. For validation, it is better to set it to\n ``False`` when the underlying dataset supports fast slicing. If the\n order of examples has an important meaning and the updater depends on the\n original order, this option should be set to ``False``.\n\n Args:\n dataset: Dataset to iterate.\n batch_size (int): Number of examples within each batch.\n repeat (bool): If ``True``, it infinitely loops over the dataset.\n Otherwise, it stops iteration at the end of the first epoch.\n shuffle (bool): If ``True``, the order of examples is shuffled at the\n beginning of each epoch. 
Otherwise, examples are extracted in the\n order of indexes.\n\n \"\"\"\n def __init__(self, dataset, batch_size, repeat=True, shuffle=True):\n self.dataset = dataset\n self.batch_size = batch_size\n self._repeat = repeat\n if shuffle:\n self._order = numpy.random.permutation(len(dataset))\n else:\n self._order = None\n\n self.current_position = 0\n self.epoch = 0\n self.is_new_epoch = False\n\n def __next__(self):\n if not self._repeat and self.epoch > 0:\n raise StopIteration\n\n i = self.current_position\n i_end = i + self.batch_size\n N = len(self.dataset)\n\n if self._order is None:\n batch = self.dataset[i:i_end]\n else:\n batch = [self.dataset[index] for index in self._order[i:i_end]]\n\n if i_end >= N:\n if self._repeat:\n rest = i_end - N\n if rest > 0:\n if self._order is None:\n batch += list(self.dataset[:rest])\n else:\n numpy.random.shuffle(self._order)\n batch += [self.dataset[index]\n for index in self._order[:rest]]\n self.current_position = rest\n else:\n self.current_position = N\n\n self.epoch += 1\n self.is_new_epoch = True\n else:\n self.is_new_epoch = False\n self.current_position = i_end\n\n return batch\n\n next = __next__\n\n @property\n def epoch_detail(self):\n return self.epoch + self.current_position / len(self.dataset)\n\n def serialize(self, serializer):\n self.current_position = serializer('current_position',\n self.current_position)\n self.epoch = serializer('epoch', self.epoch)\n self.is_new_epoch = serializer('is_new_epoch', self.is_new_epoch)\n if self._order is not None:\n serializer('_order', self._order)\n", "path": "chainer/iterators/serial_iterator.py"}], "after_files": [{"content": "from __future__ import division\n\nimport numpy\n\nfrom chainer.dataset import iterator\n\n\nclass SerialIterator(iterator.Iterator):\n\n \"\"\"Dataset iterator that serially reads the examples.\n\n This is a simple implementation of :class:`~chainer.dataset.Iterator`\n that just visits each example in either the order of indexes or a shuffled\n order.\n\n To avoid unintentional performance degradation, the ``shuffle`` option is\n set to ``True`` by default. For validation, it is better to set it to\n ``False`` when the underlying dataset supports fast slicing. If the\n order of examples has an important meaning and the updater depends on the\n original order, this option should be set to ``False``.\n\n Args:\n dataset: Dataset to iterate.\n batch_size (int): Number of examples within each batch.\n repeat (bool): If ``True``, it infinitely loops over the dataset.\n Otherwise, it stops iteration at the end of the first epoch.\n shuffle (bool): If ``True``, the order of examples is shuffled at the\n beginning of each epoch. 
Otherwise, examples are extracted in the\n order of indexes.\n\n \"\"\"\n def __init__(self, dataset, batch_size, repeat=True, shuffle=True):\n self.dataset = dataset\n self.batch_size = batch_size\n self._repeat = repeat\n if shuffle:\n self._order = numpy.random.permutation(len(dataset))\n else:\n self._order = None\n\n self.current_position = 0\n self.epoch = 0\n self.is_new_epoch = False\n\n def __next__(self):\n if not self._repeat and self.epoch > 0:\n raise StopIteration\n\n i = self.current_position\n i_end = i + self.batch_size\n N = len(self.dataset)\n\n if self._order is None:\n batch = self.dataset[i:i_end]\n else:\n batch = [self.dataset[index] for index in self._order[i:i_end]]\n\n if i_end >= N:\n if self._repeat:\n rest = i_end - N\n if self._order is not None:\n numpy.random.shuffle(self._order)\n if rest > 0:\n if self._order is None:\n batch += list(self.dataset[:rest])\n else:\n batch += [self.dataset[index]\n for index in self._order[:rest]]\n self.current_position = rest\n else:\n self.current_position = N\n\n self.epoch += 1\n self.is_new_epoch = True\n else:\n self.is_new_epoch = False\n self.current_position = i_end\n\n return batch\n\n next = __next__\n\n @property\n def epoch_detail(self):\n return self.epoch + self.current_position / len(self.dataset)\n\n def serialize(self, serializer):\n self.current_position = serializer('current_position',\n self.current_position)\n self.epoch = serializer('epoch', self.epoch)\n self.is_new_epoch = serializer('is_new_epoch', self.is_new_epoch)\n if self._order is not None:\n serializer('_order', self._order)\n", "path": "chainer/iterators/serial_iterator.py"}]} | 1,281 | 163 |
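
The essence of this row's fix is that the reshuffle must happen at every epoch boundary, not only when the last batch spills over (`rest > 0`). A stripped-down sketch of the corrected boundary logic, independent of chainer (function and variable names are illustrative):

```python
# Sketch of the corrected epoch-boundary logic: shuffle whenever the
# epoch wraps, whether or not the final batch spills over.
import numpy

def next_batch(order, dataset, pos, batch_size):
    i_end = pos + batch_size
    N = len(dataset)
    batch = [dataset[idx] for idx in order[pos:i_end]]
    if i_end >= N:
        numpy.random.shuffle(order)   # fixed: unconditional at the wrap
        rest = i_end - N
        if rest > 0:                  # spill over into the new order
            batch += [dataset[idx] for idx in order[:rest]]
        return batch, rest
    return batch, i_end

order = numpy.random.permutation(6)
data = list(range(10, 16))            # len divisible by batch_size=3
pos = 0
for _ in range(4):                    # order is reshuffled at each epoch
    batch, pos = next_batch(order, data, pos, 3)
    print(batch)
```
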
gh_patches_debug_12636 | rasdani/github-patches | git_diff | Mailu__Mailu-2630 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docker container crashes if IPv6 is disabled at the system level.
If listen [::] is found somewhere in the configs, but IPv6 is disabled at the host system level and in the docker, then the process crashes, and, accordingly, the docker container also crashes.
This can be manually climbed into each container, corrected, but it is not very convenient.
docker exec mailu_front_1 sed -i '/listen \[/d' /conf/nginx.conf
docker exec mailu_front_1 sed -i '/listen \[/d' /etc/nginx/nginx.conf
docker exec mailu_front_1 sed -i '/listen \[/d' /etc/nginx/http.d/default.conf
docker restart mailu_front_1
docker restart mailu_webdav_1 && docker exec -it mailu_webdav_1 sed -i 's/hosts =.*\[::\].*/hosts = 0.0.0.0:5232/g' /radicale.conf && docker restart mailu_webdav_1
Can you add a container launch option to remove listen [::] from configs?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/start.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import os
4 import logging as log
5 import sys
6 from socrate import system
7
8 os.system("chown mailu:mailu -R /dkim")
9 os.system("find /data | grep -v /fetchmail | xargs -n1 chown mailu:mailu")
10 system.drop_privs_to('mailu')
11
12 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "INFO"))
13 system.set_env(['SECRET'])
14
15 os.system("flask mailu advertise")
16 os.system("flask db upgrade")
17
18 account = os.environ.get("INITIAL_ADMIN_ACCOUNT")
19 domain = os.environ.get("INITIAL_ADMIN_DOMAIN")
20 password = os.environ.get("INITIAL_ADMIN_PW")
21
22 if account is not None and domain is not None and password is not None:
23 mode = os.environ.get("INITIAL_ADMIN_MODE", default="ifmissing")
24 log.info("Creating initial admin account %s@%s with mode %s", account, domain, mode)
25 os.system("flask mailu admin %s %s '%s' --mode %s" % (account, domain, password, mode))
26
27 def test_DNS():
28 import dns.resolver
29 import dns.exception
30 import dns.flags
31 import dns.rdtypes
32 import dns.rdatatype
33 import dns.rdataclass
34 import time
35 # DNS stub configured to do DNSSEC enabled queries
36 resolver = dns.resolver.Resolver()
37 resolver.use_edns(0, dns.flags.DO, 1232)
38 resolver.flags = dns.flags.AD | dns.flags.RD
39 nameservers = resolver.nameservers
40 for ns in nameservers:
41 resolver.nameservers=[ns]
42 while True:
43 try:
44 result = resolver.resolve('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)
45 except Exception as e:
46 log.critical("Your DNS resolver at %s is not working (%s). Please see https://mailu.io/master/faq.html#the-admin-container-won-t-start-and-its-log-says-critical-your-dns-resolver-isn-t-doing-dnssec-validation", ns, e)
47 else:
48 if result.response.flags & dns.flags.AD:
49 break
50 log.critical("Your DNS resolver at %s isn't doing DNSSEC validation; Please see https://mailu.io/master/faq.html#the-admin-container-won-t-start-and-its-log-says-critical-your-dns-resolver-isn-t-doing-dnssec-validation.", ns)
51 time.sleep(5)
52
53 test_DNS()
54
55 start_command=" ".join([
56 "gunicorn",
57 f"--threads {str(os.cpu_count())}",
58 "-b :80",
59 "--logger-class mailu.Logger",
60 "--worker-tmp-dir /dev/shm",
61 "--access-logfile -" if (log.root.level<=log.INFO) else "",
62 "--error-logfile -",
63 "--preload",
64 "'mailu:create_app()'"])
65
66 os.system(start_command)
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/start.py b/core/admin/start.py
--- a/core/admin/start.py
+++ b/core/admin/start.py
@@ -52,15 +52,21 @@
test_DNS()
-start_command=" ".join([
- "gunicorn",
- f"--threads {str(os.cpu_count())}",
- "-b :80",
+cmdline = [
+ "gunicorn",
+ "--threads", f"{os.cpu_count()}",
+ # If SUBNET6 is defined, gunicorn must listen on IPv6 as well as IPv4
+ "-b", f"{'[::]' if os.environ.get('SUBNET6') else ''}:80",
"--logger-class mailu.Logger",
"--worker-tmp-dir /dev/shm",
- "--access-logfile -" if (log.root.level<=log.INFO) else "",
- "--error-logfile -",
- "--preload",
- "'mailu:create_app()'"])
+ "--error-logfile", "-",
+ "--preload"
+]
-os.system(start_command)
+# logging
+if log.root.level <= log.INFO:
+ cmdline.extend(["--access-logfile", "-"])
+
+cmdline.append("'mailu:create_app()'")
+
+os.system(" ".join(cmdline))
| {"golden_diff": "diff --git a/core/admin/start.py b/core/admin/start.py\n--- a/core/admin/start.py\n+++ b/core/admin/start.py\n@@ -52,15 +52,21 @@\n \n test_DNS()\n \n-start_command=\" \".join([\n- \"gunicorn\",\n- f\"--threads {str(os.cpu_count())}\",\n- \"-b :80\",\n+cmdline = [\n+\t\"gunicorn\",\n+\t\"--threads\", f\"{os.cpu_count()}\",\n+\t# If SUBNET6 is defined, gunicorn must listen on IPv6 as well as IPv4\n+\t\"-b\", f\"{'[::]' if os.environ.get('SUBNET6') else ''}:80\",\n \"--logger-class mailu.Logger\",\n \"--worker-tmp-dir /dev/shm\",\n- \"--access-logfile -\" if (log.root.level<=log.INFO) else \"\",\n- \"--error-logfile -\",\n- \"--preload\",\n- \"'mailu:create_app()'\"])\n+\t\"--error-logfile\", \"-\",\n+\t\"--preload\"\n+]\n \n-os.system(start_command)\n+# logging\n+if log.root.level <= log.INFO:\n+\tcmdline.extend([\"--access-logfile\", \"-\"])\n+\n+cmdline.append(\"'mailu:create_app()'\")\n+\n+os.system(\" \".join(cmdline))\n", "issue": "Docker container crashes if IPv6 is disabled at the system level.\nIf listen [::] is found somewhere in the configs, but IPv6 is disabled at the host system level and in the docker, then the process crashes, and, accordingly, the docker container also crashes.\r\n\r\nThis can be manually climbed into each container, corrected, but it is not very convenient.\r\n\r\ndocker exec mailu_front_1 sed -i '/listen \\[/d' /conf/nginx.conf\r\ndocker exec mailu_front_1 sed -i '/listen \\[/d' /etc/nginx/nginx.conf\r\ndocker exec mailu_front_1 sed -i '/listen \\[/d' /etc/nginx/http.d/default.conf\r\ndocker restart mailu_front_1\r\n\r\ndocker restart mailu_webdav_1 && docker exec -it mailu_webdav_1 sed -i 's/hosts =.*\\[::\\].*/hosts = 0.0.0.0:5232/g' /radicale.conf && docker restart mailu_webdav_1\r\n\r\n\r\nCan you add a container launch option to remove listen [::] from configs?\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport os\nimport logging as log\nimport sys\nfrom socrate import system\n\nos.system(\"chown mailu:mailu -R /dkim\")\nos.system(\"find /data | grep -v /fetchmail | xargs -n1 chown mailu:mailu\")\nsystem.drop_privs_to('mailu')\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"INFO\"))\nsystem.set_env(['SECRET'])\n\nos.system(\"flask mailu advertise\")\nos.system(\"flask db upgrade\")\n\naccount = os.environ.get(\"INITIAL_ADMIN_ACCOUNT\")\ndomain = os.environ.get(\"INITIAL_ADMIN_DOMAIN\")\npassword = os.environ.get(\"INITIAL_ADMIN_PW\")\n\nif account is not None and domain is not None and password is not None:\n mode = os.environ.get(\"INITIAL_ADMIN_MODE\", default=\"ifmissing\")\n log.info(\"Creating initial admin account %s@%s with mode %s\", account, domain, mode)\n os.system(\"flask mailu admin %s %s '%s' --mode %s\" % (account, domain, password, mode))\n\ndef test_DNS():\n import dns.resolver\n import dns.exception\n import dns.flags\n import dns.rdtypes\n import dns.rdatatype\n import dns.rdataclass\n import time\n # DNS stub configured to do DNSSEC enabled queries\n resolver = dns.resolver.Resolver()\n resolver.use_edns(0, dns.flags.DO, 1232)\n resolver.flags = dns.flags.AD | dns.flags.RD\n nameservers = resolver.nameservers\n for ns in nameservers:\n resolver.nameservers=[ns]\n while True:\n try:\n result = resolver.resolve('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)\n except Exception as e:\n log.critical(\"Your DNS resolver at %s is not working (%s). 
Please see https://mailu.io/master/faq.html#the-admin-container-won-t-start-and-its-log-says-critical-your-dns-resolver-isn-t-doing-dnssec-validation\", ns, e)\n else:\n if result.response.flags & dns.flags.AD:\n break\n log.critical(\"Your DNS resolver at %s isn't doing DNSSEC validation; Please see https://mailu.io/master/faq.html#the-admin-container-won-t-start-and-its-log-says-critical-your-dns-resolver-isn-t-doing-dnssec-validation.\", ns)\n time.sleep(5)\n\ntest_DNS()\n\nstart_command=\" \".join([\n \"gunicorn\",\n f\"--threads {str(os.cpu_count())}\",\n \"-b :80\",\n \"--logger-class mailu.Logger\",\n \"--worker-tmp-dir /dev/shm\",\n \"--access-logfile -\" if (log.root.level<=log.INFO) else \"\",\n \"--error-logfile -\",\n \"--preload\",\n \"'mailu:create_app()'\"])\n\nos.system(start_command)\n", "path": "core/admin/start.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport os\nimport logging as log\nimport sys\nfrom socrate import system\n\nos.system(\"chown mailu:mailu -R /dkim\")\nos.system(\"find /data | grep -v /fetchmail | xargs -n1 chown mailu:mailu\")\nsystem.drop_privs_to('mailu')\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"INFO\"))\nsystem.set_env(['SECRET'])\n\nos.system(\"flask mailu advertise\")\nos.system(\"flask db upgrade\")\n\naccount = os.environ.get(\"INITIAL_ADMIN_ACCOUNT\")\ndomain = os.environ.get(\"INITIAL_ADMIN_DOMAIN\")\npassword = os.environ.get(\"INITIAL_ADMIN_PW\")\n\nif account is not None and domain is not None and password is not None:\n mode = os.environ.get(\"INITIAL_ADMIN_MODE\", default=\"ifmissing\")\n log.info(\"Creating initial admin account %s@%s with mode %s\", account, domain, mode)\n os.system(\"flask mailu admin %s %s '%s' --mode %s\" % (account, domain, password, mode))\n\ndef test_DNS():\n import dns.resolver\n import dns.exception\n import dns.flags\n import dns.rdtypes\n import dns.rdatatype\n import dns.rdataclass\n import time\n # DNS stub configured to do DNSSEC enabled queries\n resolver = dns.resolver.Resolver()\n resolver.use_edns(0, dns.flags.DO, 1232)\n resolver.flags = dns.flags.AD | dns.flags.RD\n nameservers = resolver.nameservers\n for ns in nameservers:\n resolver.nameservers=[ns]\n while True:\n try:\n result = resolver.resolve('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)\n except Exception as e:\n log.critical(\"Your DNS resolver at %s is not working (%s). Please see https://mailu.io/master/faq.html#the-admin-container-won-t-start-and-its-log-says-critical-your-dns-resolver-isn-t-doing-dnssec-validation\", ns, e)\n else:\n if result.response.flags & dns.flags.AD:\n break\n log.critical(\"Your DNS resolver at %s isn't doing DNSSEC validation; Please see https://mailu.io/master/faq.html#the-admin-container-won-t-start-and-its-log-says-critical-your-dns-resolver-isn-t-doing-dnssec-validation.\", ns)\n time.sleep(5)\n\ntest_DNS()\n\ncmdline = [\n\t\"gunicorn\",\n\t\"--threads\", f\"{os.cpu_count()}\",\n\t# If SUBNET6 is defined, gunicorn must listen on IPv6 as well as IPv4\n\t\"-b\", f\"{'[::]' if os.environ.get('SUBNET6') else ''}:80\",\n \"--logger-class mailu.Logger\",\n \"--worker-tmp-dir /dev/shm\",\n\t\"--error-logfile\", \"-\",\n\t\"--preload\"\n]\n\n# logging\nif log.root.level <= log.INFO:\n\tcmdline.extend([\"--access-logfile\", \"-\"])\n\ncmdline.append(\"'mailu:create_app()'\")\n\nos.system(\" \".join(cmdline))\n", "path": "core/admin/start.py"}]} | 1,263 | 279 |
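
The key line of this row's fix is making the gunicorn bind address conditional on the environment, so an IPv6 listener is only requested when the deployment actually defines `SUBNET6`. The condition can be checked in isolation, as in this small sketch (the env-var name follows the patch above; everything else is illustrative):

```python
# Sketch: listen on IPv6 only when SUBNET6 is set; otherwise stay
# IPv4-only so hosts with IPv6 disabled do not fail on bind().
import os

bind = "{}:80".format("[::]" if os.environ.get("SUBNET6") else "")
cmdline = ["gunicorn", "-b", bind, "'mailu:create_app()'"]
print(" ".join(cmdline))
# SUBNET6 unset -> gunicorn -b :80 ...      (IPv4 only)
# SUBNET6 set   -> gunicorn -b [::]:80 ...  (dual-stack where supported)
```
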
gh_patches_debug_3119 | rasdani/github-patches | git_diff | Kinto__kinto-186 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow POST on buckets using the id_generator or the id provided in the data.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/views/buckets.py`
Content:
```
1 from six import text_type
2 from uuid import UUID
3
4 from pyramid.httpexceptions import (HTTPForbidden, HTTPPreconditionFailed,
5 HTTPException)
6 from pyramid.security import NO_PERMISSION_REQUIRED
7 from pyramid.view import view_config
8
9 from cliquet import resource
10 from cliquet.utils import hmac_digest, build_request, reapply_cors
11
12 from kinto.views import NameGenerator
13
14
15 def create_bucket(request, bucket_id):
16 """Create a bucket if it doesn't exists."""
17 bucket_put = (request.method.lower() == 'put' and
18 request.path.endswith('buckets/default'))
19
20 if not bucket_put:
21 subrequest = build_request(request, {
22 'method': 'PUT',
23 'path': '/buckets/%s' % bucket_id,
24 'body': {"data": {}},
25 'headers': {'If-None-Match': '*'.encode('utf-8')}
26 })
27
28 try:
29 request.invoke_subrequest(subrequest)
30 except HTTPPreconditionFailed:
31 # The bucket already exists
32 pass
33
34
35 def create_collection(request, bucket_id):
36 subpath = request.matchdict.get('subpath')
37 if subpath and subpath.startswith('collections/'):
38 collection_id = subpath.split('/')[1]
39 collection_put = (request.method.lower() == 'put' and
40 request.path.endswith(collection_id))
41 if not collection_put:
42 subrequest = build_request(request, {
43 'method': 'PUT',
44 'path': '/buckets/%s/collections/%s' % (
45 bucket_id, collection_id),
46 'body': {"data": {}},
47 'headers': {'If-None-Match': '*'.encode('utf-8')}
48 })
49 try:
50 request.invoke_subrequest(subrequest)
51 except HTTPPreconditionFailed:
52 # The collection already exists
53 pass
54
55
56 @view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)
57 @view_config(route_name='default_bucket_collection',
58 permission=NO_PERMISSION_REQUIRED)
59 def default_bucket(request):
60 if request.method.lower() == 'options':
61 path = request.path.replace('default', 'unknown')
62 subrequest = build_request(request, {
63 'method': 'OPTIONS',
64 'path': path
65 })
66 return request.invoke_subrequest(subrequest)
67
68 if getattr(request, 'prefixed_userid', None) is None:
69 raise HTTPForbidden # Pass through the forbidden_view_config
70
71 settings = request.registry.settings
72 hmac_secret = settings['cliquet.userid_hmac_secret']
73 # Build the user unguessable bucket_id UUID from its user_id
74 digest = hmac_digest(hmac_secret, request.prefixed_userid)
75 bucket_id = text_type(UUID(digest[:32]))
76 path = request.path.replace('/buckets/default', '/buckets/%s' % bucket_id)
77 querystring = request.url[(request.url.index(request.path) +
78 len(request.path)):]
79
80 # Make sure bucket exists
81 create_bucket(request, bucket_id)
82
83 # Make sure the collection exists
84 create_collection(request, bucket_id)
85
86 subrequest = build_request(request, {
87 'method': request.method,
88 'path': path + querystring,
89 'body': request.body
90 })
91
92 try:
93 response = request.invoke_subrequest(subrequest)
94 except HTTPException as error:
95 response = reapply_cors(subrequest, error)
96 return response
97
98
99 @resource.register(name='bucket',
100 collection_methods=('GET',),
101 collection_path='/buckets',
102 record_path='/buckets/{{id}}')
103 class Bucket(resource.ProtectedResource):
104 permissions = ('read', 'write', 'collection:create', 'group:create')
105
106 def __init__(self, *args, **kwargs):
107 super(Bucket, self).__init__(*args, **kwargs)
108 self.collection.id_generator = NameGenerator()
109
110 def get_parent_id(self, request):
111 # Buckets are not isolated by user, unlike Cliquet resources.
112 return ''
113
114 def delete(self):
115 result = super(Bucket, self).delete()
116
117 # Delete groups.
118 storage = self.collection.storage
119 parent_id = '/buckets/%s' % self.record_id
120 storage.delete_all(collection_id='group',
121 parent_id=parent_id,
122 with_deleted=False)
123 storage.purge_deleted(collection_id='group',
124 parent_id=parent_id)
125
126 # Delete collections.
127 deleted = storage.delete_all(collection_id='collection',
128 parent_id=parent_id,
129 with_deleted=False)
130 storage.purge_deleted(collection_id='collection',
131 parent_id=parent_id)
132
133 # Delete records.
134 id_field = self.collection.id_field
135 for collection in deleted:
136 parent_id = '/buckets/%s/collections/%s' % (self.record_id,
137 collection[id_field])
138 storage.delete_all(collection_id='record',
139 parent_id=parent_id,
140 with_deleted=False)
141 storage.purge_deleted(collection_id='record', parent_id=parent_id)
142
143 return result
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py
--- a/kinto/views/buckets.py
+++ b/kinto/views/buckets.py
@@ -97,7 +97,7 @@
@resource.register(name='bucket',
- collection_methods=('GET',),
+ collection_methods=('GET', 'POST'),
collection_path='/buckets',
record_path='/buckets/{{id}}')
class Bucket(resource.ProtectedResource):
| {"golden_diff": "diff --git a/kinto/views/buckets.py b/kinto/views/buckets.py\n--- a/kinto/views/buckets.py\n+++ b/kinto/views/buckets.py\n@@ -97,7 +97,7 @@\n \n \n @resource.register(name='bucket',\n- collection_methods=('GET',),\n+ collection_methods=('GET', 'POST'),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\n class Bucket(resource.ProtectedResource):\n", "issue": "Allow POST on buckets using the id_generator or the id provided in the data.\n\n", "before_files": [{"content": "from six import text_type\nfrom uuid import UUID\n\nfrom pyramid.httpexceptions import (HTTPForbidden, HTTPPreconditionFailed,\n HTTPException)\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request, reapply_cors\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict.get('subpath')\n if subpath and subpath.startswith('collections/'):\n collection_id = subpath.split('/')[1]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\n@view_config(route_name='default_bucket_collection',\n permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if request.method.lower() == 'options':\n path = request.path.replace('default', 'unknown')\n subrequest = build_request(request, {\n 'method': 'OPTIONS',\n 'path': path\n })\n return request.invoke_subrequest(subrequest)\n\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable bucket_id UUID from its user_id\n digest = hmac_digest(hmac_secret, request.prefixed_userid)\n bucket_id = text_type(UUID(digest[:32]))\n path = request.path.replace('/buckets/default', '/buckets/%s' % bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 'method': request.method,\n 'path': path + querystring,\n 'body': request.body\n })\n\n try:\n response = request.invoke_subrequest(subrequest)\n except HTTPException as error:\n response = reapply_cors(subrequest, error)\n return response\n\n\[email protected](name='bucket',\n collection_methods=('GET',),\n collection_path='/buckets',\n 
record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='group',\n parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}], "after_files": [{"content": "from six import text_type\nfrom uuid import UUID\n\nfrom pyramid.httpexceptions import (HTTPForbidden, HTTPPreconditionFailed,\n HTTPException)\nfrom pyramid.security import NO_PERMISSION_REQUIRED\nfrom pyramid.view import view_config\n\nfrom cliquet import resource\nfrom cliquet.utils import hmac_digest, build_request, reapply_cors\n\nfrom kinto.views import NameGenerator\n\n\ndef create_bucket(request, bucket_id):\n \"\"\"Create a bucket if it doesn't exists.\"\"\"\n bucket_put = (request.method.lower() == 'put' and\n request.path.endswith('buckets/default'))\n\n if not bucket_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s' % bucket_id,\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The bucket already exists\n pass\n\n\ndef create_collection(request, bucket_id):\n subpath = request.matchdict.get('subpath')\n if subpath and subpath.startswith('collections/'):\n collection_id = subpath.split('/')[1]\n collection_put = (request.method.lower() == 'put' and\n request.path.endswith(collection_id))\n if not collection_put:\n subrequest = build_request(request, {\n 'method': 'PUT',\n 'path': '/buckets/%s/collections/%s' % (\n bucket_id, collection_id),\n 'body': {\"data\": {}},\n 'headers': {'If-None-Match': '*'.encode('utf-8')}\n })\n try:\n request.invoke_subrequest(subrequest)\n except HTTPPreconditionFailed:\n # The collection already exists\n pass\n\n\n@view_config(route_name='default_bucket', permission=NO_PERMISSION_REQUIRED)\n@view_config(route_name='default_bucket_collection',\n permission=NO_PERMISSION_REQUIRED)\ndef default_bucket(request):\n if request.method.lower() == 'options':\n path = request.path.replace('default', 'unknown')\n subrequest = build_request(request, {\n 'method': 'OPTIONS',\n 'path': path\n })\n return request.invoke_subrequest(subrequest)\n\n if getattr(request, 'prefixed_userid', None) is None:\n raise HTTPForbidden # Pass through the forbidden_view_config\n\n settings = request.registry.settings\n hmac_secret = settings['cliquet.userid_hmac_secret']\n # Build the user unguessable 
bucket_id UUID from its user_id\n digest = hmac_digest(hmac_secret, request.prefixed_userid)\n bucket_id = text_type(UUID(digest[:32]))\n path = request.path.replace('/buckets/default', '/buckets/%s' % bucket_id)\n querystring = request.url[(request.url.index(request.path) +\n len(request.path)):]\n\n # Make sure bucket exists\n create_bucket(request, bucket_id)\n\n # Make sure the collection exists\n create_collection(request, bucket_id)\n\n subrequest = build_request(request, {\n 'method': request.method,\n 'path': path + querystring,\n 'body': request.body\n })\n\n try:\n response = request.invoke_subrequest(subrequest)\n except HTTPException as error:\n response = reapply_cors(subrequest, error)\n return response\n\n\[email protected](name='bucket',\n collection_methods=('GET', 'POST'),\n collection_path='/buckets',\n record_path='/buckets/{{id}}')\nclass Bucket(resource.ProtectedResource):\n permissions = ('read', 'write', 'collection:create', 'group:create')\n\n def __init__(self, *args, **kwargs):\n super(Bucket, self).__init__(*args, **kwargs)\n self.collection.id_generator = NameGenerator()\n\n def get_parent_id(self, request):\n # Buckets are not isolated by user, unlike Cliquet resources.\n return ''\n\n def delete(self):\n result = super(Bucket, self).delete()\n\n # Delete groups.\n storage = self.collection.storage\n parent_id = '/buckets/%s' % self.record_id\n storage.delete_all(collection_id='group',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='group',\n parent_id=parent_id)\n\n # Delete collections.\n deleted = storage.delete_all(collection_id='collection',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='collection',\n parent_id=parent_id)\n\n # Delete records.\n id_field = self.collection.id_field\n for collection in deleted:\n parent_id = '/buckets/%s/collections/%s' % (self.record_id,\n collection[id_field])\n storage.delete_all(collection_id='record',\n parent_id=parent_id,\n with_deleted=False)\n storage.purge_deleted(collection_id='record', parent_id=parent_id)\n\n return result\n", "path": "kinto/views/buckets.py"}]} | 1,651 | 96 |
gh_patches_debug_42915 | rasdani/github-patches | git_diff | TheAlgorithms__Python-1403 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`Head` and `temp` names should change
Hi,
In your [Linked List implementation](https://github.com/TheAlgorithms/Python/blob/master/data_structures/linked_list/singly_linked_list.py), I think `temp` is wrongly spelled as `tamp`. The code works, but for readability purposes all `tamp` should be replaced by `temp`.
Also, I find it strange to name the `head` with a capital `Head`. Generally, capitalization in Python is reserved for class names, not class attributes or methods. If you think the code should be more *Pythonic*, please consider changing all `Head` to `head` in the class attributes for Linked List.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `data_structures/linked_list/singly_linked_list.py`
Content:
```
1 class Node: # create a Node
2 def __init__(self, data):
3 self.data = data # given data
4 self.next = None # given next to None
5
6
7 class Linked_List:
8 def __init__(self):
9 self.Head = None # Initialize Head to None
10
11 def insert_tail(self, data):
12 if self.Head is None:
13 self.insert_head(data) # If this is first node, call insert_head
14 else:
15 temp = self.Head
16 while temp.next != None: # traverse to last node
17 temp = temp.next
18 temp.next = Node(data) # create node & link to tail
19
20 def insert_head(self, data):
21 newNod = Node(data) # create a new node
22 if self.Head != None:
23 newNod.next = self.Head # link newNode to head
24 self.Head = newNod # make NewNode as Head
25
26 def printList(self): # print every node data
27 tamp = self.Head
28 while tamp is not None:
29 print(tamp.data)
30 tamp = tamp.next
31
32 def delete_head(self): # delete from head
33 temp = self.Head
34 if self.Head != None:
35 self.Head = self.Head.next
36 temp.next = None
37 return temp
38
39 def delete_tail(self): # delete from tail
40 tamp = self.Head
41 if self.Head != None:
42 if self.Head.next is None: # if Head is the only Node in the Linked List
43 self.Head = None
44 else:
45 while tamp.next.next is not None: # find the 2nd last element
46 tamp = tamp.next
47 tamp.next, tamp = (
48 None,
49 tamp.next,
50 ) # (2nd last element).next = None and tamp = last element
51 return tamp
52
53 def isEmpty(self):
54 return self.Head is None # Return if Head is none
55
56 def reverse(self):
57 prev = None
58 current = self.Head
59
60 while current:
61 # Store the current node's next node.
62 next_node = current.next
63 # Make the current node's next point backwards
64 current.next = prev
65 # Make the previous node be the current node
66 prev = current
67 # Make the current node the next node (to progress iteration)
68 current = next_node
69 # Return prev in order to put the head at the end
70 self.Head = prev
71
72
73 def main():
74 A = Linked_List()
75 print("Inserting 1st at Head")
76 a1 = input()
77 A.insert_head(a1)
78 print("Inserting 2nd at Head")
79 a2 = input()
80 A.insert_head(a2)
81 print("\nPrint List : ")
82 A.printList()
83 print("\nInserting 1st at Tail")
84 a3 = input()
85 A.insert_tail(a3)
86 print("Inserting 2nd at Tail")
87 a4 = input()
88 A.insert_tail(a4)
89 print("\nPrint List : ")
90 A.printList()
91 print("\nDelete Head")
92 A.delete_head()
93 print("Delete Tail")
94 A.delete_tail()
95 print("\nPrint List : ")
96 A.printList()
97 print("\nReverse Linked List")
98 A.reverse()
99 print("\nPrint List : ")
100 A.printList()
101
102
103 if __name__ == "__main__":
104 main()
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/data_structures/linked_list/singly_linked_list.py b/data_structures/linked_list/singly_linked_list.py
--- a/data_structures/linked_list/singly_linked_list.py
+++ b/data_structures/linked_list/singly_linked_list.py
@@ -6,56 +6,56 @@
class Linked_List:
def __init__(self):
- self.Head = None # Initialize Head to None
+ self.head = None # Initialize head to None
def insert_tail(self, data):
- if self.Head is None:
+ if self.head is None:
self.insert_head(data) # If this is first node, call insert_head
else:
- temp = self.Head
+ temp = self.head
while temp.next != None: # traverse to last node
temp = temp.next
temp.next = Node(data) # create node & link to tail
def insert_head(self, data):
newNod = Node(data) # create a new node
- if self.Head != None:
- newNod.next = self.Head # link newNode to head
- self.Head = newNod # make NewNode as Head
+ if self.head != None:
+ newNod.next = self.head # link newNode to head
+ self.head = newNod # make NewNode as head
def printList(self): # print every node data
- tamp = self.Head
- while tamp is not None:
- print(tamp.data)
- tamp = tamp.next
+ temp = self.head
+ while temp is not None:
+ print(temp.data)
+ temp = temp.next
def delete_head(self): # delete from head
- temp = self.Head
- if self.Head != None:
- self.Head = self.Head.next
+ temp = self.head
+ if self.head != None:
+ self.head = self.head.next
temp.next = None
return temp
def delete_tail(self): # delete from tail
- tamp = self.Head
- if self.Head != None:
- if self.Head.next is None: # if Head is the only Node in the Linked List
- self.Head = None
+ temp = self.head
+ if self.head != None:
+ if self.head.next is None: # if head is the only Node in the Linked List
+ self.head = None
else:
- while tamp.next.next is not None: # find the 2nd last element
- tamp = tamp.next
- tamp.next, tamp = (
+ while temp.next.next is not None: # find the 2nd last element
+ temp = temp.next
+ temp.next, temp = (
None,
- tamp.next,
- ) # (2nd last element).next = None and tamp = last element
- return tamp
+ temp.next,
+ ) # (2nd last element).next = None and temp = last element
+ return temp
def isEmpty(self):
- return self.Head is None # Return if Head is none
+ return self.head is None # Return if head is none
def reverse(self):
prev = None
- current = self.Head
+ current = self.head
while current:
# Store the current node's next node.
@@ -67,15 +67,15 @@
# Make the current node the next node (to progress iteration)
current = next_node
# Return prev in order to put the head at the end
- self.Head = prev
+ self.head = prev
def main():
A = Linked_List()
- print("Inserting 1st at Head")
+ print("Inserting 1st at head")
a1 = input()
A.insert_head(a1)
- print("Inserting 2nd at Head")
+ print("Inserting 2nd at head")
a2 = input()
A.insert_head(a2)
print("\nPrint List : ")
@@ -88,7 +88,7 @@
A.insert_tail(a4)
print("\nPrint List : ")
A.printList()
- print("\nDelete Head")
+ print("\nDelete head")
A.delete_head()
print("Delete Tail")
A.delete_tail()
| {"golden_diff": "diff --git a/data_structures/linked_list/singly_linked_list.py b/data_structures/linked_list/singly_linked_list.py\n--- a/data_structures/linked_list/singly_linked_list.py\n+++ b/data_structures/linked_list/singly_linked_list.py\n@@ -6,56 +6,56 @@\n \n class Linked_List:\n def __init__(self):\n- self.Head = None # Initialize Head to None\n+ self.head = None # Initialize head to None\n \n def insert_tail(self, data):\n- if self.Head is None:\n+ if self.head is None:\n self.insert_head(data) # If this is first node, call insert_head\n else:\n- temp = self.Head\n+ temp = self.head\n while temp.next != None: # traverse to last node\n temp = temp.next\n temp.next = Node(data) # create node & link to tail\n \n def insert_head(self, data):\n newNod = Node(data) # create a new node\n- if self.Head != None:\n- newNod.next = self.Head # link newNode to head\n- self.Head = newNod # make NewNode as Head\n+ if self.head != None:\n+ newNod.next = self.head # link newNode to head\n+ self.head = newNod # make NewNode as head\n \n def printList(self): # print every node data\n- tamp = self.Head\n- while tamp is not None:\n- print(tamp.data)\n- tamp = tamp.next\n+ temp = self.head\n+ while temp is not None:\n+ print(temp.data)\n+ temp = temp.next\n \n def delete_head(self): # delete from head\n- temp = self.Head\n- if self.Head != None:\n- self.Head = self.Head.next\n+ temp = self.head\n+ if self.head != None:\n+ self.head = self.head.next\n temp.next = None\n return temp\n \n def delete_tail(self): # delete from tail\n- tamp = self.Head\n- if self.Head != None:\n- if self.Head.next is None: # if Head is the only Node in the Linked List\n- self.Head = None\n+ temp = self.head\n+ if self.head != None:\n+ if self.head.next is None: # if head is the only Node in the Linked List\n+ self.head = None\n else:\n- while tamp.next.next is not None: # find the 2nd last element\n- tamp = tamp.next\n- tamp.next, tamp = (\n+ while temp.next.next is not None: # find the 2nd last element\n+ temp = temp.next\n+ temp.next, temp = (\n None,\n- tamp.next,\n- ) # (2nd last element).next = None and tamp = last element\n- return tamp\n+ temp.next,\n+ ) # (2nd last element).next = None and temp = last element\n+ return temp\n \n def isEmpty(self):\n- return self.Head is None # Return if Head is none\n+ return self.head is None # Return if head is none\n \n def reverse(self):\n prev = None\n- current = self.Head\n+ current = self.head\n \n while current:\n # Store the current node's next node.\n@@ -67,15 +67,15 @@\n # Make the current node the next node (to progress iteration)\n current = next_node\n # Return prev in order to put the head at the end\n- self.Head = prev\n+ self.head = prev\n \n \n def main():\n A = Linked_List()\n- print(\"Inserting 1st at Head\")\n+ print(\"Inserting 1st at head\")\n a1 = input()\n A.insert_head(a1)\n- print(\"Inserting 2nd at Head\")\n+ print(\"Inserting 2nd at head\")\n a2 = input()\n A.insert_head(a2)\n print(\"\\nPrint List : \")\n@@ -88,7 +88,7 @@\n A.insert_tail(a4)\n print(\"\\nPrint List : \")\n A.printList()\n- print(\"\\nDelete Head\")\n+ print(\"\\nDelete head\")\n A.delete_head()\n print(\"Delete Tail\")\n A.delete_tail()\n", "issue": "`Head` and `temp` names should change\nHi,\r\n\r\nIn your [Linked List implementation](https://github.com/TheAlgorithms/Python/blob/master/data_structures/linked_list/singly_linked_list.py), I think `temp` is wrongly spelled as `tamp`. 
The code works but for readability purpose all `tamp` should be replaced by `temp`.\r\n\r\nAlso, I find it strange to name the `head` with a capital `Head`. Generally, capitalization in Python is saved for Class names, not class attributes or methods. If you think the code should be more *Pythonic*, please consider changing all `Head` to `head` in the class attributes for Linked List.\r\n\r\n\n", "before_files": [{"content": "class Node: # create a Node\n def __init__(self, data):\n self.data = data # given data\n self.next = None # given next to None\n\n\nclass Linked_List:\n def __init__(self):\n self.Head = None # Initialize Head to None\n\n def insert_tail(self, data):\n if self.Head is None:\n self.insert_head(data) # If this is first node, call insert_head\n else:\n temp = self.Head\n while temp.next != None: # traverse to last node\n temp = temp.next\n temp.next = Node(data) # create node & link to tail\n\n def insert_head(self, data):\n newNod = Node(data) # create a new node\n if self.Head != None:\n newNod.next = self.Head # link newNode to head\n self.Head = newNod # make NewNode as Head\n\n def printList(self): # print every node data\n tamp = self.Head\n while tamp is not None:\n print(tamp.data)\n tamp = tamp.next\n\n def delete_head(self): # delete from head\n temp = self.Head\n if self.Head != None:\n self.Head = self.Head.next\n temp.next = None\n return temp\n\n def delete_tail(self): # delete from tail\n tamp = self.Head\n if self.Head != None:\n if self.Head.next is None: # if Head is the only Node in the Linked List\n self.Head = None\n else:\n while tamp.next.next is not None: # find the 2nd last element\n tamp = tamp.next\n tamp.next, tamp = (\n None,\n tamp.next,\n ) # (2nd last element).next = None and tamp = last element\n return tamp\n\n def isEmpty(self):\n return self.Head is None # Return if Head is none\n\n def reverse(self):\n prev = None\n current = self.Head\n\n while current:\n # Store the current node's next node.\n next_node = current.next\n # Make the current node's next point backwards\n current.next = prev\n # Make the previous node be the current node\n prev = current\n # Make the current node the next node (to progress iteration)\n current = next_node\n # Return prev in order to put the head at the end\n self.Head = prev\n\n\ndef main():\n A = Linked_List()\n print(\"Inserting 1st at Head\")\n a1 = input()\n A.insert_head(a1)\n print(\"Inserting 2nd at Head\")\n a2 = input()\n A.insert_head(a2)\n print(\"\\nPrint List : \")\n A.printList()\n print(\"\\nInserting 1st at Tail\")\n a3 = input()\n A.insert_tail(a3)\n print(\"Inserting 2nd at Tail\")\n a4 = input()\n A.insert_tail(a4)\n print(\"\\nPrint List : \")\n A.printList()\n print(\"\\nDelete Head\")\n A.delete_head()\n print(\"Delete Tail\")\n A.delete_tail()\n print(\"\\nPrint List : \")\n A.printList()\n print(\"\\nReverse Linked List\")\n A.reverse()\n print(\"\\nPrint List : \")\n A.printList()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "data_structures/linked_list/singly_linked_list.py"}], "after_files": [{"content": "class Node: # create a Node\n def __init__(self, data):\n self.data = data # given data\n self.next = None # given next to None\n\n\nclass Linked_List:\n def __init__(self):\n self.head = None # Initialize head to None\n\n def insert_tail(self, data):\n if self.head is None:\n self.insert_head(data) # If this is first node, call insert_head\n else:\n temp = self.head\n while temp.next != None: # traverse to last node\n temp = temp.next\n temp.next = Node(data) # 
create node & link to tail\n\n def insert_head(self, data):\n newNod = Node(data) # create a new node\n if self.head != None:\n newNod.next = self.head # link newNode to head\n self.head = newNod # make NewNode as head\n\n def printList(self): # print every node data\n temp = self.head\n while temp is not None:\n print(temp.data)\n temp = temp.next\n\n def delete_head(self): # delete from head\n temp = self.head\n if self.head != None:\n self.head = self.head.next\n temp.next = None\n return temp\n\n def delete_tail(self): # delete from tail\n temp = self.head\n if self.head != None:\n if self.head.next is None: # if head is the only Node in the Linked List\n self.head = None\n else:\n while temp.next.next is not None: # find the 2nd last element\n temp = temp.next\n temp.next, temp = (\n None,\n temp.next,\n ) # (2nd last element).next = None and temp = last element\n return temp\n\n def isEmpty(self):\n return self.head is None # Return if head is none\n\n def reverse(self):\n prev = None\n current = self.head\n\n while current:\n # Store the current node's next node.\n next_node = current.next\n # Make the current node's next point backwards\n current.next = prev\n # Make the previous node be the current node\n prev = current\n # Make the current node the next node (to progress iteration)\n current = next_node\n # Return prev in order to put the head at the end\n self.head = prev\n\n\ndef main():\n A = Linked_List()\n print(\"Inserting 1st at head\")\n a1 = input()\n A.insert_head(a1)\n print(\"Inserting 2nd at head\")\n a2 = input()\n A.insert_head(a2)\n print(\"\\nPrint List : \")\n A.printList()\n print(\"\\nInserting 1st at Tail\")\n a3 = input()\n A.insert_tail(a3)\n print(\"Inserting 2nd at Tail\")\n a4 = input()\n A.insert_tail(a4)\n print(\"\\nPrint List : \")\n A.printList()\n print(\"\\nDelete head\")\n A.delete_head()\n print(\"Delete Tail\")\n A.delete_tail()\n print(\"\\nPrint List : \")\n A.printList()\n print(\"\\nReverse Linked List\")\n A.reverse()\n print(\"\\nPrint List : \")\n A.printList()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "data_structures/linked_list/singly_linked_list.py"}]} | 1,363 | 971 |
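
A short usage sketch of the renamed API, assuming the `Linked_List` class from the record is in scope; it mirrors what `main()` exercises but without the interactive `input()` calls:

```python
lst = Linked_List()
lst.insert_head(2)             # list: 2
lst.insert_head(1)             # list: 1 -> 2
lst.insert_tail(3)             # list: 1 -> 2 -> 3
lst.reverse()                  # list: 3 -> 2 -> 1
lst.printList()                # prints 3, 2, 1 on separate lines
print(lst.delete_head().data)  # 3
print(lst.delete_tail().data)  # 1
print(lst.isEmpty())           # False, the node holding 2 remains
```
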
gh_patches_debug_38558 | rasdani/github-patches | git_diff | hylang__hy-1431 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
repl shouldn't crash
```Hy
=> (defmacro bad [] `(macro-error 'x ""))
<function <lambda> at 0x000001D01D0ED7B8>
=> (bad)
Traceback (most recent call last):
File "c:\users\me\documents\github\hy\hy\cmdline.py", line 99, in runsource
ast_callback)
File "c:\users\me\documents\github\hy\hy\importer.py", line 198, in hy_eval
eval(ast_compile(_ast, "<eval_body>", "exec"), namespace)
File "<eval_body>", line 1, in <module>
hy.errors.HyMacroExpansionError: <exception str() failed>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\ME\workspace\hy36-gilch\Scripts\hy-script.py", line 11, in <module>
load_entry_point('hy', 'console_scripts', 'hy')()
File "c:\users\me\documents\github\hy\hy\cmdline.py", line 346, in hy_main
sys.exit(cmdline_handler("hy", sys.argv))
File "c:\users\me\documents\github\hy\hy\cmdline.py", line 341, in cmdline_handler
return run_repl(spy=options.spy, output_fn=options.repl_output_fn)
File "c:\users\me\documents\github\hy\hy\cmdline.py", line 236, in run_repl
os=platform.system()
File "C:\Users\ME\AppData\Local\Programs\Python\Python36\lib\code.py", line 233, in interact
more = self.push(line)
File "C:\Users\ME\AppData\Local\Programs\Python\Python36\lib\code.py", line 259, in push
more = self.runsource(source, self.filename)
File "c:\users\me\documents\github\hy\hy\cmdline.py", line 105, in runsource
print(e, file=sys.stderr)
File "c:\users\me\documents\github\hy\hy\errors.py", line 46, in __str__
line = self.expression.start_line
AttributeError: 'HySymbol' object has no attribute 'start_line'
```
The repl should report errors, but not exit.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hy/errors.py`
Content:
```
1 # -*- encoding: utf-8 -*-
2 # Copyright 2017 the authors.
3 # This file is part of Hy, which is free software licensed under the Expat
4 # license. See the LICENSE.
5
6 import traceback
7
8 from clint.textui import colored
9
10
11 class HyError(Exception):
12 """
13 Generic Hy error. All internal Exceptions will be subclassed from this
14 Exception.
15 """
16 pass
17
18
19 class HyCompileError(HyError):
20 def __init__(self, exception, traceback=None):
21 self.exception = exception
22 self.traceback = traceback
23
24 def __str__(self):
25 if isinstance(self.exception, HyTypeError):
26 return str(self.exception)
27 if self.traceback:
28 tb = "".join(traceback.format_tb(self.traceback)).strip()
29 else:
30 tb = "No traceback available. 😟"
31 return("Internal Compiler Bug 😱\n⤷ %s: %s\nCompilation traceback:\n%s"
32 % (self.exception.__class__.__name__,
33 self.exception, tb))
34
35
36 class HyTypeError(TypeError):
37 def __init__(self, expression, message):
38 super(HyTypeError, self).__init__(message)
39 self.expression = expression
40 self.message = message
41 self.source = None
42 self.filename = None
43
44 def __str__(self):
45
46 line = self.expression.start_line
47 start = self.expression.start_column
48 end = self.expression.end_column
49
50 source = []
51 if self.source is not None:
52 source = self.source.split("\n")[line-1:self.expression.end_line]
53
54 if line == self.expression.end_line:
55 length = end - start
56 else:
57 length = len(source[0]) - start
58
59 result = ""
60
61 result += ' File "%s", line %d, column %d\n\n' % (self.filename,
62 line,
63 start)
64
65 if len(source) == 1:
66 result += ' %s\n' % colored.red(source[0])
67 result += ' %s%s\n' % (' '*(start-1),
68 colored.green('^' + '-'*(length-1) + '^'))
69 if len(source) > 1:
70 result += ' %s\n' % colored.red(source[0])
71 result += ' %s%s\n' % (' '*(start-1),
72 colored.green('^' + '-'*length))
73 if len(source) > 2: # write the middle lines
74 for line in source[1:-1]:
75 result += ' %s\n' % colored.red("".join(line))
76 result += ' %s\n' % colored.green("-"*len(line))
77
78 # write the last line
79 result += ' %s\n' % colored.red("".join(source[-1]))
80 result += ' %s\n' % colored.green('-'*(end-1) + '^')
81
82 result += colored.yellow("%s: %s\n\n" %
83 (self.__class__.__name__,
84 self.message.encode('utf-8')))
85
86 return result
87
88
89 class HyMacroExpansionError(HyTypeError):
90 pass
91
92
93 class HyIOError(HyError, IOError):
94 """
95 Trivial subclass of IOError and HyError, to distinguish between
96 IOErrors raised by Hy itself as opposed to Hy programs.
97 """
98 pass
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hy/errors.py b/hy/errors.py
--- a/hy/errors.py
+++ b/hy/errors.py
@@ -43,41 +43,47 @@
def __str__(self):
- line = self.expression.start_line
- start = self.expression.start_column
- end = self.expression.end_column
-
- source = []
- if self.source is not None:
- source = self.source.split("\n")[line-1:self.expression.end_line]
-
- if line == self.expression.end_line:
- length = end - start
- else:
- length = len(source[0]) - start
-
result = ""
- result += ' File "%s", line %d, column %d\n\n' % (self.filename,
- line,
- start)
-
- if len(source) == 1:
- result += ' %s\n' % colored.red(source[0])
- result += ' %s%s\n' % (' '*(start-1),
- colored.green('^' + '-'*(length-1) + '^'))
- if len(source) > 1:
- result += ' %s\n' % colored.red(source[0])
- result += ' %s%s\n' % (' '*(start-1),
- colored.green('^' + '-'*length))
- if len(source) > 2: # write the middle lines
- for line in source[1:-1]:
- result += ' %s\n' % colored.red("".join(line))
- result += ' %s\n' % colored.green("-"*len(line))
-
- # write the last line
- result += ' %s\n' % colored.red("".join(source[-1]))
- result += ' %s\n' % colored.green('-'*(end-1) + '^')
+ if all(getattr(self.expression, x, None) is not None
+ for x in ("start_line", "start_column", "end_column")):
+
+ line = self.expression.start_line
+ start = self.expression.start_column
+ end = self.expression.end_column
+
+ source = []
+ if self.source is not None:
+ source = self.source.split("\n")[line-1:self.expression.end_line]
+
+ if line == self.expression.end_line:
+ length = end - start
+ else:
+ length = len(source[0]) - start
+
+ result += ' File "%s", line %d, column %d\n\n' % (self.filename,
+ line,
+ start)
+
+ if len(source) == 1:
+ result += ' %s\n' % colored.red(source[0])
+ result += ' %s%s\n' % (' '*(start-1),
+ colored.green('^' + '-'*(length-1) + '^'))
+ if len(source) > 1:
+ result += ' %s\n' % colored.red(source[0])
+ result += ' %s%s\n' % (' '*(start-1),
+ colored.green('^' + '-'*length))
+ if len(source) > 2: # write the middle lines
+ for line in source[1:-1]:
+ result += ' %s\n' % colored.red("".join(line))
+ result += ' %s\n' % colored.green("-"*len(line))
+
+ # write the last line
+ result += ' %s\n' % colored.red("".join(source[-1]))
+ result += ' %s\n' % colored.green('-'*(end-1) + '^')
+
+ else:
+ result += ' File "%s", unknown location\n' % self.filename
result += colored.yellow("%s: %s\n\n" %
(self.__class__.__name__,
| {"golden_diff": "diff --git a/hy/errors.py b/hy/errors.py\n--- a/hy/errors.py\n+++ b/hy/errors.py\n@@ -43,41 +43,47 @@\n \n def __str__(self):\n \n- line = self.expression.start_line\n- start = self.expression.start_column\n- end = self.expression.end_column\n-\n- source = []\n- if self.source is not None:\n- source = self.source.split(\"\\n\")[line-1:self.expression.end_line]\n-\n- if line == self.expression.end_line:\n- length = end - start\n- else:\n- length = len(source[0]) - start\n-\n result = \"\"\n \n- result += ' File \"%s\", line %d, column %d\\n\\n' % (self.filename,\n- line,\n- start)\n-\n- if len(source) == 1:\n- result += ' %s\\n' % colored.red(source[0])\n- result += ' %s%s\\n' % (' '*(start-1),\n- colored.green('^' + '-'*(length-1) + '^'))\n- if len(source) > 1:\n- result += ' %s\\n' % colored.red(source[0])\n- result += ' %s%s\\n' % (' '*(start-1),\n- colored.green('^' + '-'*length))\n- if len(source) > 2: # write the middle lines\n- for line in source[1:-1]:\n- result += ' %s\\n' % colored.red(\"\".join(line))\n- result += ' %s\\n' % colored.green(\"-\"*len(line))\n-\n- # write the last line\n- result += ' %s\\n' % colored.red(\"\".join(source[-1]))\n- result += ' %s\\n' % colored.green('-'*(end-1) + '^')\n+ if all(getattr(self.expression, x, None) is not None\n+ for x in (\"start_line\", \"start_column\", \"end_column\")):\n+\n+ line = self.expression.start_line\n+ start = self.expression.start_column\n+ end = self.expression.end_column\n+\n+ source = []\n+ if self.source is not None:\n+ source = self.source.split(\"\\n\")[line-1:self.expression.end_line]\n+\n+ if line == self.expression.end_line:\n+ length = end - start\n+ else:\n+ length = len(source[0]) - start\n+\n+ result += ' File \"%s\", line %d, column %d\\n\\n' % (self.filename,\n+ line,\n+ start)\n+\n+ if len(source) == 1:\n+ result += ' %s\\n' % colored.red(source[0])\n+ result += ' %s%s\\n' % (' '*(start-1),\n+ colored.green('^' + '-'*(length-1) + '^'))\n+ if len(source) > 1:\n+ result += ' %s\\n' % colored.red(source[0])\n+ result += ' %s%s\\n' % (' '*(start-1),\n+ colored.green('^' + '-'*length))\n+ if len(source) > 2: # write the middle lines\n+ for line in source[1:-1]:\n+ result += ' %s\\n' % colored.red(\"\".join(line))\n+ result += ' %s\\n' % colored.green(\"-\"*len(line))\n+\n+ # write the last line\n+ result += ' %s\\n' % colored.red(\"\".join(source[-1]))\n+ result += ' %s\\n' % colored.green('-'*(end-1) + '^')\n+\n+ else:\n+ result += ' File \"%s\", unknown location\\n' % self.filename\n \n result += colored.yellow(\"%s: %s\\n\\n\" %\n (self.__class__.__name__,\n", "issue": "repl shouldn't crash\n```Hy\r\n=> (defmacro bad [] `(macro-error 'x \"\"))\r\n<function <lambda> at 0x000001D01D0ED7B8>\r\n=> (bad)\r\nTraceback (most recent call last):\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\cmdline.py\", line 99, in runsource\r\n ast_callback)\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\importer.py\", line 198, in hy_eval\r\n eval(ast_compile(_ast, \"<eval_body>\", \"exec\"), namespace)\r\n File \"<eval_body>\", line 1, in <module>\r\nhy.errors.HyMacroExpansionError: <exception str() failed>\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\ME\\workspace\\hy36-gilch\\Scripts\\hy-script.py\", line 11, in <module>\r\n load_entry_point('hy', 'console_scripts', 'hy')()\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\cmdline.py\", line 346, in hy_main\r\n sys.exit(cmdline_handler(\"hy\", 
sys.argv))\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\cmdline.py\", line 341, in cmdline_handler\r\n return run_repl(spy=options.spy, output_fn=options.repl_output_fn)\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\cmdline.py\", line 236, in run_repl\r\n os=platform.system()\r\n File \"C:\\Users\\ME\\AppData\\Local\\Programs\\Python\\Python36\\lib\\code.py\", line 233, in interact\r\n more = self.push(line)\r\n File \"C:\\Users\\ME\\AppData\\Local\\Programs\\Python\\Python36\\lib\\code.py\", line 259, in push\r\n more = self.runsource(source, self.filename)\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\cmdline.py\", line 105, in runsource\r\n print(e, file=sys.stderr)\r\n File \"c:\\users\\me\\documents\\github\\hy\\hy\\errors.py\", line 46, in __str__\r\n line = self.expression.start_line\r\nAttributeError: 'HySymbol' object has no attribute 'start_line'\r\n```\r\nThe repl should report errors, but not exit.\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\n# Copyright 2017 the authors.\n# This file is part of Hy, which is free software licensed under the Expat\n# license. See the LICENSE.\n\nimport traceback\n\nfrom clint.textui import colored\n\n\nclass HyError(Exception):\n \"\"\"\n Generic Hy error. All internal Exceptions will be subclassed from this\n Exception.\n \"\"\"\n pass\n\n\nclass HyCompileError(HyError):\n def __init__(self, exception, traceback=None):\n self.exception = exception\n self.traceback = traceback\n\n def __str__(self):\n if isinstance(self.exception, HyTypeError):\n return str(self.exception)\n if self.traceback:\n tb = \"\".join(traceback.format_tb(self.traceback)).strip()\n else:\n tb = \"No traceback available. \ud83d\ude1f\"\n return(\"Internal Compiler Bug \ud83d\ude31\\n\u2937 %s: %s\\nCompilation traceback:\\n%s\"\n % (self.exception.__class__.__name__,\n self.exception, tb))\n\n\nclass HyTypeError(TypeError):\n def __init__(self, expression, message):\n super(HyTypeError, self).__init__(message)\n self.expression = expression\n self.message = message\n self.source = None\n self.filename = None\n\n def __str__(self):\n\n line = self.expression.start_line\n start = self.expression.start_column\n end = self.expression.end_column\n\n source = []\n if self.source is not None:\n source = self.source.split(\"\\n\")[line-1:self.expression.end_line]\n\n if line == self.expression.end_line:\n length = end - start\n else:\n length = len(source[0]) - start\n\n result = \"\"\n\n result += ' File \"%s\", line %d, column %d\\n\\n' % (self.filename,\n line,\n start)\n\n if len(source) == 1:\n result += ' %s\\n' % colored.red(source[0])\n result += ' %s%s\\n' % (' '*(start-1),\n colored.green('^' + '-'*(length-1) + '^'))\n if len(source) > 1:\n result += ' %s\\n' % colored.red(source[0])\n result += ' %s%s\\n' % (' '*(start-1),\n colored.green('^' + '-'*length))\n if len(source) > 2: # write the middle lines\n for line in source[1:-1]:\n result += ' %s\\n' % colored.red(\"\".join(line))\n result += ' %s\\n' % colored.green(\"-\"*len(line))\n\n # write the last line\n result += ' %s\\n' % colored.red(\"\".join(source[-1]))\n result += ' %s\\n' % colored.green('-'*(end-1) + '^')\n\n result += colored.yellow(\"%s: %s\\n\\n\" %\n (self.__class__.__name__,\n self.message.encode('utf-8')))\n\n return result\n\n\nclass HyMacroExpansionError(HyTypeError):\n pass\n\n\nclass HyIOError(HyError, IOError):\n \"\"\"\n Trivial subclass of IOError and HyError, to distinguish between\n IOErrors raised by Hy itself as opposed to Hy programs.\n \"\"\"\n 
pass\n", "path": "hy/errors.py"}], "after_files": [{"content": "# -*- encoding: utf-8 -*-\n# Copyright 2017 the authors.\n# This file is part of Hy, which is free software licensed under the Expat\n# license. See the LICENSE.\n\nimport traceback\n\nfrom clint.textui import colored\n\n\nclass HyError(Exception):\n \"\"\"\n Generic Hy error. All internal Exceptions will be subclassed from this\n Exception.\n \"\"\"\n pass\n\n\nclass HyCompileError(HyError):\n def __init__(self, exception, traceback=None):\n self.exception = exception\n self.traceback = traceback\n\n def __str__(self):\n if isinstance(self.exception, HyTypeError):\n return str(self.exception)\n if self.traceback:\n tb = \"\".join(traceback.format_tb(self.traceback)).strip()\n else:\n tb = \"No traceback available. \ud83d\ude1f\"\n return(\"Internal Compiler Bug \ud83d\ude31\\n\u2937 %s: %s\\nCompilation traceback:\\n%s\"\n % (self.exception.__class__.__name__,\n self.exception, tb))\n\n\nclass HyTypeError(TypeError):\n def __init__(self, expression, message):\n super(HyTypeError, self).__init__(message)\n self.expression = expression\n self.message = message\n self.source = None\n self.filename = None\n\n def __str__(self):\n\n result = \"\"\n\n if all(getattr(self.expression, x, None) is not None\n for x in (\"start_line\", \"start_column\", \"end_column\")):\n\n line = self.expression.start_line\n start = self.expression.start_column\n end = self.expression.end_column\n\n source = []\n if self.source is not None:\n source = self.source.split(\"\\n\")[line-1:self.expression.end_line]\n\n if line == self.expression.end_line:\n length = end - start\n else:\n length = len(source[0]) - start\n\n result += ' File \"%s\", line %d, column %d\\n\\n' % (self.filename,\n line,\n start)\n\n if len(source) == 1:\n result += ' %s\\n' % colored.red(source[0])\n result += ' %s%s\\n' % (' '*(start-1),\n colored.green('^' + '-'*(length-1) + '^'))\n if len(source) > 1:\n result += ' %s\\n' % colored.red(source[0])\n result += ' %s%s\\n' % (' '*(start-1),\n colored.green('^' + '-'*length))\n if len(source) > 2: # write the middle lines\n for line in source[1:-1]:\n result += ' %s\\n' % colored.red(\"\".join(line))\n result += ' %s\\n' % colored.green(\"-\"*len(line))\n\n # write the last line\n result += ' %s\\n' % colored.red(\"\".join(source[-1]))\n result += ' %s\\n' % colored.green('-'*(end-1) + '^')\n\n else:\n result += ' File \"%s\", unknown location\\n' % self.filename\n\n result += colored.yellow(\"%s: %s\\n\\n\" %\n (self.__class__.__name__,\n self.message.encode('utf-8')))\n\n return result\n\n\nclass HyMacroExpansionError(HyTypeError):\n pass\n\n\nclass HyIOError(HyError, IOError):\n \"\"\"\n Trivial subclass of IOError and HyError, to distinguish between\n IOErrors raised by Hy itself as opposed to Hy programs.\n \"\"\"\n pass\n", "path": "hy/errors.py"}]} | 1,724 | 867 |
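
The heart of this fix is a guard: draw the caret diagram only when the expression carries position metadata, and fall back to an "unknown location" line otherwise, so `__str__` can no longer raise `AttributeError` in the middle of reporting another error. A stripped-down sketch of that guard, using a stand-in object rather than Hy's real model classes (an assumption for brevity):

```python
class FakeExpr:
    """Stand-in for a node produced by a macro, carrying no source
    positions (all three attributes default to None)."""
    start_line = start_column = end_column = None

def location_header(expression, filename="<string>"):
    # Same all()/getattr() guard as the patched HyTypeError.__str__.
    if all(getattr(expression, attr, None) is not None
           for attr in ("start_line", "start_column", "end_column")):
        return ' File "%s", line %d, column %d\n\n' % (
            filename, expression.start_line, expression.start_column)
    return ' File "%s", unknown location\n' % filename

print(location_header(FakeExpr()))  # ->  File "<string>", unknown location
```
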
gh_patches_debug_1599 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2214 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_AZURE_80 - dotnet_framework_version with v6.0 fails
**Describe the issue**
Currently .NET 6.0 is the latest LTS version. However, CKV_AZURE_80 expects the latest version to be v5.0.
**Examples**
```
resource "azurerm_app_service" "searchApi" {
...
site_config {
dotnet_framework_version = "v6.0"
}
}
```
There should be no warning for CKV_AZURE_80 with the above configuration.
**Version (please complete the following information):**
- Checkov Version 2.0.717
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py`
Content:
```
1 from checkov.common.models.enums import CheckCategories
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3
4
5 class AppServiceDotnetFrameworkVersion(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure that 'Net Framework' version is the latest, if used as a part of the web app"
8 id = "CKV_AZURE_80"
9 supported_resources = ['azurerm_app_service']
10 categories = [CheckCategories.GENERAL_SECURITY]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self):
14 return "site_config/0/dotnet_framework_version"
15
16 def get_expected_value(self):
17 return "v5.0"
18
19
20 check = AppServiceDotnetFrameworkVersion()
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py b/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py
--- a/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py
+++ b/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py
@@ -14,7 +14,7 @@
return "site_config/0/dotnet_framework_version"
def get_expected_value(self):
- return "v5.0"
+ return "v6.0"
check = AppServiceDotnetFrameworkVersion()
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py b/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py\n--- a/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py\n+++ b/checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py\n@@ -14,7 +14,7 @@\n return \"site_config/0/dotnet_framework_version\"\n \n def get_expected_value(self):\n- return \"v5.0\"\n+ return \"v6.0\"\n \n \n check = AppServiceDotnetFrameworkVersion()\n", "issue": "CKV_AZURE_80 - dotnet_framework_version with v6.0 fails\n**Describe the issue**\r\nCurrently .NET 6.0 is the latest LTS version. However, CKV_AZURE_80 expects that latest version is v5.0.\r\n\r\n**Examples**\r\n```\r\nresource \"azurerm_app_service\" \"searchApi\" {\r\n ...\r\n site_config {\r\n dotnet_framework_version = \"v6.0\"\r\n }\r\n}\r\n```\r\nThere should be no warning for CKV_AZURE_80 with the above configuration.\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.0.717\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AppServiceDotnetFrameworkVersion(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure that 'Net Framework' version is the latest, if used as a part of the web app\"\n id = \"CKV_AZURE_80\"\n supported_resources = ['azurerm_app_service']\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"site_config/0/dotnet_framework_version\"\n\n def get_expected_value(self):\n return \"v5.0\"\n\n\ncheck = AppServiceDotnetFrameworkVersion()\n", "path": "checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AppServiceDotnetFrameworkVersion(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure that 'Net Framework' version is the latest, if used as a part of the web app\"\n id = \"CKV_AZURE_80\"\n supported_resources = ['azurerm_app_service']\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"site_config/0/dotnet_framework_version\"\n\n def get_expected_value(self):\n return \"v6.0\"\n\n\ncheck = AppServiceDotnetFrameworkVersion()\n", "path": "checkov/terraform/checks/resource/azure/AppServiceDotnetFrameworkVersion.py"}]} | 624 | 137 |
gh_patches_debug_1157 | rasdani/github-patches | git_diff | cal-itp__benefits-213 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Send X-XSS-Protection header
The X-XSS-Protection header can be used to manage certain browser's protection against reflected cross-site scripting (XSS), stopping a page from being loaded if an attack is detected. In modern browsers, the Content-Security-Policy header can provide better protection against XSS and setting X-XSS-Protection might be redundant (#203 tracks CSP implementation).
See more at https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection
We'll want the following header/value:
```
X-XSS-Protection: 1; mode=block
```
This can be done in a new Middleware and configured in [`settings.py`](https://github.com/cal-itp/benefits/blob/dev/benefits/settings.py#L45) for all requests/responses.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benefits/settings.py`
Content:
```
1 """
2 Django settings for benefits project.
3 """
4 import os
5
6 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
7 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
8
9 # SECURITY WARNING: keep the secret key used in production secret!
10 SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]
11
12 # SECURITY WARNING: don't run with debug turned on in production!
13 DEBUG = os.environ.get("DJANGO_DEBUG", "False").lower() == "true"
14
15 ADMIN = os.environ.get("DJANGO_ADMIN", "False").lower() == "true"
16
17 ALLOWED_HOSTS = []
18
19 if DEBUG:
20 ALLOWED_HOSTS.extend(["*"])
21 else:
22 hosts = os.environ["DJANGO_ALLOWED_HOSTS"].split(",")
23 ALLOWED_HOSTS.extend(hosts)
24
25 # Application definition
26
27 INSTALLED_APPS = [
28 "django.contrib.sessions",
29 "django.contrib.staticfiles",
30 "benefits.core",
31 "benefits.enrollment",
32 "benefits.eligibility",
33 ]
34
35 if ADMIN:
36 INSTALLED_APPS.extend(
37 [
38 "django.contrib.admin",
39 "django.contrib.auth",
40 "django.contrib.contenttypes",
41 "django.contrib.messages",
42 ]
43 )
44
45 MIDDLEWARE = [
46 "django.middleware.security.SecurityMiddleware",
47 "django.contrib.sessions.middleware.SessionMiddleware",
48 "django.middleware.locale.LocaleMiddleware",
49 "benefits.core.middleware.Healthcheck",
50 "django.middleware.common.CommonMiddleware",
51 "django.middleware.csrf.CsrfViewMiddleware",
52 "django.middleware.clickjacking.XFrameOptionsMiddleware",
53 "benefits.core.middleware.DebugSession",
54 "benefits.core.middleware.ChangedLanguageEvent",
55 ]
56
57 if ADMIN:
58 MIDDLEWARE.extend(
59 [
60 "django.contrib.auth.middleware.AuthenticationMiddleware",
61 "django.contrib.messages.middleware.MessageMiddleware",
62 ]
63 )
64
65 CSRF_COOKIE_AGE = None
66 CSRF_COOKIE_SAMESITE = "Strict"
67 CSRF_COOKIE_HTTPONLY = True
68
69 SESSION_COOKIE_SAMESITE = "Strict"
70 SESSION_ENGINE = "django.contrib.sessions.backends.signed_cookies"
71 SESSION_EXPIRE_AT_BROWSER_CLOSE = True
72
73 if not DEBUG:
74 CSRF_COOKIE_SECURE = True
75 CSRF_FAILURE_VIEW = "benefits.core.views.csrf_failure"
76 SESSION_COOKIE_SECURE = True
77
78 ROOT_URLCONF = "benefits.urls"
79
80 template_ctx_processors = [
81 "django.template.context_processors.request",
82 "benefits.core.context_processors.analytics",
83 ]
84
85 if DEBUG:
86 template_ctx_processors.extend(
87 [
88 "django.template.context_processors.debug",
89 "benefits.core.context_processors.debug",
90 ]
91 )
92
93 if ADMIN:
94 template_ctx_processors.extend(
95 [
96 "django.contrib.auth.context_processors.auth",
97 "django.contrib.messages.context_processors.messages",
98 ]
99 )
100
101 TEMPLATES = [
102 {
103 "BACKEND": "django.template.backends.django.DjangoTemplates",
104 "DIRS": [os.path.join(BASE_DIR, "benefits", "templates")],
105 "APP_DIRS": True,
106 "OPTIONS": {
107 "context_processors": template_ctx_processors,
108 },
109 },
110 ]
111
112 WSGI_APPLICATION = "benefits.wsgi.application"
113
114 DATABASES = {
115 "default": {
116 "ENGINE": "django.db.backends.sqlite3",
117 "NAME": os.environ.get("DJANGO_DB", "django") + ".db",
118 }
119 }
120
121 # Password validation
122
123 AUTH_PASSWORD_VALIDATORS = []
124
125 if ADMIN:
126 AUTH_PASSWORD_VALIDATORS.extend(
127 [
128 {
129 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
130 },
131 {
132 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
133 },
134 {
135 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
136 },
137 {
138 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
139 },
140 ]
141 )
142
143 # Internationalization
144
145 LANGUAGE_CODE = "en"
146
147 LANGUAGE_COOKIE_HTTPONLY = True
148 LANGUAGE_COOKIE_SAMESITE = "Strict"
149 LANGUAGE_COOKIE_SECURE = True
150
151 LANGUAGES = [("en", "English"), ("es", "Español")]
152
153 LOCALE_PATHS = [os.path.join(BASE_DIR, "benefits", "locale")]
154
155 USE_I18N = True
156 USE_L10N = True
157
158 TIME_ZONE = "UTC"
159 USE_TZ = True
160
161 # Static files (CSS, JavaScript, Images)
162
163 STATIC_URL = "/static/"
164 STATICFILES_DIRS = [os.path.join(BASE_DIR, "benefits", "static")]
165 STATIC_ROOT = os.path.join(BASE_DIR, "static")
166
167 # Logging configuration
168
169 LOG_LEVEL = os.environ.get("DJANGO_LOG_LEVEL", "DEBUG" if DEBUG else "WARNING")
170 LOGGING = {
171 "version": 1,
172 "disable_existing_loggers": False,
173 "formatters": {
174 "default": {
175 "format": "[{asctime}] {levelname} {name}:{lineno} {message}",
176 "datefmt": "%d/%b/%Y %H:%M:%S",
177 "style": "{",
178 },
179 },
180 "handlers": {
181 "default": {"class": "logging.StreamHandler", "formatter": "default"},
182 },
183 "root": {
184 "handlers": ["default"],
185 "level": LOG_LEVEL,
186 },
187 "loggers": {"django": {"handlers": ["default"], "propagate": False}},
188 }
189
190 # Analytics configuration
191
192 ANALYTICS_KEY = os.environ.get("ANALYTICS_KEY")
193
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/benefits/settings.py b/benefits/settings.py
--- a/benefits/settings.py
+++ b/benefits/settings.py
@@ -75,6 +75,8 @@
CSRF_FAILURE_VIEW = "benefits.core.views.csrf_failure"
SESSION_COOKIE_SECURE = True
+SECURE_BROWSER_XSS_FILTER = True
+
ROOT_URLCONF = "benefits.urls"
template_ctx_processors = [
| {"golden_diff": "diff --git a/benefits/settings.py b/benefits/settings.py\n--- a/benefits/settings.py\n+++ b/benefits/settings.py\n@@ -75,6 +75,8 @@\n CSRF_FAILURE_VIEW = \"benefits.core.views.csrf_failure\"\n SESSION_COOKIE_SECURE = True\n \n+SECURE_BROWSER_XSS_FILTER = True\n+\n ROOT_URLCONF = \"benefits.urls\"\n \n template_ctx_processors = [\n", "issue": "Send X-XSS-Protection header\nThe X-XSS-Protection header can be used to manage certain browser's protection against reflected cross-site scripting (XSS), stopping a page from being loaded if an attack is detected. In modern browsers, the Content-Security-Policy header can provide better protection against XSS and setting X-XSS-Protection might be redundant (#203 tracks CSP implementation).\r\n\r\nSee more at https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection\r\n\r\n\r\nWe'll want the following header/value:\r\n\r\n```\r\nX-XSS-Protection: 1; mode=block\r\n```\r\n\r\nThis can be done in a new Middleware and configured in [`settings.py`](https://github.com/cal-itp/benefits/blob/dev/benefits/settings.py#L45) for all requests/responses.\n", "before_files": [{"content": "\"\"\"\nDjango settings for benefits project.\n\"\"\"\nimport os\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.environ[\"DJANGO_SECRET_KEY\"]\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = os.environ.get(\"DJANGO_DEBUG\", \"False\").lower() == \"true\"\n\nADMIN = os.environ.get(\"DJANGO_ADMIN\", \"False\").lower() == \"true\"\n\nALLOWED_HOSTS = []\n\nif DEBUG:\n ALLOWED_HOSTS.extend([\"*\"])\nelse:\n hosts = os.environ[\"DJANGO_ALLOWED_HOSTS\"].split(\",\")\n ALLOWED_HOSTS.extend(hosts)\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.sessions\",\n \"django.contrib.staticfiles\",\n \"benefits.core\",\n \"benefits.enrollment\",\n \"benefits.eligibility\",\n]\n\nif ADMIN:\n INSTALLED_APPS.extend(\n [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.messages\",\n ]\n )\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"benefits.core.middleware.Healthcheck\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"benefits.core.middleware.DebugSession\",\n \"benefits.core.middleware.ChangedLanguageEvent\",\n]\n\nif ADMIN:\n MIDDLEWARE.extend(\n [\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n ]\n )\n\nCSRF_COOKIE_AGE = None\nCSRF_COOKIE_SAMESITE = \"Strict\"\nCSRF_COOKIE_HTTPONLY = True\n\nSESSION_COOKIE_SAMESITE = \"Strict\"\nSESSION_ENGINE = \"django.contrib.sessions.backends.signed_cookies\"\nSESSION_EXPIRE_AT_BROWSER_CLOSE = True\n\nif not DEBUG:\n CSRF_COOKIE_SECURE = True\n CSRF_FAILURE_VIEW = \"benefits.core.views.csrf_failure\"\n SESSION_COOKIE_SECURE = True\n\nROOT_URLCONF = \"benefits.urls\"\n\ntemplate_ctx_processors = [\n \"django.template.context_processors.request\",\n \"benefits.core.context_processors.analytics\",\n]\n\nif DEBUG:\n template_ctx_processors.extend(\n [\n \"django.template.context_processors.debug\",\n 
\"benefits.core.context_processors.debug\",\n ]\n )\n\nif ADMIN:\n template_ctx_processors.extend(\n [\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ]\n )\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"benefits\", \"templates\")],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": template_ctx_processors,\n },\n },\n]\n\nWSGI_APPLICATION = \"benefits.wsgi.application\"\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": os.environ.get(\"DJANGO_DB\", \"django\") + \".db\",\n }\n}\n\n# Password validation\n\nAUTH_PASSWORD_VALIDATORS = []\n\nif ADMIN:\n AUTH_PASSWORD_VALIDATORS.extend(\n [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n ]\n )\n\n# Internationalization\n\nLANGUAGE_CODE = \"en\"\n\nLANGUAGE_COOKIE_HTTPONLY = True\nLANGUAGE_COOKIE_SAMESITE = \"Strict\"\nLANGUAGE_COOKIE_SECURE = True\n\nLANGUAGES = [(\"en\", \"English\"), (\"es\", \"Espa\u00f1ol\")]\n\nLOCALE_PATHS = [os.path.join(BASE_DIR, \"benefits\", \"locale\")]\n\nUSE_I18N = True\nUSE_L10N = True\n\nTIME_ZONE = \"UTC\"\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n\nSTATIC_URL = \"/static/\"\nSTATICFILES_DIRS = [os.path.join(BASE_DIR, \"benefits\", \"static\")]\nSTATIC_ROOT = os.path.join(BASE_DIR, \"static\")\n\n# Logging configuration\n\nLOG_LEVEL = os.environ.get(\"DJANGO_LOG_LEVEL\", \"DEBUG\" if DEBUG else \"WARNING\")\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"default\": {\n \"format\": \"[{asctime}] {levelname} {name}:{lineno} {message}\",\n \"datefmt\": \"%d/%b/%Y %H:%M:%S\",\n \"style\": \"{\",\n },\n },\n \"handlers\": {\n \"default\": {\"class\": \"logging.StreamHandler\", \"formatter\": \"default\"},\n },\n \"root\": {\n \"handlers\": [\"default\"],\n \"level\": LOG_LEVEL,\n },\n \"loggers\": {\"django\": {\"handlers\": [\"default\"], \"propagate\": False}},\n}\n\n# Analytics configuration\n\nANALYTICS_KEY = os.environ.get(\"ANALYTICS_KEY\")\n", "path": "benefits/settings.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for benefits project.\n\"\"\"\nimport os\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.environ[\"DJANGO_SECRET_KEY\"]\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = os.environ.get(\"DJANGO_DEBUG\", \"False\").lower() == \"true\"\n\nADMIN = os.environ.get(\"DJANGO_ADMIN\", \"False\").lower() == \"true\"\n\nALLOWED_HOSTS = []\n\nif DEBUG:\n ALLOWED_HOSTS.extend([\"*\"])\nelse:\n hosts = os.environ[\"DJANGO_ALLOWED_HOSTS\"].split(\",\")\n ALLOWED_HOSTS.extend(hosts)\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.sessions\",\n \"django.contrib.staticfiles\",\n \"benefits.core\",\n \"benefits.enrollment\",\n \"benefits.eligibility\",\n]\n\nif ADMIN:\n INSTALLED_APPS.extend(\n [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n 
\"django.contrib.messages\",\n ]\n )\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"benefits.core.middleware.Healthcheck\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"benefits.core.middleware.DebugSession\",\n \"benefits.core.middleware.ChangedLanguageEvent\",\n]\n\nif ADMIN:\n MIDDLEWARE.extend(\n [\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n ]\n )\n\nCSRF_COOKIE_AGE = None\nCSRF_COOKIE_SAMESITE = \"Strict\"\nCSRF_COOKIE_HTTPONLY = True\n\nSESSION_COOKIE_SAMESITE = \"Strict\"\nSESSION_ENGINE = \"django.contrib.sessions.backends.signed_cookies\"\nSESSION_EXPIRE_AT_BROWSER_CLOSE = True\n\nif not DEBUG:\n CSRF_COOKIE_SECURE = True\n CSRF_FAILURE_VIEW = \"benefits.core.views.csrf_failure\"\n SESSION_COOKIE_SECURE = True\n\nSECURE_BROWSER_XSS_FILTER = True\n\nROOT_URLCONF = \"benefits.urls\"\n\ntemplate_ctx_processors = [\n \"django.template.context_processors.request\",\n \"benefits.core.context_processors.analytics\",\n]\n\nif DEBUG:\n template_ctx_processors.extend(\n [\n \"django.template.context_processors.debug\",\n \"benefits.core.context_processors.debug\",\n ]\n )\n\nif ADMIN:\n template_ctx_processors.extend(\n [\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ]\n )\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"benefits\", \"templates\")],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": template_ctx_processors,\n },\n },\n]\n\nWSGI_APPLICATION = \"benefits.wsgi.application\"\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": os.environ.get(\"DJANGO_DB\", \"django\") + \".db\",\n }\n}\n\n# Password validation\n\nAUTH_PASSWORD_VALIDATORS = []\n\nif ADMIN:\n AUTH_PASSWORD_VALIDATORS.extend(\n [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n ]\n )\n\n# Internationalization\n\nLANGUAGE_CODE = \"en\"\n\nLANGUAGE_COOKIE_HTTPONLY = True\nLANGUAGE_COOKIE_SAMESITE = \"Strict\"\nLANGUAGE_COOKIE_SECURE = True\n\nLANGUAGES = [(\"en\", \"English\"), (\"es\", \"Espa\u00f1ol\")]\n\nLOCALE_PATHS = [os.path.join(BASE_DIR, \"benefits\", \"locale\")]\n\nUSE_I18N = True\nUSE_L10N = True\n\nTIME_ZONE = \"UTC\"\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n\nSTATIC_URL = \"/static/\"\nSTATICFILES_DIRS = [os.path.join(BASE_DIR, \"benefits\", \"static\")]\nSTATIC_ROOT = os.path.join(BASE_DIR, \"static\")\n\n# Logging configuration\n\nLOG_LEVEL = os.environ.get(\"DJANGO_LOG_LEVEL\", \"DEBUG\" if DEBUG else \"WARNING\")\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"default\": {\n \"format\": \"[{asctime}] {levelname} {name}:{lineno} {message}\",\n \"datefmt\": \"%d/%b/%Y %H:%M:%S\",\n \"style\": \"{\",\n },\n },\n \"handlers\": {\n \"default\": {\"class\": \"logging.StreamHandler\", \"formatter\": 
\"default\"},\n },\n \"root\": {\n \"handlers\": [\"default\"],\n \"level\": LOG_LEVEL,\n },\n \"loggers\": {\"django\": {\"handlers\": [\"default\"], \"propagate\": False}},\n}\n\n# Analytics configuration\n\nANALYTICS_KEY = os.environ.get(\"ANALYTICS_KEY\")\n", "path": "benefits/settings.py"}]} | 2,043 | 90 |
gh_patches_debug_36974 | rasdani/github-patches | git_diff | pulp__pulpcore-2315 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clean up TaskReservedResources/task-table at migration to new-tasking-system
See https://bugzilla.redhat.com/show_bug.cgi?id=2031154 for details.
Migration that needs to be updated to purge taskreservedresource entries: 0064_add_new_style_task_columns.py
This wants to be cherrypicked into 3.14/15/16 (after which the offending table no longer exists)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/migrations/0064_add_new_style_task_columns.py`
Content:
```
1 # Generated by Django 2.2.20 on 2021-04-27 07:51
2
3 import django.contrib.postgres.fields
4 import django.contrib.postgres.fields.jsonb
5 from django.db import migrations, models
6
7
8 def copy_reserved_resources_record(apps, schema_editor):
9 Task = apps.get_model('core', 'Task')
10 for task in Task.objects.iterator():
11 task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))
12 task.save()
13
14
15 def noop(apps, schema_editor):
16 pass
17
18
19 class Migration(migrations.Migration):
20
21 dependencies = [
22 ('core', '0063_repository_retained_versions'),
23 ]
24
25 operations = [
26 migrations.AddField(
27 model_name='task',
28 name='args',
29 field=django.contrib.postgres.fields.jsonb.JSONField(null=True),
30 ),
31 migrations.AddField(
32 model_name='task',
33 name='kwargs',
34 field=django.contrib.postgres.fields.jsonb.JSONField(null=True),
35 ),
36 migrations.AddField(
37 model_name='task',
38 name='_reserved_resources_record',
39 field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), null=True, size=None),
40 ),
41 migrations.AlterField(
42 model_name='task',
43 name='_resource_job_id',
44 field=models.UUIDField(null=True),
45 ),
46 migrations.AlterField(
47 model_name='progressreport',
48 name='state',
49 field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')], default='waiting'),
50 ),
51 migrations.AlterField(
52 model_name='task',
53 name='state',
54 field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')]),
55 ),
56 migrations.AddIndex(
57 model_name='task',
58 index=models.Index(fields=['pulp_created'], name='core_task_pulp_cr_10223f_idx'),
59 ),
60 migrations.RunPython(
61 code=copy_reserved_resources_record,
62 reverse_code=noop,
63 ),
64 migrations.RemoveField(
65 model_name='taskreservedresourcerecord',
66 name='resource',
67 ),
68 migrations.RemoveField(
69 model_name='taskreservedresourcerecord',
70 name='task',
71 ),
72 migrations.DeleteModel(
73 name='ReservedResourceRecord',
74 ),
75 migrations.DeleteModel(
76 name='TaskReservedResourceRecord',
77 ),
78 migrations.RenameField(
79 model_name='task',
80 old_name='_reserved_resources_record',
81 new_name='reserved_resources_record',
82 ),
83 ]
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulpcore/app/migrations/0064_add_new_style_task_columns.py b/pulpcore/app/migrations/0064_add_new_style_task_columns.py
--- a/pulpcore/app/migrations/0064_add_new_style_task_columns.py
+++ b/pulpcore/app/migrations/0064_add_new_style_task_columns.py
@@ -4,16 +4,34 @@
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
+TASK_BATCH_SIZE = 1000
+
def copy_reserved_resources_record(apps, schema_editor):
Task = apps.get_model('core', 'Task')
- for task in Task.objects.iterator():
+
+ # Update _reserved_resource_record for all tasks, 1000 tasks at a time.
+ # When we hit 1K tasks, go to the db for the batch.
+ # Make sure to update the final batch!
+ tasks = []
+ for task in Task.objects.iterator(chunk_size=TASK_BATCH_SIZE):
task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))
- task.save()
+ tasks.append(task)
+ if len(tasks) == TASK_BATCH_SIZE:
+ Task.objects.bulk_update(tasks, ["_reserved_resources_record"])
+ tasks.clear()
+
+ # Update last set of tasks
+ if len(tasks) > 0:
+ Task.objects.bulk_update(tasks, ["_reserved_resources_record"])
+
+def purge_reservedresources(apps, schema_editor):
+ TaskReservedResource = apps.get_model('core', 'TaskReservedResource')
+ TaskReservedResource.objects.all().delete()
-def noop(apps, schema_editor):
- pass
+ ReservedResource = apps.get_model('core', 'ReservedResource')
+ ReservedResource.objects.all().delete()
class Migration(migrations.Migration):
@@ -23,6 +41,12 @@
]
operations = [
+ # Purge any ReservedResource entries - if there are any, they're orphans
+ migrations.RunPython(
+ code=purge_reservedresources,
+ reverse_code=migrations.RunPython.noop,
+ ),
+ # Update entities for the new task-system
migrations.AddField(
model_name='task',
name='args',
@@ -59,7 +83,7 @@
),
migrations.RunPython(
code=copy_reserved_resources_record,
- reverse_code=noop,
+ reverse_code=migrations.RunPython.noop,
),
migrations.RemoveField(
model_name='taskreservedresourcerecord',
@@ -80,4 +104,5 @@
old_name='_reserved_resources_record',
new_name='reserved_resources_record',
),
+
]
| {"golden_diff": "diff --git a/pulpcore/app/migrations/0064_add_new_style_task_columns.py b/pulpcore/app/migrations/0064_add_new_style_task_columns.py\n--- a/pulpcore/app/migrations/0064_add_new_style_task_columns.py\n+++ b/pulpcore/app/migrations/0064_add_new_style_task_columns.py\n@@ -4,16 +4,34 @@\n import django.contrib.postgres.fields.jsonb\n from django.db import migrations, models\n \n+TASK_BATCH_SIZE = 1000\n+\n \n def copy_reserved_resources_record(apps, schema_editor):\n Task = apps.get_model('core', 'Task')\n- for task in Task.objects.iterator():\n+\n+ # Update _reserved_resource_record for all tasks, 1000 tasks at a time.\n+ # When we hit 1K tasks, go to the db for the batch.\n+ # Make sure to update the final batch!\n+ tasks = []\n+ for task in Task.objects.iterator(chunk_size=TASK_BATCH_SIZE):\n task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))\n- task.save()\n+ tasks.append(task)\n+ if len(tasks) == TASK_BATCH_SIZE:\n+ Task.objects.bulk_update(tasks, [\"_reserved_resources_record\"])\n+ tasks.clear()\n+\n+ # Update last set of tasks\n+ if len(tasks) > 0:\n+ Task.objects.bulk_update(tasks, [\"_reserved_resources_record\"])\n+\n \n+def purge_reservedresources(apps, schema_editor):\n+ TaskReservedResource = apps.get_model('core', 'TaskReservedResource')\n+ TaskReservedResource.objects.all().delete()\n \n-def noop(apps, schema_editor):\n- pass\n+ ReservedResource = apps.get_model('core', 'ReservedResource')\n+ ReservedResource.objects.all().delete()\n \n \n class Migration(migrations.Migration):\n@@ -23,6 +41,12 @@\n ]\n \n operations = [\n+ # Purge any ReservedResource entries - if there are any, they're orphans\n+ migrations.RunPython(\n+ code=purge_reservedresources,\n+ reverse_code=migrations.RunPython.noop,\n+ ),\n+ # Update entities for the new task-system\n migrations.AddField(\n model_name='task',\n name='args',\n@@ -59,7 +83,7 @@\n ),\n migrations.RunPython(\n code=copy_reserved_resources_record,\n- reverse_code=noop,\n+ reverse_code=migrations.RunPython.noop,\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n@@ -80,4 +104,5 @@\n old_name='_reserved_resources_record',\n new_name='reserved_resources_record',\n ),\n+\n ]\n", "issue": "Clean up TaskReservedResources/task-table at migration to new-tasking-system\nSee https://bugzilla.redhat.com/show_bug.cgi?id=2031154 for details.\r\n\r\nMigration that needs to be updated to purge taskreservedresource entries: 0064_add_new_style_task_columns.py\r\n\r\nThis wants to be cherrypicked into 3.14/15/16 (after which the offending table no longer exists)\n", "before_files": [{"content": "# Generated by Django 2.2.20 on 2021-04-27 07:51\n\nimport django.contrib.postgres.fields\nimport django.contrib.postgres.fields.jsonb\nfrom django.db import migrations, models\n\n\ndef copy_reserved_resources_record(apps, schema_editor):\n Task = apps.get_model('core', 'Task')\n for task in Task.objects.iterator():\n task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))\n task.save()\n\n\ndef noop(apps, schema_editor):\n pass\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0063_repository_retained_versions'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='task',\n name='args',\n field=django.contrib.postgres.fields.jsonb.JSONField(null=True),\n ),\n migrations.AddField(\n model_name='task',\n name='kwargs',\n field=django.contrib.postgres.fields.jsonb.JSONField(null=True),\n 
),\n migrations.AddField(\n model_name='task',\n name='_reserved_resources_record',\n field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), null=True, size=None),\n ),\n migrations.AlterField(\n model_name='task',\n name='_resource_job_id',\n field=models.UUIDField(null=True),\n ),\n migrations.AlterField(\n model_name='progressreport',\n name='state',\n field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')], default='waiting'),\n ),\n migrations.AlterField(\n model_name='task',\n name='state',\n field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')]),\n ),\n migrations.AddIndex(\n model_name='task',\n index=models.Index(fields=['pulp_created'], name='core_task_pulp_cr_10223f_idx'),\n ),\n migrations.RunPython(\n code=copy_reserved_resources_record,\n reverse_code=noop,\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n name='resource',\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n name='task',\n ),\n migrations.DeleteModel(\n name='ReservedResourceRecord',\n ),\n migrations.DeleteModel(\n name='TaskReservedResourceRecord',\n ),\n migrations.RenameField(\n model_name='task',\n old_name='_reserved_resources_record',\n new_name='reserved_resources_record',\n ),\n ]\n", "path": "pulpcore/app/migrations/0064_add_new_style_task_columns.py"}], "after_files": [{"content": "# Generated by Django 2.2.20 on 2021-04-27 07:51\n\nimport django.contrib.postgres.fields\nimport django.contrib.postgres.fields.jsonb\nfrom django.db import migrations, models\n\nTASK_BATCH_SIZE = 1000\n\n\ndef copy_reserved_resources_record(apps, schema_editor):\n Task = apps.get_model('core', 'Task')\n\n # Update _reserved_resource_record for all tasks, 1000 tasks at a time.\n # When we hit 1K tasks, go to the db for the batch.\n # Make sure to update the final batch!\n tasks = []\n for task in Task.objects.iterator(chunk_size=TASK_BATCH_SIZE):\n task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))\n tasks.append(task)\n if len(tasks) == TASK_BATCH_SIZE:\n Task.objects.bulk_update(tasks, [\"_reserved_resources_record\"])\n tasks.clear()\n\n # Update last set of tasks\n if len(tasks) > 0:\n Task.objects.bulk_update(tasks, [\"_reserved_resources_record\"])\n\n\ndef purge_reservedresources(apps, schema_editor):\n TaskReservedResource = apps.get_model('core', 'TaskReservedResource')\n TaskReservedResource.objects.all().delete()\n\n ReservedResource = apps.get_model('core', 'ReservedResource')\n ReservedResource.objects.all().delete()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0063_repository_retained_versions'),\n ]\n\n operations = [\n # Purge any ReservedResource entries - if there are any, they're orphans\n migrations.RunPython(\n code=purge_reservedresources,\n reverse_code=migrations.RunPython.noop,\n ),\n # Update entities for the new task-system\n migrations.AddField(\n model_name='task',\n name='args',\n field=django.contrib.postgres.fields.jsonb.JSONField(null=True),\n ),\n migrations.AddField(\n model_name='task',\n name='kwargs',\n field=django.contrib.postgres.fields.jsonb.JSONField(null=True),\n ),\n migrations.AddField(\n 
model_name='task',\n name='_reserved_resources_record',\n field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), null=True, size=None),\n ),\n migrations.AlterField(\n model_name='task',\n name='_resource_job_id',\n field=models.UUIDField(null=True),\n ),\n migrations.AlterField(\n model_name='progressreport',\n name='state',\n field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')], default='waiting'),\n ),\n migrations.AlterField(\n model_name='task',\n name='state',\n field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')]),\n ),\n migrations.AddIndex(\n model_name='task',\n index=models.Index(fields=['pulp_created'], name='core_task_pulp_cr_10223f_idx'),\n ),\n migrations.RunPython(\n code=copy_reserved_resources_record,\n reverse_code=migrations.RunPython.noop,\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n name='resource',\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n name='task',\n ),\n migrations.DeleteModel(\n name='ReservedResourceRecord',\n ),\n migrations.DeleteModel(\n name='TaskReservedResourceRecord',\n ),\n migrations.RenameField(\n model_name='task',\n old_name='_reserved_resources_record',\n new_name='reserved_resources_record',\n ),\n\n ]\n", "path": "pulpcore/app/migrations/0064_add_new_style_task_columns.py"}]} | 1,140 | 600 |
gh_patches_debug_20603 | rasdani/github-patches | git_diff | DataBiosphere__toil-1324 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
setuptools 8.0.0 or later is now required for PEP 440 version support
python 2.7.9
setuptools 5.5.1
```
Traceback (most recent call last):
File "version_template.py", line 137, in <module>
_main()
File "version_template.py", line 133, in _main
sys.stdout.write(expand_(*sys.argv[1:]))
File "version_template.py", line 128, in expand_
return resolve(name)
File "version_template.py", line 122, in resolve
v = v()
File "version_template.py", line 61, in distVersion
if build_number is not None and parse_version(baseVersion).is_prerelease:
AttributeError: 'tuple' object has no attribute 'is_prerelease'
```
setuptools 8.0.0 or later is now required for PEP 440 version support
python 2.7.9
setuptools 5.5.1
```
Traceback (most recent call last):
File "version_template.py", line 137, in <module>
_main()
File "version_template.py", line 133, in _main
sys.stdout.write(expand_(*sys.argv[1:]))
File "version_template.py", line 128, in expand_
return resolve(name)
File "version_template.py", line 122, in resolve
v = v()
File "version_template.py", line 61, in distVersion
if build_number is not None and parse_version(baseVersion).is_prerelease:
AttributeError: 'tuple' object has no attribute 'is_prerelease'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `version_template.py`
Content:
```
1 # Copyright (C) 2015-2016 Regents of the University of California
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """This script is a template for src/toil/version.py. Running it without arguments echoes all
16 globals, i.e. module attributes. Constant assignments will be echoed verbatim while callables
17 will be invoked and their result echoed as an assignment using the function name as the left-hand
18 side and the return value of the function as right-hand side. To prevent a module attribute from
19 being echoed, start or end the attribute name with an underscore. To print the value of a single
20 symbol, pass the name of that attribute to the script as a command line argument. You can also
21 import the expand_ function and invoke it directly with either no or exactly one argument."""
22
23 # Note to maintainers:
24 #
25 # - don't import at module level unless you intend for the import to be included in the output
26 # - only import from the Python standard run-time library (you can't have any dependencies)
27
28 baseVersion = '3.5.0a1'
29
30 cgcloudVersion = '1.6.0a1.dev378'
31
32
33 def version():
34 """
35 A version identifier that includes the full-legth commit SHA1 and an optional suffix to
36 indicate that the working copy is dirty.
37 """
38 return _version()
39
40
41 def shortVersion():
42 """
43 A version identifier that includes the abbreviated commit SHA1 and an optional suffix to
44 indicate that the working copy is dirty.
45 """
46 return _version(shorten=True)
47
48
49 def _version(shorten=False):
50 return '-'.join(filter(None, [distVersion(),
51 currentCommit()[:7 if shorten else None],
52 ('dirty' if dirty() else None)]))
53
54
55 def distVersion():
56 """
57 The distribution version identifying a published release on PyPI.
58 """
59 from pkg_resources import parse_version
60 build_number = buildNumber()
61 if build_number is not None and parse_version(baseVersion).is_prerelease:
62 return baseVersion + '.dev' + build_number
63 else:
64 return baseVersion
65
66
67 def dockerTag():
68 """
69 The primary tag of the Docker image for the appliance. This uniquely identifies the appliance
70 image.
71 """
72 return version()
73
74
75 def dockerShortTag():
76 """
77 A secondary, shortened form of :func:`dockerTag` with which to tag the appliance image for
78 convenience.
79 """
80 return shortVersion()
81
82
83 def dockerMinimalTag():
84 """
85 A minimal tag with which to tag the appliance image for convenience. Does not include
86 information about the git commit or working copy dirtyness.
87 """
88 return distVersion()
89
90
91 dockerRegistry = 'quay.io/ucsc_cgl'
92
93 dockerName = 'toil'
94
95
96 def buildNumber():
97 """
98 The Jenkins build number, if defined, else None.
99 """
100 import os
101 return os.getenv('BUILD_NUMBER')
102
103
104 def currentCommit():
105 from subprocess import check_output
106 return check_output('git log --pretty=oneline -n 1 -- $(pwd)', shell=True).split()[0]
107
108
109 def dirty():
110 from subprocess import call
111 return 0 != call('(git diff --exit-code '
112 '&& git diff --cached --exit-code) > /dev/null', shell=True)
113
114
115 def expand_(name=None):
116 variables = {k: v for k, v in globals().iteritems()
117 if not k.startswith('_') and not k.endswith('_')}
118
119 def resolve(k):
120 v = variables[k]
121 if callable(v):
122 v = v()
123 return v
124
125 if name is None:
126 return ''.join("%s = %s\n" % (k, repr(resolve(k))) for k, v in variables.iteritems())
127 else:
128 return resolve(name)
129
130
131 def _main():
132 import sys
133 sys.stdout.write(expand_(*sys.argv[1:]))
134
135
136 if __name__ == '__main__':
137 _main()
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/version_template.py b/version_template.py
--- a/version_template.py
+++ b/version_template.py
@@ -22,7 +22,7 @@
# Note to maintainers:
#
-# - don't import at module level unless you intend for the import to be included in the output
+# - don't import at module level unless you want the imported value to be included in the output
# - only import from the Python standard run-time library (you can't have any dependencies)
baseVersion = '3.5.0a1'
@@ -58,7 +58,12 @@
"""
from pkg_resources import parse_version
build_number = buildNumber()
- if build_number is not None and parse_version(baseVersion).is_prerelease:
+ parsedBaseVersion = parse_version(baseVersion)
+ if isinstance(parsedBaseVersion, tuple):
+ raise RuntimeError("Setuptools version 8.0 or newer required. Update by running "
+ "'pip install setuptools --upgrade'")
+
+ if build_number is not None and parsedBaseVersion.is_prerelease:
return baseVersion + '.dev' + build_number
else:
return baseVersion
| {"golden_diff": "diff --git a/version_template.py b/version_template.py\n--- a/version_template.py\n+++ b/version_template.py\n@@ -22,7 +22,7 @@\n \n # Note to maintainers:\n #\n-# - don't import at module level unless you intend for the import to be included in the output\n+# - don't import at module level unless you want the imported value to be included in the output\n # - only import from the Python standard run-time library (you can't have any dependencies)\n \n baseVersion = '3.5.0a1'\n@@ -58,7 +58,12 @@\n \"\"\"\n from pkg_resources import parse_version\n build_number = buildNumber()\n- if build_number is not None and parse_version(baseVersion).is_prerelease:\n+ parsedBaseVersion = parse_version(baseVersion)\n+ if isinstance(parsedBaseVersion, tuple):\n+ raise RuntimeError(\"Setuptools version 8.0 or newer required. Update by running \"\n+ \"'pip install setuptools --upgrade'\")\n+\n+ if build_number is not None and parsedBaseVersion.is_prerelease:\n return baseVersion + '.dev' + build_number\n else:\n return baseVersion\n", "issue": "setuptools 8.0.0 or later is now required for PEP 440 version support\npython 2.7.9\r\nsetuptools 5.5.1\r\n```\r\nTraceback (most recent call last):\r\n File \"version_template.py\", line 137, in <module>\r\n _main()\r\n File \"version_template.py\", line 133, in _main\r\n sys.stdout.write(expand_(*sys.argv[1:]))\r\n File \"version_template.py\", line 128, in expand_\r\n return resolve(name)\r\n File \"version_template.py\", line 122, in resolve\r\n v = v()\r\n File \"version_template.py\", line 61, in distVersion\r\n if build_number is not None and parse_version(baseVersion).is_prerelease:\r\nAttributeError: 'tuple' object has no attribute 'is_prerelease'\r\n```\r\n\nsetuptools 8.0.0 or later is now required for PEP 440 version support\npython 2.7.9\r\nsetuptools 5.5.1\r\n```\r\nTraceback (most recent call last):\r\n File \"version_template.py\", line 137, in <module>\r\n _main()\r\n File \"version_template.py\", line 133, in _main\r\n sys.stdout.write(expand_(*sys.argv[1:]))\r\n File \"version_template.py\", line 128, in expand_\r\n return resolve(name)\r\n File \"version_template.py\", line 122, in resolve\r\n v = v()\r\n File \"version_template.py\", line 61, in distVersion\r\n if build_number is not None and parse_version(baseVersion).is_prerelease:\r\nAttributeError: 'tuple' object has no attribute 'is_prerelease'\r\n```\r\n\n", "before_files": [{"content": "# Copyright (C) 2015-2016 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"This script is a template for src/toil/version.py. Running it without arguments echoes all\nglobals, i.e. module attributes. Constant assignments will be echoed verbatim while callables\nwill be invoked and their result echoed as an assignment using the function name as the left-hand\nside and the return value of the function as right-hand side. To prevent a module attribute from\nbeing echoed, start or end the attribute name with an underscore. 
To print the value of a single\nsymbol, pass the name of that attribute to the script as a command line argument. You can also\nimport the expand_ function and invoke it directly with either no or exactly one argument.\"\"\"\n\n# Note to maintainers:\n#\n# - don't import at module level unless you intend for the import to be included in the output\n# - only import from the Python standard run-time library (you can't have any dependencies)\n\nbaseVersion = '3.5.0a1'\n\ncgcloudVersion = '1.6.0a1.dev378'\n\n\ndef version():\n \"\"\"\n A version identifier that includes the full-legth commit SHA1 and an optional suffix to\n indicate that the working copy is dirty.\n \"\"\"\n return _version()\n\n\ndef shortVersion():\n \"\"\"\n A version identifier that includes the abbreviated commit SHA1 and an optional suffix to\n indicate that the working copy is dirty.\n \"\"\"\n return _version(shorten=True)\n\n\ndef _version(shorten=False):\n return '-'.join(filter(None, [distVersion(),\n currentCommit()[:7 if shorten else None],\n ('dirty' if dirty() else None)]))\n\n\ndef distVersion():\n \"\"\"\n The distribution version identifying a published release on PyPI.\n \"\"\"\n from pkg_resources import parse_version\n build_number = buildNumber()\n if build_number is not None and parse_version(baseVersion).is_prerelease:\n return baseVersion + '.dev' + build_number\n else:\n return baseVersion\n\n\ndef dockerTag():\n \"\"\"\n The primary tag of the Docker image for the appliance. This uniquely identifies the appliance\n image.\n \"\"\"\n return version()\n\n\ndef dockerShortTag():\n \"\"\"\n A secondary, shortened form of :func:`dockerTag` with which to tag the appliance image for\n convenience.\n \"\"\"\n return shortVersion()\n\n\ndef dockerMinimalTag():\n \"\"\"\n A minimal tag with which to tag the appliance image for convenience. 
Does not include\n information about the git commit or working copy dirtyness.\n \"\"\"\n return distVersion()\n\n\ndockerRegistry = 'quay.io/ucsc_cgl'\n\ndockerName = 'toil'\n\n\ndef buildNumber():\n \"\"\"\n The Jenkins build number, if defined, else None.\n \"\"\"\n import os\n return os.getenv('BUILD_NUMBER')\n\n\ndef currentCommit():\n from subprocess import check_output\n return check_output('git log --pretty=oneline -n 1 -- $(pwd)', shell=True).split()[0]\n\n\ndef dirty():\n from subprocess import call\n return 0 != call('(git diff --exit-code '\n '&& git diff --cached --exit-code) > /dev/null', shell=True)\n\n\ndef expand_(name=None):\n variables = {k: v for k, v in globals().iteritems()\n if not k.startswith('_') and not k.endswith('_')}\n\n def resolve(k):\n v = variables[k]\n if callable(v):\n v = v()\n return v\n\n if name is None:\n return ''.join(\"%s = %s\\n\" % (k, repr(resolve(k))) for k, v in variables.iteritems())\n else:\n return resolve(name)\n\n\ndef _main():\n import sys\n sys.stdout.write(expand_(*sys.argv[1:]))\n\n\nif __name__ == '__main__':\n _main()\n", "path": "version_template.py"}], "after_files": [{"content": "# Copyright (C) 2015-2016 Regents of the University of California\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"This script is a template for src/toil/version.py. Running it without arguments echoes all\nglobals, i.e. module attributes. Constant assignments will be echoed verbatim while callables\nwill be invoked and their result echoed as an assignment using the function name as the left-hand\nside and the return value of the function as right-hand side. To prevent a module attribute from\nbeing echoed, start or end the attribute name with an underscore. To print the value of a single\nsymbol, pass the name of that attribute to the script as a command line argument. 
You can also\nimport the expand_ function and invoke it directly with either no or exactly one argument.\"\"\"\n\n# Note to maintainers:\n#\n# - don't import at module level unless you want the imported value to be included in the output\n# - only import from the Python standard run-time library (you can't have any dependencies)\n\nbaseVersion = '3.5.0a1'\n\ncgcloudVersion = '1.6.0a1.dev378'\n\n\ndef version():\n \"\"\"\n A version identifier that includes the full-legth commit SHA1 and an optional suffix to\n indicate that the working copy is dirty.\n \"\"\"\n return _version()\n\n\ndef shortVersion():\n \"\"\"\n A version identifier that includes the abbreviated commit SHA1 and an optional suffix to\n indicate that the working copy is dirty.\n \"\"\"\n return _version(shorten=True)\n\n\ndef _version(shorten=False):\n return '-'.join(filter(None, [distVersion(),\n currentCommit()[:7 if shorten else None],\n ('dirty' if dirty() else None)]))\n\n\ndef distVersion():\n \"\"\"\n The distribution version identifying a published release on PyPI.\n \"\"\"\n from pkg_resources import parse_version\n build_number = buildNumber()\n parsedBaseVersion = parse_version(baseVersion)\n if isinstance(parsedBaseVersion, tuple):\n raise RuntimeError(\"Setuptools version 8.0 or newer required. Update by running \"\n \"'pip install setuptools --upgrade'\")\n\n if build_number is not None and parsedBaseVersion.is_prerelease:\n return baseVersion + '.dev' + build_number\n else:\n return baseVersion\n\n\ndef dockerTag():\n \"\"\"\n The primary tag of the Docker image for the appliance. This uniquely identifies the appliance\n image.\n \"\"\"\n return version()\n\n\ndef dockerShortTag():\n \"\"\"\n A secondary, shortened form of :func:`dockerTag` with which to tag the appliance image for\n convenience.\n \"\"\"\n return shortVersion()\n\n\ndef dockerMinimalTag():\n \"\"\"\n A minimal tag with which to tag the appliance image for convenience. Does not include\n information about the git commit or working copy dirtyness.\n \"\"\"\n return distVersion()\n\n\ndockerRegistry = 'quay.io/ucsc_cgl'\n\ndockerName = 'toil'\n\n\ndef buildNumber():\n \"\"\"\n The Jenkins build number, if defined, else None.\n \"\"\"\n import os\n return os.getenv('BUILD_NUMBER')\n\n\ndef currentCommit():\n from subprocess import check_output\n return check_output('git log --pretty=oneline -n 1 -- $(pwd)', shell=True).split()[0]\n\n\ndef dirty():\n from subprocess import call\n return 0 != call('(git diff --exit-code '\n '&& git diff --cached --exit-code) > /dev/null', shell=True)\n\n\ndef expand_(name=None):\n variables = {k: v for k, v in globals().iteritems()\n if not k.startswith('_') and not k.endswith('_')}\n\n def resolve(k):\n v = variables[k]\n if callable(v):\n v = v()\n return v\n\n if name is None:\n return ''.join(\"%s = %s\\n\" % (k, repr(resolve(k))) for k, v in variables.iteritems())\n else:\n return resolve(name)\n\n\ndef _main():\n import sys\n sys.stdout.write(expand_(*sys.argv[1:]))\n\n\nif __name__ == '__main__':\n _main()\n", "path": "version_template.py"}]} | 1,928 | 259 |
gh_patches_debug_35139 | rasdani/github-patches | git_diff | spotify__luigi-1744 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
HdfsTarget commands fail when many targets are instantiated
I've recently added an existence check to a large mapreduce task for which some input files may be missing. With a large enough set of inputs, it will fail every time. I've simplified it to the following code:
``` python
from luigi.contrib.hdfs import HdfsTarget
many_targets = [HdfsTarget('/') for _ in range(2000)]
all(target.exists() for target in many_targets)
```
This will break if I use any past 1000 or so. Here the client uses snakebite. For a more direct triggering, we can also do
``` python
from snakebite.client import AutoConfigClient
clients = [AutoConfigClient() for _ in range(10000)]
all(client.test('/', exists=True) for client in clients)
```
In either case, the bug goes away if I use a generator expression rather than a list comprehension. The problem is that when I'm dealing with objects coming out of luigi calls like input_hadoop, it's too late for me to decide between lists and iterators. I can code around this by instantiating all of my HdfsTargets with the same client, but I'm not sure this is safe. It could also be fixed in luigi if we had get_autoconfig_client return the same object each time. Is there any reason this wouldn't work?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `luigi/contrib/hdfs/clients.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2012-2015 Spotify AB
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 #
17
18 """
19 The implementations of the hdfs clients. The hadoop cli client and the
20 snakebite client.
21 """
22
23
24 from luigi.contrib.hdfs import config as hdfs_config
25 from luigi.contrib.hdfs import snakebite_client as hdfs_snakebite_client
26 from luigi.contrib.hdfs import webhdfs_client as hdfs_webhdfs_client
27 from luigi.contrib.hdfs import hadoopcli_clients as hdfs_hadoopcli_clients
28 import luigi.contrib.target
29 import logging
30
31 logger = logging.getLogger('luigi-interface')
32
33
34 def get_autoconfig_client():
35 """
36 Creates the client as specified in the `luigi.cfg` configuration.
37 """
38 configured_client = hdfs_config.get_configured_hdfs_client()
39 if configured_client == "webhdfs":
40 return hdfs_webhdfs_client.WebHdfsClient()
41 if configured_client == "snakebite":
42 return hdfs_snakebite_client.SnakebiteHdfsClient()
43 if configured_client == "snakebite_with_hadoopcli_fallback":
44 return luigi.contrib.target.CascadingClient([hdfs_snakebite_client.SnakebiteHdfsClient(),
45 hdfs_hadoopcli_clients.create_hadoopcli_client()])
46 if configured_client == "hadoopcli":
47 return hdfs_hadoopcli_clients.create_hadoopcli_client()
48 raise Exception("Unknown hdfs client " + configured_client)
49
50
51 def _with_ac(method_name):
52 def result(*args, **kwargs):
53 return getattr(get_autoconfig_client(), method_name)(*args, **kwargs)
54 return result
55
56 exists = _with_ac('exists')
57 rename = _with_ac('rename')
58 remove = _with_ac('remove')
59 mkdir = _with_ac('mkdir')
60 listdir = _with_ac('listdir')
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/luigi/contrib/hdfs/clients.py b/luigi/contrib/hdfs/clients.py
--- a/luigi/contrib/hdfs/clients.py
+++ b/luigi/contrib/hdfs/clients.py
@@ -19,33 +19,42 @@
The implementations of the hdfs clients. The hadoop cli client and the
snakebite client.
"""
-
+import logging
+import threading
from luigi.contrib.hdfs import config as hdfs_config
from luigi.contrib.hdfs import snakebite_client as hdfs_snakebite_client
from luigi.contrib.hdfs import webhdfs_client as hdfs_webhdfs_client
from luigi.contrib.hdfs import hadoopcli_clients as hdfs_hadoopcli_clients
import luigi.contrib.target
-import logging
logger = logging.getLogger('luigi-interface')
+_AUTOCONFIG_CLIENT = threading.local()
+
-def get_autoconfig_client():
+def get_autoconfig_client(client_cache=_AUTOCONFIG_CLIENT):
"""
Creates the client as specified in the `luigi.cfg` configuration.
"""
- configured_client = hdfs_config.get_configured_hdfs_client()
- if configured_client == "webhdfs":
- return hdfs_webhdfs_client.WebHdfsClient()
- if configured_client == "snakebite":
- return hdfs_snakebite_client.SnakebiteHdfsClient()
- if configured_client == "snakebite_with_hadoopcli_fallback":
- return luigi.contrib.target.CascadingClient([hdfs_snakebite_client.SnakebiteHdfsClient(),
- hdfs_hadoopcli_clients.create_hadoopcli_client()])
- if configured_client == "hadoopcli":
- return hdfs_hadoopcli_clients.create_hadoopcli_client()
- raise Exception("Unknown hdfs client " + configured_client)
+ try:
+ return client_cache.client
+ except AttributeError:
+ configured_client = hdfs_config.get_configured_hdfs_client()
+ if configured_client == "webhdfs":
+ client_cache.client = hdfs_webhdfs_client.WebHdfsClient()
+ elif configured_client == "snakebite":
+ client_cache.client = hdfs_snakebite_client.SnakebiteHdfsClient()
+ elif configured_client == "snakebite_with_hadoopcli_fallback":
+ client_cache.client = luigi.contrib.target.CascadingClient([
+ hdfs_snakebite_client.SnakebiteHdfsClient(),
+ hdfs_hadoopcli_clients.create_hadoopcli_client(),
+ ])
+ elif configured_client == "hadoopcli":
+ client_cache.client = hdfs_hadoopcli_clients.create_hadoopcli_client()
+ else:
+ raise Exception("Unknown hdfs client " + configured_client)
+ return client_cache.client
def _with_ac(method_name):
| {"golden_diff": "diff --git a/luigi/contrib/hdfs/clients.py b/luigi/contrib/hdfs/clients.py\n--- a/luigi/contrib/hdfs/clients.py\n+++ b/luigi/contrib/hdfs/clients.py\n@@ -19,33 +19,42 @@\n The implementations of the hdfs clients. The hadoop cli client and the\n snakebite client.\n \"\"\"\n-\n+import logging\n+import threading\n \n from luigi.contrib.hdfs import config as hdfs_config\n from luigi.contrib.hdfs import snakebite_client as hdfs_snakebite_client\n from luigi.contrib.hdfs import webhdfs_client as hdfs_webhdfs_client\n from luigi.contrib.hdfs import hadoopcli_clients as hdfs_hadoopcli_clients\n import luigi.contrib.target\n-import logging\n \n logger = logging.getLogger('luigi-interface')\n \n+_AUTOCONFIG_CLIENT = threading.local()\n+\n \n-def get_autoconfig_client():\n+def get_autoconfig_client(client_cache=_AUTOCONFIG_CLIENT):\n \"\"\"\n Creates the client as specified in the `luigi.cfg` configuration.\n \"\"\"\n- configured_client = hdfs_config.get_configured_hdfs_client()\n- if configured_client == \"webhdfs\":\n- return hdfs_webhdfs_client.WebHdfsClient()\n- if configured_client == \"snakebite\":\n- return hdfs_snakebite_client.SnakebiteHdfsClient()\n- if configured_client == \"snakebite_with_hadoopcli_fallback\":\n- return luigi.contrib.target.CascadingClient([hdfs_snakebite_client.SnakebiteHdfsClient(),\n- hdfs_hadoopcli_clients.create_hadoopcli_client()])\n- if configured_client == \"hadoopcli\":\n- return hdfs_hadoopcli_clients.create_hadoopcli_client()\n- raise Exception(\"Unknown hdfs client \" + configured_client)\n+ try:\n+ return client_cache.client\n+ except AttributeError:\n+ configured_client = hdfs_config.get_configured_hdfs_client()\n+ if configured_client == \"webhdfs\":\n+ client_cache.client = hdfs_webhdfs_client.WebHdfsClient()\n+ elif configured_client == \"snakebite\":\n+ client_cache.client = hdfs_snakebite_client.SnakebiteHdfsClient()\n+ elif configured_client == \"snakebite_with_hadoopcli_fallback\":\n+ client_cache.client = luigi.contrib.target.CascadingClient([\n+ hdfs_snakebite_client.SnakebiteHdfsClient(),\n+ hdfs_hadoopcli_clients.create_hadoopcli_client(),\n+ ])\n+ elif configured_client == \"hadoopcli\":\n+ client_cache.client = hdfs_hadoopcli_clients.create_hadoopcli_client()\n+ else:\n+ raise Exception(\"Unknown hdfs client \" + configured_client)\n+ return client_cache.client\n \n \n def _with_ac(method_name):\n", "issue": "HdfsTarget commands fail when many targets are instantiated\nI've recently added an existence check to a large mapreduce task for which some input files may be missing. With a large enough set of inputs, it will fail every time. I've simplified it to the following code:\n\n``` python\nfrom luigi.contrib.hdfs import HdfsTarget\n\nmany_targets = [HdfsTarget('/') for _ in range(2000)]\nall(target.exists() for target in many_targets)\n```\n\nThis will break if I use any past 1000 or so. Here the client uses snakebite. For a more direct triggering, we can also do\n\n``` python\nfrom snakebite.client import AutoConfigClient\n\nclients = [AutoConfigClient() for _ in range(10000)]\nall(client.test('/', exists=True) for client in clients)\n```\n\nIn either case, the bug goes away if I use a generator expression rather than a list comprehension. The problem is that when I'm dealing with objects coming out of luigi calls like input_hadoop, it's too late for me to decide between lists and iterators. I can code around this by instantiating all of my HdfsTargets with the same client, but I'm not sure this is safe. 
It could also be fixed in luigi if we had get_autoconfig_client return the same object each time. Is there any reason this wouldn't work?\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2012-2015 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"\nThe implementations of the hdfs clients. The hadoop cli client and the\nsnakebite client.\n\"\"\"\n\n\nfrom luigi.contrib.hdfs import config as hdfs_config\nfrom luigi.contrib.hdfs import snakebite_client as hdfs_snakebite_client\nfrom luigi.contrib.hdfs import webhdfs_client as hdfs_webhdfs_client\nfrom luigi.contrib.hdfs import hadoopcli_clients as hdfs_hadoopcli_clients\nimport luigi.contrib.target\nimport logging\n\nlogger = logging.getLogger('luigi-interface')\n\n\ndef get_autoconfig_client():\n \"\"\"\n Creates the client as specified in the `luigi.cfg` configuration.\n \"\"\"\n configured_client = hdfs_config.get_configured_hdfs_client()\n if configured_client == \"webhdfs\":\n return hdfs_webhdfs_client.WebHdfsClient()\n if configured_client == \"snakebite\":\n return hdfs_snakebite_client.SnakebiteHdfsClient()\n if configured_client == \"snakebite_with_hadoopcli_fallback\":\n return luigi.contrib.target.CascadingClient([hdfs_snakebite_client.SnakebiteHdfsClient(),\n hdfs_hadoopcli_clients.create_hadoopcli_client()])\n if configured_client == \"hadoopcli\":\n return hdfs_hadoopcli_clients.create_hadoopcli_client()\n raise Exception(\"Unknown hdfs client \" + configured_client)\n\n\ndef _with_ac(method_name):\n def result(*args, **kwargs):\n return getattr(get_autoconfig_client(), method_name)(*args, **kwargs)\n return result\n\nexists = _with_ac('exists')\nrename = _with_ac('rename')\nremove = _with_ac('remove')\nmkdir = _with_ac('mkdir')\nlistdir = _with_ac('listdir')\n", "path": "luigi/contrib/hdfs/clients.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2012-2015 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"\nThe implementations of the hdfs clients. 
The hadoop cli client and the\nsnakebite client.\n\"\"\"\nimport logging\nimport threading\n\nfrom luigi.contrib.hdfs import config as hdfs_config\nfrom luigi.contrib.hdfs import snakebite_client as hdfs_snakebite_client\nfrom luigi.contrib.hdfs import webhdfs_client as hdfs_webhdfs_client\nfrom luigi.contrib.hdfs import hadoopcli_clients as hdfs_hadoopcli_clients\nimport luigi.contrib.target\n\nlogger = logging.getLogger('luigi-interface')\n\n_AUTOCONFIG_CLIENT = threading.local()\n\n\ndef get_autoconfig_client(client_cache=_AUTOCONFIG_CLIENT):\n \"\"\"\n Creates the client as specified in the `luigi.cfg` configuration.\n \"\"\"\n try:\n return client_cache.client\n except AttributeError:\n configured_client = hdfs_config.get_configured_hdfs_client()\n if configured_client == \"webhdfs\":\n client_cache.client = hdfs_webhdfs_client.WebHdfsClient()\n elif configured_client == \"snakebite\":\n client_cache.client = hdfs_snakebite_client.SnakebiteHdfsClient()\n elif configured_client == \"snakebite_with_hadoopcli_fallback\":\n client_cache.client = luigi.contrib.target.CascadingClient([\n hdfs_snakebite_client.SnakebiteHdfsClient(),\n hdfs_hadoopcli_clients.create_hadoopcli_client(),\n ])\n elif configured_client == \"hadoopcli\":\n client_cache.client = hdfs_hadoopcli_clients.create_hadoopcli_client()\n else:\n raise Exception(\"Unknown hdfs client \" + configured_client)\n return client_cache.client\n\n\ndef _with_ac(method_name):\n def result(*args, **kwargs):\n return getattr(get_autoconfig_client(), method_name)(*args, **kwargs)\n return result\n\nexists = _with_ac('exists')\nrename = _with_ac('rename')\nremove = _with_ac('remove')\nmkdir = _with_ac('mkdir')\nlistdir = _with_ac('listdir')\n", "path": "luigi/contrib/hdfs/clients.py"}]} | 1,182 | 606 |
gh_patches_debug_2493 | rasdani/github-patches | git_diff | freedomofpress__securedrop-359 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
securedrop_init script in Tails doesn't work right if you run it twice
It appends torrc-additions to torrc multiple times, and it should just append it once.
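For illustration only — not necessarily the project's actual fix — one way to make the append idempotent, reusing the `path_torrc` and `path_torrc_additions` variables from the script below, is to skip the write when the additions are already present:

```python
# Sketch: only append when the additions block is not already in torrc.
torrc_additions = open(path_torrc_additions).read()
torrc = open(path_torrc).read()
if torrc_additions not in torrc:
    open(path_torrc, 'a').write(torrc_additions)
```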
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tails_files/securedrop_init.py`
Content:
```
1 #!/usr/bin/env python
2
3 import os, sys, subprocess
4
5 if __name__ == '__main__':
6 # check for root
7 if not os.geteuid()==0:
8 sys.exit('You need to run this as root')
9
10 # paths
11 path_torrc_additions = '/home/amnesia/Persistent/.securedrop/torrc_additions'
12 path_torrc_backup = '/etc/tor/torrc.bak'
13 path_torrc = '/etc/tor/torrc'
14
15 # load torrc_additions
16 if os.path.isfile(path_torrc_additions):
17 torrc_additions = open(path_torrc_additions).read()
18 else:
19 sys.exit('Error opening {0} for reading'.format(path_torrc_additions));
20
21 # load torrc
22 if os.path.isfile(path_torrc_backup):
23 torrc = open(path_torrc_backup).read()
24 else:
25 if os.path.isfile(path_torrc):
26 torrc = open(path_torrc).read()
27 else:
28 sys.exit('Error opening {0} for reading'.format(path_torrc));
29
30 # save a backup
31 open(path_torrc_backup, 'w').write(torrc)
32
33 # append the additions
34 open(path_torrc, 'a').write(torrc_additions)
35
36 # reload tor
37 subprocess.call(['/usr/sbin/service', 'tor', 'reload'])
38
39 # success
40 subprocess.call(['/usr/bin/sudo', '-u', 'amnesia', '/usr/bin/notify-send', 'Updated torrc', 'You can now connect to your SecureDrop document interface']);
41
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tails_files/securedrop_init.py b/tails_files/securedrop_init.py
--- a/tails_files/securedrop_init.py
+++ b/tails_files/securedrop_init.py
@@ -31,7 +31,7 @@
open(path_torrc_backup, 'w').write(torrc)
# append the additions
- open(path_torrc, 'a').write(torrc_additions)
+ open(path_torrc, 'w').write(torrc+torrc_additions)
# reload tor
subprocess.call(['/usr/sbin/service', 'tor', 'reload'])
| {"golden_diff": "diff --git a/tails_files/securedrop_init.py b/tails_files/securedrop_init.py\n--- a/tails_files/securedrop_init.py\n+++ b/tails_files/securedrop_init.py\n@@ -31,7 +31,7 @@\n open(path_torrc_backup, 'w').write(torrc)\n \n # append the additions\n- open(path_torrc, 'a').write(torrc_additions)\n+ open(path_torrc, 'w').write(torrc+torrc_additions)\n \n # reload tor\n subprocess.call(['/usr/sbin/service', 'tor', 'reload'])\n", "issue": "securedrop_init script in Tails doesn't work right if you run it twice\nIt appends torrc-additions to torrc multiple times, and it should just append it once.\n\nsecuredrop_init script in Tails doesn't work right if you run it twice\nIt appends torrc-additions to torrc multiple times, and it should just append it once.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os, sys, subprocess\n\nif __name__ == '__main__':\n # check for root\n if not os.geteuid()==0:\n sys.exit('You need to run this as root')\n\n # paths\n path_torrc_additions = '/home/amnesia/Persistent/.securedrop/torrc_additions'\n path_torrc_backup = '/etc/tor/torrc.bak'\n path_torrc = '/etc/tor/torrc'\n\n # load torrc_additions\n if os.path.isfile(path_torrc_additions):\n torrc_additions = open(path_torrc_additions).read()\n else:\n sys.exit('Error opening {0} for reading'.format(path_torrc_additions));\n\n # load torrc\n if os.path.isfile(path_torrc_backup):\n torrc = open(path_torrc_backup).read()\n else:\n if os.path.isfile(path_torrc):\n torrc = open(path_torrc).read()\n else:\n sys.exit('Error opening {0} for reading'.format(path_torrc));\n\n # save a backup\n open(path_torrc_backup, 'w').write(torrc)\n\n # append the additions\n open(path_torrc, 'a').write(torrc_additions)\n\n # reload tor\n subprocess.call(['/usr/sbin/service', 'tor', 'reload'])\n\n # success\n subprocess.call(['/usr/bin/sudo', '-u', 'amnesia', '/usr/bin/notify-send', 'Updated torrc', 'You can now connect to your SecureDrop document interface']);\n\n", "path": "tails_files/securedrop_init.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nimport os, sys, subprocess\n\nif __name__ == '__main__':\n # check for root\n if not os.geteuid()==0:\n sys.exit('You need to run this as root')\n\n # paths\n path_torrc_additions = '/home/amnesia/Persistent/.securedrop/torrc_additions'\n path_torrc_backup = '/etc/tor/torrc.bak'\n path_torrc = '/etc/tor/torrc'\n\n # load torrc_additions\n if os.path.isfile(path_torrc_additions):\n torrc_additions = open(path_torrc_additions).read()\n else:\n sys.exit('Error opening {0} for reading'.format(path_torrc_additions));\n\n # load torrc\n if os.path.isfile(path_torrc_backup):\n torrc = open(path_torrc_backup).read()\n else:\n if os.path.isfile(path_torrc):\n torrc = open(path_torrc).read()\n else:\n sys.exit('Error opening {0} for reading'.format(path_torrc));\n\n # save a backup\n open(path_torrc_backup, 'w').write(torrc)\n\n # append the additions\n open(path_torrc, 'w').write(torrc+torrc_additions)\n\n # reload tor\n subprocess.call(['/usr/sbin/service', 'tor', 'reload'])\n\n # success\n subprocess.call(['/usr/bin/sudo', '-u', 'amnesia', '/usr/bin/notify-send', 'Updated torrc', 'You can now connect to your SecureDrop document interface']);\n\n", "path": "tails_files/securedrop_init.py"}]} | 773 | 138 |
gh_patches_debug_9703 | rasdani/github-patches | git_diff | searx__searx-487 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'module' object has no attribute 'old_where'
I updated my searx instance today, and got the following error:
```
Traceback (most recent call last):
File "/usr/local/searx/searx/__init__.py", line 55, in <module>
environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()
AttributeError: 'module' object has no attribute 'old_where'
```
I updated the dependencies with `pip install --upgrade -r requirements.txt` before running searx.
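One defensive pattern (a sketch, assuming only that `certifi.old_where` may be absent in newer certifi releases) is to probe for the attribute before calling it:

```python
import certifi

# Fall back to the standard bundle when `old_where` no longer exists.
ca_bundle = certifi.old_where() if hasattr(certifi, 'old_where') else certifi.where()
```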
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/__init__.py`
Content:
```
1 '''
2 searx is free software: you can redistribute it and/or modify
3 it under the terms of the GNU Affero General Public License as published by
4 the Free Software Foundation, either version 3 of the License, or
5 (at your option) any later version.
6
7 searx is distributed in the hope that it will be useful,
8 but WITHOUT ANY WARRANTY; without even the implied warranty of
9 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10 GNU Affero General Public License for more details.
11
12 You should have received a copy of the GNU Affero General Public License
13 along with searx. If not, see < http://www.gnu.org/licenses/ >.
14
15 (C) 2013- by Adam Tauber, <[email protected]>
16 '''
17
18 import certifi
19 import logging
20 from os import environ
21 from os.path import realpath, dirname, join, abspath
22 from ssl import OPENSSL_VERSION_INFO, OPENSSL_VERSION
23 try:
24 from yaml import load
25 except:
26 from sys import exit, stderr
27 stderr.write('[E] install pyyaml\n')
28 exit(2)
29
30 searx_dir = abspath(dirname(__file__))
31 engine_dir = dirname(realpath(__file__))
32
33 # if possible set path to settings using the
34 # enviroment variable SEARX_SETTINGS_PATH
35 if 'SEARX_SETTINGS_PATH' in environ:
36 settings_path = environ['SEARX_SETTINGS_PATH']
37 # otherwise using default path
38 else:
39 settings_path = join(searx_dir, 'settings.yml')
40
41 # load settings
42 with open(settings_path) as settings_yaml:
43 settings = load(settings_yaml)
44
45 if settings.get('general', {}).get('debug'):
46 logging.basicConfig(level=logging.DEBUG)
47 else:
48 logging.basicConfig(level=logging.WARNING)
49
50 logger = logging.getLogger('searx')
51
52 # Workaround for openssl versions <1.0.2
53 # https://github.com/certifi/python-certifi/issues/26
54 if OPENSSL_VERSION_INFO[0:3] < (1, 0, 2):
55 environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()
56 logger.warning('You are using an old openssl version({0}), please upgrade above 1.0.2!'.format(OPENSSL_VERSION))
57
58 logger.info('Initialisation done')
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/__init__.py b/searx/__init__.py
--- a/searx/__init__.py
+++ b/searx/__init__.py
@@ -52,7 +52,8 @@
# Workaround for openssl versions <1.0.2
# https://github.com/certifi/python-certifi/issues/26
if OPENSSL_VERSION_INFO[0:3] < (1, 0, 2):
- environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()
+ if hasattr(certifi, 'old_where'):
+ environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()
logger.warning('You are using an old openssl version({0}), please upgrade above 1.0.2!'.format(OPENSSL_VERSION))
logger.info('Initialisation done')
| {"golden_diff": "diff --git a/searx/__init__.py b/searx/__init__.py\n--- a/searx/__init__.py\n+++ b/searx/__init__.py\n@@ -52,7 +52,8 @@\n # Workaround for openssl versions <1.0.2\n # https://github.com/certifi/python-certifi/issues/26\n if OPENSSL_VERSION_INFO[0:3] < (1, 0, 2):\n- environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()\n+ if hasattr(certifi, 'old_where'):\n+ environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()\n logger.warning('You are using an old openssl version({0}), please upgrade above 1.0.2!'.format(OPENSSL_VERSION))\n \n logger.info('Initialisation done')\n", "issue": "AttributeError: 'module' object has no attribute 'old_where'\nI updated my searx instance today, and got the following error:\n\n```\nTraceback (most recent call last):\n File \"/usr/local/searx/searx/__init__.py\", line 55, in <module>\n environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()\nAttributeError: 'module' object has no attribute 'old_where'\n```\n\nI updated the dependencies with `pip install --upgrade -r requirements.txt` before running searx.\n\n", "before_files": [{"content": "'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2013- by Adam Tauber, <[email protected]>\n'''\n\nimport certifi\nimport logging\nfrom os import environ\nfrom os.path import realpath, dirname, join, abspath\nfrom ssl import OPENSSL_VERSION_INFO, OPENSSL_VERSION\ntry:\n from yaml import load\nexcept:\n from sys import exit, stderr\n stderr.write('[E] install pyyaml\\n')\n exit(2)\n\nsearx_dir = abspath(dirname(__file__))\nengine_dir = dirname(realpath(__file__))\n\n# if possible set path to settings using the\n# enviroment variable SEARX_SETTINGS_PATH\nif 'SEARX_SETTINGS_PATH' in environ:\n settings_path = environ['SEARX_SETTINGS_PATH']\n# otherwise using default path\nelse:\n settings_path = join(searx_dir, 'settings.yml')\n\n# load settings\nwith open(settings_path) as settings_yaml:\n settings = load(settings_yaml)\n\nif settings.get('general', {}).get('debug'):\n logging.basicConfig(level=logging.DEBUG)\nelse:\n logging.basicConfig(level=logging.WARNING)\n\nlogger = logging.getLogger('searx')\n\n# Workaround for openssl versions <1.0.2\n# https://github.com/certifi/python-certifi/issues/26\nif OPENSSL_VERSION_INFO[0:3] < (1, 0, 2):\n environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()\n logger.warning('You are using an old openssl version({0}), please upgrade above 1.0.2!'.format(OPENSSL_VERSION))\n\nlogger.info('Initialisation done')\n", "path": "searx/__init__.py"}], "after_files": [{"content": "'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2013- by Adam Tauber, <[email protected]>\n'''\n\nimport certifi\nimport logging\nfrom os import environ\nfrom os.path import realpath, dirname, join, abspath\nfrom ssl import OPENSSL_VERSION_INFO, OPENSSL_VERSION\ntry:\n from yaml import load\nexcept:\n from sys import exit, stderr\n stderr.write('[E] install pyyaml\\n')\n exit(2)\n\nsearx_dir = abspath(dirname(__file__))\nengine_dir = dirname(realpath(__file__))\n\n# if possible set path to settings using the\n# enviroment variable SEARX_SETTINGS_PATH\nif 'SEARX_SETTINGS_PATH' in environ:\n settings_path = environ['SEARX_SETTINGS_PATH']\n# otherwise using default path\nelse:\n settings_path = join(searx_dir, 'settings.yml')\n\n# load settings\nwith open(settings_path) as settings_yaml:\n settings = load(settings_yaml)\n\nif settings.get('general', {}).get('debug'):\n logging.basicConfig(level=logging.DEBUG)\nelse:\n logging.basicConfig(level=logging.WARNING)\n\nlogger = logging.getLogger('searx')\n\n# Workaround for openssl versions <1.0.2\n# https://github.com/certifi/python-certifi/issues/26\nif OPENSSL_VERSION_INFO[0:3] < (1, 0, 2):\n if hasattr(certifi, 'old_where'):\n environ['REQUESTS_CA_BUNDLE'] = certifi.old_where()\n logger.warning('You are using an old openssl version({0}), please upgrade above 1.0.2!'.format(OPENSSL_VERSION))\n\nlogger.info('Initialisation done')\n", "path": "searx/__init__.py"}]} | 971 | 183 |
gh_patches_debug_42563 | rasdani/github-patches | git_diff | litestar-org__litestar-1474 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
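A minimal reproduction sketch of the validation failure (the package name is hypothetical; `files()` may return a zip-backed traversable rather than an on-disk directory):

```python
from importlib_resources import files
from pydantic import DirectoryPath, parse_obj_as

static_dir = files("my_package") / "static"  # may live inside a zip, not on disk
parse_obj_as(DirectoryPath, static_dir)      # raises: path does not point to a directory
```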
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/constants.py`
Content:
```
1 from inspect import Signature
2
3 from pydantic.fields import Undefined
4
5 from litestar.types import Empty
6
7 DEFAULT_ALLOWED_CORS_HEADERS = {"Accept", "Accept-Language", "Content-Language", "Content-Type"}
8 DEFAULT_CHUNK_SIZE = 1024 * 128 # 128KB
9 HTTP_DISCONNECT = "http.disconnect"
10 HTTP_RESPONSE_BODY = "http.response.body"
11 HTTP_RESPONSE_START = "http.response.start"
12 ONE_MEGABYTE = 1024 * 1024
13 OPENAPI_NOT_INITIALIZED = "Litestar has not been instantiated with OpenAPIConfig"
14 REDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}
15 RESERVED_KWARGS = {"state", "headers", "cookies", "request", "socket", "data", "query", "scope", "body"}
16 SCOPE_STATE_DEPENDENCY_CACHE = "dependency_cache"
17 SCOPE_STATE_NAMESPACE = "__litestar__"
18 SCOPE_STATE_RESPONSE_COMPRESSED = "response_compressed"
19 SKIP_VALIDATION_NAMES = {"request", "socket", "scope", "receive", "send"}
20 UNDEFINED_SENTINELS = {Undefined, Signature.empty, Empty, Ellipsis}
21 WEBSOCKET_CLOSE = "websocket.close"
22 WEBSOCKET_DISCONNECT = "websocket.disconnect"
23
```
Path: `litestar/response/redirect.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, Any, Literal
4 from urllib.parse import quote
5
6 from litestar.constants import REDIRECT_STATUS_CODES
7 from litestar.enums import MediaType
8 from litestar.exceptions import ImproperlyConfiguredException
9 from litestar.response.base import Response
10 from litestar.status_codes import HTTP_307_TEMPORARY_REDIRECT
11
12 __all__ = ("RedirectResponse",)
13
14
15 if TYPE_CHECKING:
16 from litestar.background_tasks import BackgroundTask, BackgroundTasks
17 from litestar.types import ResponseCookies
18
19
20 class RedirectResponse(Response[Any]):
21 """A redirect response."""
22
23 def __init__(
24 self,
25 url: str,
26 *,
27 status_code: Literal[301, 302, 303, 307, 308] = HTTP_307_TEMPORARY_REDIRECT,
28 background: BackgroundTask | BackgroundTasks | None = None,
29 headers: dict[str, Any] | None = None,
30 cookies: ResponseCookies | None = None,
31 encoding: str = "utf-8",
32 ) -> None:
33 """Initialize the response.
34
35 Args:
36 url: A url to redirect to.
37 status_code: An HTTP status code. The status code should be one of 301, 302, 303, 307 or 308,
38 otherwise an exception will be raised.
39 background: A background task or tasks to be run after the response is sent.
40 headers: A string keyed dictionary of response headers. Header keys are insensitive.
41 cookies: A list of :class:`Cookie <.datastructures.Cookie>` instances to be set under the response
42 ``Set-Cookie`` header.
43 encoding: The encoding to be used for the response headers.
44
45 Raises:
46 ImproperlyConfiguredException: If status code is not a redirect status code.
47 """
48 if status_code not in REDIRECT_STATUS_CODES:
49 raise ImproperlyConfiguredException(
50 f"{status_code} is not a valid for this response. "
51 f"Redirect responses should have one of "
52 f"the following status codes: {', '.join([str(s) for s in REDIRECT_STATUS_CODES])}"
53 )
54 super().__init__(
55 background=background,
56 content=b"",
57 cookies=cookies,
58 headers={**(headers or {}), "location": quote(url, safe="/#%[]=:;$&()+,!?*@'~")},
59 media_type=MediaType.TEXT,
60 status_code=status_code,
61 encoding=encoding,
62 )
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/litestar/constants.py b/litestar/constants.py
--- a/litestar/constants.py
+++ b/litestar/constants.py
@@ -2,6 +2,7 @@
from pydantic.fields import Undefined
+from litestar.enums import MediaType
from litestar.types import Empty
DEFAULT_ALLOWED_CORS_HEADERS = {"Accept", "Accept-Language", "Content-Language", "Content-Type"}
@@ -12,6 +13,7 @@
ONE_MEGABYTE = 1024 * 1024
OPENAPI_NOT_INITIALIZED = "Litestar has not been instantiated with OpenAPIConfig"
REDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}
+REDIRECT_ALLOWED_MEDIA_TYPES = {MediaType.TEXT, MediaType.HTML}
RESERVED_KWARGS = {"state", "headers", "cookies", "request", "socket", "data", "query", "scope", "body"}
SCOPE_STATE_DEPENDENCY_CACHE = "dependency_cache"
SCOPE_STATE_NAMESPACE = "__litestar__"
diff --git a/litestar/response/redirect.py b/litestar/response/redirect.py
--- a/litestar/response/redirect.py
+++ b/litestar/response/redirect.py
@@ -3,7 +3,7 @@
from typing import TYPE_CHECKING, Any, Literal
from urllib.parse import quote
-from litestar.constants import REDIRECT_STATUS_CODES
+from litestar.constants import REDIRECT_ALLOWED_MEDIA_TYPES, REDIRECT_STATUS_CODES
from litestar.enums import MediaType
from litestar.exceptions import ImproperlyConfiguredException
from litestar.response.base import Response
@@ -29,6 +29,7 @@
headers: dict[str, Any] | None = None,
cookies: ResponseCookies | None = None,
encoding: str = "utf-8",
+ media_type: str | MediaType = MediaType.TEXT,
) -> None:
"""Initialize the response.
@@ -41,9 +42,11 @@
cookies: A list of :class:`Cookie <.datastructures.Cookie>` instances to be set under the response
``Set-Cookie`` header.
encoding: The encoding to be used for the response headers.
+ media_type: A value for the response ``Content-Type`` header.
+
Raises:
- ImproperlyConfiguredException: If status code is not a redirect status code.
+ ImproperlyConfiguredException: Either if status code is not a redirect status code or media type is not supported.
"""
if status_code not in REDIRECT_STATUS_CODES:
raise ImproperlyConfiguredException(
@@ -51,12 +54,18 @@
f"Redirect responses should have one of "
f"the following status codes: {', '.join([str(s) for s in REDIRECT_STATUS_CODES])}"
)
+ if media_type not in REDIRECT_ALLOWED_MEDIA_TYPES:
+ raise ImproperlyConfiguredException(
+ f"{media_type} media type is not supported yet. "
+ f"Media type should be one of "
+ f"the following values: {', '.join([str(s) for s in REDIRECT_ALLOWED_MEDIA_TYPES])}"
+ )
super().__init__(
background=background,
content=b"",
cookies=cookies,
headers={**(headers or {}), "location": quote(url, safe="/#%[]=:;$&()+,!?*@'~")},
- media_type=MediaType.TEXT,
+ media_type=media_type,
status_code=status_code,
encoding=encoding,
)
| {"golden_diff": "diff --git a/litestar/constants.py b/litestar/constants.py\n--- a/litestar/constants.py\n+++ b/litestar/constants.py\n@@ -2,6 +2,7 @@\n \n from pydantic.fields import Undefined\n \n+from litestar.enums import MediaType\n from litestar.types import Empty\n \n DEFAULT_ALLOWED_CORS_HEADERS = {\"Accept\", \"Accept-Language\", \"Content-Language\", \"Content-Type\"}\n@@ -12,6 +13,7 @@\n ONE_MEGABYTE = 1024 * 1024\n OPENAPI_NOT_INITIALIZED = \"Litestar has not been instantiated with OpenAPIConfig\"\n REDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}\n+REDIRECT_ALLOWED_MEDIA_TYPES = {MediaType.TEXT, MediaType.HTML}\n RESERVED_KWARGS = {\"state\", \"headers\", \"cookies\", \"request\", \"socket\", \"data\", \"query\", \"scope\", \"body\"}\n SCOPE_STATE_DEPENDENCY_CACHE = \"dependency_cache\"\n SCOPE_STATE_NAMESPACE = \"__litestar__\"\ndiff --git a/litestar/response/redirect.py b/litestar/response/redirect.py\n--- a/litestar/response/redirect.py\n+++ b/litestar/response/redirect.py\n@@ -3,7 +3,7 @@\n from typing import TYPE_CHECKING, Any, Literal\n from urllib.parse import quote\n \n-from litestar.constants import REDIRECT_STATUS_CODES\n+from litestar.constants import REDIRECT_ALLOWED_MEDIA_TYPES, REDIRECT_STATUS_CODES\n from litestar.enums import MediaType\n from litestar.exceptions import ImproperlyConfiguredException\n from litestar.response.base import Response\n@@ -29,6 +29,7 @@\n headers: dict[str, Any] | None = None,\n cookies: ResponseCookies | None = None,\n encoding: str = \"utf-8\",\n+ media_type: str | MediaType = MediaType.TEXT,\n ) -> None:\n \"\"\"Initialize the response.\n \n@@ -41,9 +42,11 @@\n cookies: A list of :class:`Cookie <.datastructures.Cookie>` instances to be set under the response\n ``Set-Cookie`` header.\n encoding: The encoding to be used for the response headers.\n+ media_type: A value for the response ``Content-Type`` header.\n+\n \n Raises:\n- ImproperlyConfiguredException: If status code is not a redirect status code.\n+ ImproperlyConfiguredException: Either if status code is not a redirect status code or media type is not supported.\n \"\"\"\n if status_code not in REDIRECT_STATUS_CODES:\n raise ImproperlyConfiguredException(\n@@ -51,12 +54,18 @@\n f\"Redirect responses should have one of \"\n f\"the following status codes: {', '.join([str(s) for s in REDIRECT_STATUS_CODES])}\"\n )\n+ if media_type not in REDIRECT_ALLOWED_MEDIA_TYPES:\n+ raise ImproperlyConfiguredException(\n+ f\"{media_type} media type is not supported yet. \"\n+ f\"Media type should be one of \"\n+ f\"the following values: {', '.join([str(s) for s in REDIRECT_ALLOWED_MEDIA_TYPES])}\"\n+ )\n super().__init__(\n background=background,\n content=b\"\",\n cookies=cookies,\n headers={**(headers or {}), \"location\": quote(url, safe=\"/#%[]=:;$&()+,!?*@'~\")},\n- media_type=MediaType.TEXT,\n+ media_type=media_type,\n status_code=status_code,\n encoding=encoding,\n )\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). 
I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from inspect import Signature\n\nfrom pydantic.fields import Undefined\n\nfrom litestar.types import Empty\n\nDEFAULT_ALLOWED_CORS_HEADERS = {\"Accept\", \"Accept-Language\", \"Content-Language\", \"Content-Type\"}\nDEFAULT_CHUNK_SIZE = 1024 * 128 # 128KB\nHTTP_DISCONNECT = \"http.disconnect\"\nHTTP_RESPONSE_BODY = \"http.response.body\"\nHTTP_RESPONSE_START = \"http.response.start\"\nONE_MEGABYTE = 1024 * 1024\nOPENAPI_NOT_INITIALIZED = \"Litestar has not been instantiated with OpenAPIConfig\"\nREDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}\nRESERVED_KWARGS = {\"state\", \"headers\", \"cookies\", \"request\", \"socket\", \"data\", \"query\", \"scope\", \"body\"}\nSCOPE_STATE_DEPENDENCY_CACHE = \"dependency_cache\"\nSCOPE_STATE_NAMESPACE = \"__litestar__\"\nSCOPE_STATE_RESPONSE_COMPRESSED = \"response_compressed\"\nSKIP_VALIDATION_NAMES = {\"request\", \"socket\", \"scope\", \"receive\", \"send\"}\nUNDEFINED_SENTINELS = {Undefined, Signature.empty, Empty, Ellipsis}\nWEBSOCKET_CLOSE = \"websocket.close\"\nWEBSOCKET_DISCONNECT = \"websocket.disconnect\"\n", "path": "litestar/constants.py"}, {"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Literal\nfrom urllib.parse import quote\n\nfrom litestar.constants import REDIRECT_STATUS_CODES\nfrom litestar.enums import MediaType\nfrom litestar.exceptions import ImproperlyConfiguredException\nfrom litestar.response.base import Response\nfrom litestar.status_codes import HTTP_307_TEMPORARY_REDIRECT\n\n__all__ = (\"RedirectResponse\",)\n\n\nif TYPE_CHECKING:\n from litestar.background_tasks import BackgroundTask, BackgroundTasks\n from litestar.types import ResponseCookies\n\n\nclass RedirectResponse(Response[Any]):\n \"\"\"A redirect response.\"\"\"\n\n def __init__(\n self,\n url: str,\n *,\n status_code: Literal[301, 302, 303, 307, 308] = HTTP_307_TEMPORARY_REDIRECT,\n background: BackgroundTask | BackgroundTasks | None = None,\n headers: dict[str, Any] | None = None,\n cookies: ResponseCookies | None = None,\n encoding: str = \"utf-8\",\n ) -> None:\n \"\"\"Initialize the response.\n\n Args:\n url: A url to redirect to.\n status_code: An HTTP status code. The status code should be one of 301, 302, 303, 307 or 308,\n otherwise an exception will be raised.\n background: A background task or tasks to be run after the response is sent.\n headers: A string keyed dictionary of response headers. Header keys are insensitive.\n cookies: A list of :class:`Cookie <.datastructures.Cookie>` instances to be set under the response\n ``Set-Cookie`` header.\n encoding: The encoding to be used for the response headers.\n\n Raises:\n ImproperlyConfiguredException: If status code is not a redirect status code.\n \"\"\"\n if status_code not in REDIRECT_STATUS_CODES:\n raise ImproperlyConfiguredException(\n f\"{status_code} is not a valid for this response. 
\"\n f\"Redirect responses should have one of \"\n f\"the following status codes: {', '.join([str(s) for s in REDIRECT_STATUS_CODES])}\"\n )\n super().__init__(\n background=background,\n content=b\"\",\n cookies=cookies,\n headers={**(headers or {}), \"location\": quote(url, safe=\"/#%[]=:;$&()+,!?*@'~\")},\n media_type=MediaType.TEXT,\n status_code=status_code,\n encoding=encoding,\n )\n", "path": "litestar/response/redirect.py"}], "after_files": [{"content": "from inspect import Signature\n\nfrom pydantic.fields import Undefined\n\nfrom litestar.enums import MediaType\nfrom litestar.types import Empty\n\nDEFAULT_ALLOWED_CORS_HEADERS = {\"Accept\", \"Accept-Language\", \"Content-Language\", \"Content-Type\"}\nDEFAULT_CHUNK_SIZE = 1024 * 128 # 128KB\nHTTP_DISCONNECT = \"http.disconnect\"\nHTTP_RESPONSE_BODY = \"http.response.body\"\nHTTP_RESPONSE_START = \"http.response.start\"\nONE_MEGABYTE = 1024 * 1024\nOPENAPI_NOT_INITIALIZED = \"Litestar has not been instantiated with OpenAPIConfig\"\nREDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}\nREDIRECT_ALLOWED_MEDIA_TYPES = {MediaType.TEXT, MediaType.HTML}\nRESERVED_KWARGS = {\"state\", \"headers\", \"cookies\", \"request\", \"socket\", \"data\", \"query\", \"scope\", \"body\"}\nSCOPE_STATE_DEPENDENCY_CACHE = \"dependency_cache\"\nSCOPE_STATE_NAMESPACE = \"__litestar__\"\nSCOPE_STATE_RESPONSE_COMPRESSED = \"response_compressed\"\nSKIP_VALIDATION_NAMES = {\"request\", \"socket\", \"scope\", \"receive\", \"send\"}\nUNDEFINED_SENTINELS = {Undefined, Signature.empty, Empty, Ellipsis}\nWEBSOCKET_CLOSE = \"websocket.close\"\nWEBSOCKET_DISCONNECT = \"websocket.disconnect\"\n", "path": "litestar/constants.py"}, {"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Literal\nfrom urllib.parse import quote\n\nfrom litestar.constants import REDIRECT_ALLOWED_MEDIA_TYPES, REDIRECT_STATUS_CODES\nfrom litestar.enums import MediaType\nfrom litestar.exceptions import ImproperlyConfiguredException\nfrom litestar.response.base import Response\nfrom litestar.status_codes import HTTP_307_TEMPORARY_REDIRECT\n\n__all__ = (\"RedirectResponse\",)\n\n\nif TYPE_CHECKING:\n from litestar.background_tasks import BackgroundTask, BackgroundTasks\n from litestar.types import ResponseCookies\n\n\nclass RedirectResponse(Response[Any]):\n \"\"\"A redirect response.\"\"\"\n\n def __init__(\n self,\n url: str,\n *,\n status_code: Literal[301, 302, 303, 307, 308] = HTTP_307_TEMPORARY_REDIRECT,\n background: BackgroundTask | BackgroundTasks | None = None,\n headers: dict[str, Any] | None = None,\n cookies: ResponseCookies | None = None,\n encoding: str = \"utf-8\",\n media_type: str | MediaType = MediaType.TEXT,\n ) -> None:\n \"\"\"Initialize the response.\n\n Args:\n url: A url to redirect to.\n status_code: An HTTP status code. The status code should be one of 301, 302, 303, 307 or 308,\n otherwise an exception will be raised.\n background: A background task or tasks to be run after the response is sent.\n headers: A string keyed dictionary of response headers. 
Header keys are insensitive.\n cookies: A list of :class:`Cookie <.datastructures.Cookie>` instances to be set under the response\n ``Set-Cookie`` header.\n encoding: The encoding to be used for the response headers.\n media_type: A value for the response ``Content-Type`` header.\n\n\n Raises:\n ImproperlyConfiguredException: Either if status code is not a redirect status code or media type is not supported.\n \"\"\"\n if status_code not in REDIRECT_STATUS_CODES:\n raise ImproperlyConfiguredException(\n f\"{status_code} is not a valid for this response. \"\n f\"Redirect responses should have one of \"\n f\"the following status codes: {', '.join([str(s) for s in REDIRECT_STATUS_CODES])}\"\n )\n if media_type not in REDIRECT_ALLOWED_MEDIA_TYPES:\n raise ImproperlyConfiguredException(\n f\"{media_type} media type is not supported yet. \"\n f\"Media type should be one of \"\n f\"the following values: {', '.join([str(s) for s in REDIRECT_ALLOWED_MEDIA_TYPES])}\"\n )\n super().__init__(\n background=background,\n content=b\"\",\n cookies=cookies,\n headers={**(headers or {}), \"location\": quote(url, safe=\"/#%[]=:;$&()+,!?*@'~\")},\n media_type=media_type,\n status_code=status_code,\n encoding=encoding,\n )\n", "path": "litestar/response/redirect.py"}]} | 1,423 | 774 |
gh_patches_debug_7722 | rasdani/github-patches | git_diff | googleapis__python-bigquery-624 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
google.auth.exceptions.TransportError is not being retried
Hi,
Recently I faced an error; could you please consider treating this exception as retryable as well? I have hit this error in one of our production systems.
https://github.com/googleapis/python-storage/issues/414
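For context, a sketch of what treating the exception as retryable could look like — extending the library's tuple of unstructured retryable types (shown in the file below) with the auth error:

```python
from google.api_core import exceptions
from google.auth import exceptions as auth_exceptions
import requests.exceptions

_UNSTRUCTURED_RETRYABLE_TYPES = (
    ConnectionError,
    exceptions.TooManyRequests,
    exceptions.InternalServerError,
    exceptions.BadGateway,
    requests.exceptions.ConnectionError,
    auth_exceptions.TransportError,  # the addition requested here
)
```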
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `google/cloud/bigquery/retry.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from google.api_core import exceptions
16 from google.api_core import retry
17 import requests.exceptions
18
19
20 _RETRYABLE_REASONS = frozenset(
21 ["rateLimitExceeded", "backendError", "internalError", "badGateway"]
22 )
23
24 _UNSTRUCTURED_RETRYABLE_TYPES = (
25 ConnectionError,
26 exceptions.TooManyRequests,
27 exceptions.InternalServerError,
28 exceptions.BadGateway,
29 requests.exceptions.ConnectionError,
30 )
31
32
33 def _should_retry(exc):
34 """Predicate for determining when to retry.
35
36 We retry if and only if the 'reason' is 'backendError'
37 or 'rateLimitExceeded'.
38 """
39 if not hasattr(exc, "errors") or len(exc.errors) == 0:
40 # Check for unstructured error returns, e.g. from GFE
41 return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)
42
43 reason = exc.errors[0]["reason"]
44 return reason in _RETRYABLE_REASONS
45
46
47 DEFAULT_RETRY = retry.Retry(predicate=_should_retry)
48 """The default retry object.
49
50 Any method with a ``retry`` parameter will be retried automatically,
51 with reasonable defaults. To disable retry, pass ``retry=None``.
52 To modify the default retry behavior, call a ``with_XXX`` method
53 on ``DEFAULT_RETRY``. For example, to change the deadline to 30 seconds,
54 pass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.
55 """
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/google/cloud/bigquery/retry.py b/google/cloud/bigquery/retry.py
--- a/google/cloud/bigquery/retry.py
+++ b/google/cloud/bigquery/retry.py
@@ -14,6 +14,7 @@
from google.api_core import exceptions
from google.api_core import retry
+from google.auth import exceptions as auth_exceptions
import requests.exceptions
@@ -27,6 +28,7 @@
exceptions.InternalServerError,
exceptions.BadGateway,
requests.exceptions.ConnectionError,
+ auth_exceptions.TransportError,
)
| {"golden_diff": "diff --git a/google/cloud/bigquery/retry.py b/google/cloud/bigquery/retry.py\n--- a/google/cloud/bigquery/retry.py\n+++ b/google/cloud/bigquery/retry.py\n@@ -14,6 +14,7 @@\n \n from google.api_core import exceptions\n from google.api_core import retry\n+from google.auth import exceptions as auth_exceptions\n import requests.exceptions\n \n \n@@ -27,6 +28,7 @@\n exceptions.InternalServerError,\n exceptions.BadGateway,\n requests.exceptions.ConnectionError,\n+ auth_exceptions.TransportError,\n )\n", "issue": "google.auth.exceptions.TransportError is not being retried\nHi, \r\n\r\nRecently i faced an error can you please consider using this exception as a retry one also. Since i have faced this error in one of our production system\r\n\r\nhttps://github.com/googleapis/python-storage/issues/414\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom google.api_core import exceptions\nfrom google.api_core import retry\nimport requests.exceptions\n\n\n_RETRYABLE_REASONS = frozenset(\n [\"rateLimitExceeded\", \"backendError\", \"internalError\", \"badGateway\"]\n)\n\n_UNSTRUCTURED_RETRYABLE_TYPES = (\n ConnectionError,\n exceptions.TooManyRequests,\n exceptions.InternalServerError,\n exceptions.BadGateway,\n requests.exceptions.ConnectionError,\n)\n\n\ndef _should_retry(exc):\n \"\"\"Predicate for determining when to retry.\n\n We retry if and only if the 'reason' is 'backendError'\n or 'rateLimitExceeded'.\n \"\"\"\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n # Check for unstructured error returns, e.g. from GFE\n return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)\n\n reason = exc.errors[0][\"reason\"]\n return reason in _RETRYABLE_REASONS\n\n\nDEFAULT_RETRY = retry.Retry(predicate=_should_retry)\n\"\"\"The default retry object.\n\nAny method with a ``retry`` parameter will be retried automatically,\nwith reasonable defaults. To disable retry, pass ``retry=None``.\nTo modify the default retry behavior, call a ``with_XXX`` method\non ``DEFAULT_RETRY``. 
For example, to change the deadline to 30 seconds,\npass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.\n\"\"\"\n", "path": "google/cloud/bigquery/retry.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom google.api_core import exceptions\nfrom google.api_core import retry\nfrom google.auth import exceptions as auth_exceptions\nimport requests.exceptions\n\n\n_RETRYABLE_REASONS = frozenset(\n [\"rateLimitExceeded\", \"backendError\", \"internalError\", \"badGateway\"]\n)\n\n_UNSTRUCTURED_RETRYABLE_TYPES = (\n ConnectionError,\n exceptions.TooManyRequests,\n exceptions.InternalServerError,\n exceptions.BadGateway,\n requests.exceptions.ConnectionError,\n auth_exceptions.TransportError,\n)\n\n\ndef _should_retry(exc):\n \"\"\"Predicate for determining when to retry.\n\n We retry if and only if the 'reason' is 'backendError'\n or 'rateLimitExceeded'.\n \"\"\"\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n # Check for unstructured error returns, e.g. from GFE\n return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)\n\n reason = exc.errors[0][\"reason\"]\n return reason in _RETRYABLE_REASONS\n\n\nDEFAULT_RETRY = retry.Retry(predicate=_should_retry)\n\"\"\"The default retry object.\n\nAny method with a ``retry`` parameter will be retried automatically,\nwith reasonable defaults. To disable retry, pass ``retry=None``.\nTo modify the default retry behavior, call a ``with_XXX`` method\non ``DEFAULT_RETRY``. For example, to change the deadline to 30 seconds,\npass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.\n\"\"\"\n", "path": "google/cloud/bigquery/retry.py"}]} | 861 | 118 |
gh_patches_debug_5757 | rasdani/github-patches | git_diff | edgedb__edgedb-7149 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ALTER MODULE foo RENAME TO bar gives ISE
We should produce a parse error or an unimplemented message.
We could actually support it, but it might actually be kind of hairy to do, since in the data model modules really don't *do* anything, they *just* lay claim to a name. (And DDL isn't *that* important anyway; renaming a module might actually work in SDL...)
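One possible shape for the cheap option — turning the ISE into a clear, user-facing error — is a rename command that refuses outright (a sketch using the delta-command classes from the file below):

```python
class RenameModule(ModuleCommand, sd.RenameObject[Module]):

    def apply(self, schema, context):
        # Fail loudly instead of surfacing an internal server error.
        raise errors.SchemaError(
            'renaming modules is not supported',
            span=self.span,
        )
```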
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `edb/schema/modules.py`
Content:
```
1 #
2 # This source file is part of the EdgeDB open source project.
3 #
4 # Copyright 2008-present MagicStack Inc. and the EdgeDB authors.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17 #
18
19
20 from __future__ import annotations
21
22
23 from edb import errors
24
25 from edb.edgeql import ast as qlast
26 from edb.edgeql import qltypes
27
28 from . import annos as s_anno
29 from . import delta as sd
30 from . import name as sn
31 from . import objects as so
32 from . import schema as s_schema
33
34 RESERVED_MODULE_NAMES = {
35 'super',
36 }
37
38
39 class Module(
40 s_anno.AnnotationSubject,
41 so.Object, # Help reflection figure out the right db MRO
42 qlkind=qltypes.SchemaObjectClass.MODULE,
43 data_safe=False,
44 ):
45 # N.B: Modules are not "qualified" objects, even though they can
46 # be nested (because they might *not* be nested) and we arrange
47 # for their names to always be represented with an UnqualName.
48 pass
49
50
51 class ModuleCommandContext(sd.ObjectCommandContext[Module]):
52 pass
53
54
55 class ModuleCommand(
56 sd.ObjectCommand[Module],
57 context_class=ModuleCommandContext,
58 ):
59
60 def _validate_legal_command(
61 self,
62 schema: s_schema.Schema,
63 context: sd.CommandContext,
64 ) -> None:
65 super()._validate_legal_command(schema, context)
66
67 last = str(self.classname)
68 first = last
69 enclosing = None
70 if '::' in str(self.classname):
71 first, _, _ = str(self.classname).partition('::')
72 enclosing, _, last = str(self.classname).rpartition('::')
73 if not schema.has_module(enclosing):
74 raise errors.UnknownModuleError(
75 f'module {enclosing!r} is not in this schema')
76
77 if last in RESERVED_MODULE_NAMES:
78 raise errors.SchemaDefinitionError(
79 f"module {last!r} is a reserved module name")
80
81 if (
82 not context.stdmode and not context.testmode
83 and sn.UnqualName(first) in s_schema.STD_MODULES
84 ):
85 raise errors.SchemaDefinitionError(
86 f'cannot {self._delta_action} {self.get_verbosename()}: '
87 f'module {first} is read-only',
88 span=self.span)
89
90
91 class CreateModule(ModuleCommand, sd.CreateObject[Module]):
92 astnode = qlast.CreateModule
93
94
95 class AlterModule(ModuleCommand, sd.AlterObject[Module]):
96 astnode = qlast.AlterModule
97
98
99 class DeleteModule(ModuleCommand, sd.DeleteObject[Module]):
100 astnode = qlast.DropModule
101
102 def _validate_legal_command(
103 self,
104 schema: s_schema.Schema,
105 context: sd.CommandContext,
106 ) -> None:
107 super()._validate_legal_command(schema, context)
108
109 # For now, we disallow deleting non-empty modules.
110
111 # Modules aren't actually stored with any direct linkage
112 # to the objects in them, so explicitly search for objects
113 # in the module (excluding the module itself).
114 has_objects = bool(any(schema.get_objects(
115 included_modules=[self.classname],
116 excluded_items=[self.classname],
117 )))
118
119 if has_objects:
120 vn = self.scls.get_verbosename(schema)
121 raise errors.SchemaError(
122 f'cannot drop {vn} because it is not empty'
123 )
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/edb/schema/modules.py b/edb/schema/modules.py
--- a/edb/schema/modules.py
+++ b/edb/schema/modules.py
@@ -96,6 +96,19 @@
astnode = qlast.AlterModule
+class RenameModule(ModuleCommand, sd.RenameObject[Module]):
+
+ def apply(
+ self,
+ schema: s_schema.Schema,
+ context: sd.CommandContext,
+ ) -> s_schema.Schema:
+ raise errors.SchemaError(
+ f'renaming modules is not supported',
+ span=self.span,
+ )
+
+
class DeleteModule(ModuleCommand, sd.DeleteObject[Module]):
astnode = qlast.DropModule
| {"golden_diff": "diff --git a/edb/schema/modules.py b/edb/schema/modules.py\n--- a/edb/schema/modules.py\n+++ b/edb/schema/modules.py\n@@ -96,6 +96,19 @@\n astnode = qlast.AlterModule\n \n \n+class RenameModule(ModuleCommand, sd.RenameObject[Module]):\n+\n+ def apply(\n+ self,\n+ schema: s_schema.Schema,\n+ context: sd.CommandContext,\n+ ) -> s_schema.Schema:\n+ raise errors.SchemaError(\n+ f'renaming modules is not supported',\n+ span=self.span,\n+ )\n+\n+\n class DeleteModule(ModuleCommand, sd.DeleteObject[Module]):\n astnode = qlast.DropModule\n", "issue": "ALTER MODULE foo RENAME TO bar gives ISE\nWe should produce a parse error or an unimplemented message.\r\n\r\nWe could actually support it, but it might actually be kind of hairy to do, since in the data model modules really don't *do* anything, they *just* lay claim to a name. (And DDL isn't *that* important anyway; renaming a module might actually work in SDL...)\n", "before_files": [{"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2008-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\nfrom __future__ import annotations\n\n\nfrom edb import errors\n\nfrom edb.edgeql import ast as qlast\nfrom edb.edgeql import qltypes\n\nfrom . import annos as s_anno\nfrom . import delta as sd\nfrom . import name as sn\nfrom . import objects as so\nfrom . 
import schema as s_schema\n\nRESERVED_MODULE_NAMES = {\n 'super',\n}\n\n\nclass Module(\n s_anno.AnnotationSubject,\n so.Object, # Help reflection figure out the right db MRO\n qlkind=qltypes.SchemaObjectClass.MODULE,\n data_safe=False,\n):\n # N.B: Modules are not \"qualified\" objects, even though they can\n # be nested (because they might *not* be nested) and we arrange\n # for their names to always be represented with an UnqualName.\n pass\n\n\nclass ModuleCommandContext(sd.ObjectCommandContext[Module]):\n pass\n\n\nclass ModuleCommand(\n sd.ObjectCommand[Module],\n context_class=ModuleCommandContext,\n):\n\n def _validate_legal_command(\n self,\n schema: s_schema.Schema,\n context: sd.CommandContext,\n ) -> None:\n super()._validate_legal_command(schema, context)\n\n last = str(self.classname)\n first = last\n enclosing = None\n if '::' in str(self.classname):\n first, _, _ = str(self.classname).partition('::')\n enclosing, _, last = str(self.classname).rpartition('::')\n if not schema.has_module(enclosing):\n raise errors.UnknownModuleError(\n f'module {enclosing!r} is not in this schema')\n\n if last in RESERVED_MODULE_NAMES:\n raise errors.SchemaDefinitionError(\n f\"module {last!r} is a reserved module name\")\n\n if (\n not context.stdmode and not context.testmode\n and sn.UnqualName(first) in s_schema.STD_MODULES\n ):\n raise errors.SchemaDefinitionError(\n f'cannot {self._delta_action} {self.get_verbosename()}: '\n f'module {first} is read-only',\n span=self.span)\n\n\nclass CreateModule(ModuleCommand, sd.CreateObject[Module]):\n astnode = qlast.CreateModule\n\n\nclass AlterModule(ModuleCommand, sd.AlterObject[Module]):\n astnode = qlast.AlterModule\n\n\nclass DeleteModule(ModuleCommand, sd.DeleteObject[Module]):\n astnode = qlast.DropModule\n\n def _validate_legal_command(\n self,\n schema: s_schema.Schema,\n context: sd.CommandContext,\n ) -> None:\n super()._validate_legal_command(schema, context)\n\n # For now, we disallow deleting non-empty modules.\n\n # Modules aren't actually stored with any direct linkage\n # to the objects in them, so explicitly search for objects\n # in the module (excluding the module itself).\n has_objects = bool(any(schema.get_objects(\n included_modules=[self.classname],\n excluded_items=[self.classname],\n )))\n\n if has_objects:\n vn = self.scls.get_verbosename(schema)\n raise errors.SchemaError(\n f'cannot drop {vn} because it is not empty'\n )\n", "path": "edb/schema/modules.py"}], "after_files": [{"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2008-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\nfrom __future__ import annotations\n\n\nfrom edb import errors\n\nfrom edb.edgeql import ast as qlast\nfrom edb.edgeql import qltypes\n\nfrom . import annos as s_anno\nfrom . import delta as sd\nfrom . import name as sn\nfrom . import objects as so\nfrom . 
import schema as s_schema\n\nRESERVED_MODULE_NAMES = {\n 'super',\n}\n\n\nclass Module(\n s_anno.AnnotationSubject,\n so.Object, # Help reflection figure out the right db MRO\n qlkind=qltypes.SchemaObjectClass.MODULE,\n data_safe=False,\n):\n # N.B: Modules are not \"qualified\" objects, even though they can\n # be nested (because they might *not* be nested) and we arrange\n # for their names to always be represented with an UnqualName.\n pass\n\n\nclass ModuleCommandContext(sd.ObjectCommandContext[Module]):\n pass\n\n\nclass ModuleCommand(\n sd.ObjectCommand[Module],\n context_class=ModuleCommandContext,\n):\n\n def _validate_legal_command(\n self,\n schema: s_schema.Schema,\n context: sd.CommandContext,\n ) -> None:\n super()._validate_legal_command(schema, context)\n\n last = str(self.classname)\n first = last\n enclosing = None\n if '::' in str(self.classname):\n first, _, _ = str(self.classname).partition('::')\n enclosing, _, last = str(self.classname).rpartition('::')\n if not schema.has_module(enclosing):\n raise errors.UnknownModuleError(\n f'module {enclosing!r} is not in this schema')\n\n if last in RESERVED_MODULE_NAMES:\n raise errors.SchemaDefinitionError(\n f\"module {last!r} is a reserved module name\")\n\n if (\n not context.stdmode and not context.testmode\n and sn.UnqualName(first) in s_schema.STD_MODULES\n ):\n raise errors.SchemaDefinitionError(\n f'cannot {self._delta_action} {self.get_verbosename()}: '\n f'module {first} is read-only',\n span=self.span)\n\n\nclass CreateModule(ModuleCommand, sd.CreateObject[Module]):\n astnode = qlast.CreateModule\n\n\nclass AlterModule(ModuleCommand, sd.AlterObject[Module]):\n astnode = qlast.AlterModule\n\n\nclass RenameModule(ModuleCommand, sd.RenameObject[Module]):\n\n def apply(\n self,\n schema: s_schema.Schema,\n context: sd.CommandContext,\n ) -> s_schema.Schema:\n raise errors.SchemaError(\n f'renaming modules is not supported',\n span=self.span,\n )\n\n\nclass DeleteModule(ModuleCommand, sd.DeleteObject[Module]):\n astnode = qlast.DropModule\n\n def _validate_legal_command(\n self,\n schema: s_schema.Schema,\n context: sd.CommandContext,\n ) -> None:\n super()._validate_legal_command(schema, context)\n\n # For now, we disallow deleting non-empty modules.\n\n # Modules aren't actually stored with any direct linkage\n # to the objects in them, so explicitly search for objects\n # in the module (excluding the module itself).\n has_objects = bool(any(schema.get_objects(\n included_modules=[self.classname],\n excluded_items=[self.classname],\n )))\n\n if has_objects:\n vn = self.scls.get_verbosename(schema)\n raise errors.SchemaError(\n f'cannot drop {vn} because it is not empty'\n )\n", "path": "edb/schema/modules.py"}]} | 1,458 | 153 |
gh_patches_debug_21809 | rasdani/github-patches | git_diff | prowler-cloud__prowler-2639 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: The check 'Potential secret found in EC2 instance * User Data.' does not include the line numbers where the secrets were found
### Steps to Reproduce
The check 'Potential secret found in EC2 instance * User Data.' does not show the line numbers, whereas 'Potential secret found in variables of ECS task definition' does. Why is that?
The results of a check that does not point at the precise line are frustrating: you do not know where exactly the scanner found the secret, or how many secrets were found.
The same issue will arise if you need to troubleshoot the scanner.
### Expected behavior
The line numbers of any detected secrets are included in the finding's description.
### Actual Result with Screenshots or Logs
-
### How did you install Prowler?
Docker (docker pull toniblyx/prowler)
### Environment Resource
Fargate
### OS used
--
### Prowler version
3
### Pip version
--
### Context
_No response_
--- END ISSUE ---
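As background for the fix, detect-secrets already exposes the location data the reporter is asking for: `SecretsCollection.json()` maps each scanned filename to a list of findings, and each finding carries a `type` and a `line_number`. A minimal sketch (the filename is illustrative):

```python
from detect_secrets import SecretsCollection
from detect_secrets.settings import default_settings

secrets = SecretsCollection()
with default_settings():
    secrets.scan_file("user_data.txt")  # stand-in for the decoded User Data file

# json() returns {filename: [finding, ...]}; each finding records the
# detector type and the line the potential secret sits on.
for filename, findings in secrets.json().items():
    for finding in findings:
        print(f"{finding['type']} on line {finding['line_number']}")
```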
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py`
Content:
```
1 import os
2 import tempfile
3 import zlib
4 from base64 import b64decode
5
6 from detect_secrets import SecretsCollection
7 from detect_secrets.settings import default_settings
8
9 from prowler.lib.check.models import Check, Check_Report_AWS
10 from prowler.providers.aws.services.ec2.ec2_client import ec2_client
11
12
13 class ec2_instance_secrets_user_data(Check):
14 def execute(self):
15 findings = []
16 for instance in ec2_client.instances:
17 if instance.state != "terminated":
18 report = Check_Report_AWS(self.metadata())
19 report.region = instance.region
20 report.resource_id = instance.id
21 report.resource_arn = instance.arn
22 report.resource_tags = instance.tags
23 if instance.user_data:
24 temp_user_data_file = tempfile.NamedTemporaryFile(delete=False)
25 user_data = b64decode(instance.user_data)
26 if user_data[0:2] == b"\x1f\x8b": # GZIP magic number
27 user_data = zlib.decompress(
28 user_data, zlib.MAX_WBITS | 32
29 ).decode("utf-8")
30 else:
31 user_data = user_data.decode("utf-8")
32
33 temp_user_data_file.write(
34 bytes(user_data, encoding="raw_unicode_escape")
35 )
36 temp_user_data_file.close()
37 secrets = SecretsCollection()
38 with default_settings():
39 secrets.scan_file(temp_user_data_file.name)
40
41 if secrets.json():
42 report.status = "FAIL"
43 report.status_extended = f"Potential secret found in EC2 instance {instance.id} User Data."
44 else:
45 report.status = "PASS"
46 report.status_extended = (
47 f"No secrets found in EC2 instance {instance.id} User Data."
48 )
49
50 os.remove(temp_user_data_file.name)
51 else:
52 report.status = "PASS"
53 report.status_extended = f"No secrets found in EC2 instance {instance.id} since User Data is empty."
54
55 findings.append(report)
56
57 return findings
58
```
--- END FILES ---
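A side note on the gzip branch in the file above: EC2 User Data arrives base64-encoded and may additionally be gzip-compressed, which the code detects via the two magic bytes. The decoding logic, isolated into a sketch:

```python
import zlib
from base64 import b64decode

def decode_user_data(b64_user_data: str) -> str:
    raw = b64decode(b64_user_data)
    if raw[:2] == b"\x1f\x8b":  # gzip magic number
        # MAX_WBITS | 32 tells zlib to auto-detect the gzip header.
        return zlib.decompress(raw, zlib.MAX_WBITS | 32).decode("utf-8")
    return raw.decode("utf-8")
```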
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py b/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py
--- a/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py
+++ b/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py
@@ -38,9 +38,19 @@
with default_settings():
secrets.scan_file(temp_user_data_file.name)
- if secrets.json():
+ detect_secrets_output = secrets.json()
+ if detect_secrets_output:
+ secrets_string = ", ".join(
+ [
+ f"{secret['type']} on line {secret['line_number']}"
+ for secret in detect_secrets_output[
+ temp_user_data_file.name
+ ]
+ ]
+ )
report.status = "FAIL"
- report.status_extended = f"Potential secret found in EC2 instance {instance.id} User Data."
+ report.status_extended = f"Potential secret found in EC2 instance {instance.id} User Data -> {secrets_string}."
+
else:
report.status = "PASS"
report.status_extended = (
| {"golden_diff": "diff --git a/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py b/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py\n--- a/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py\n+++ b/prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py\n@@ -38,9 +38,19 @@\n with default_settings():\n secrets.scan_file(temp_user_data_file.name)\n \n- if secrets.json():\n+ detect_secrets_output = secrets.json()\n+ if detect_secrets_output:\n+ secrets_string = \", \".join(\n+ [\n+ f\"{secret['type']} on line {secret['line_number']}\"\n+ for secret in detect_secrets_output[\n+ temp_user_data_file.name\n+ ]\n+ ]\n+ )\n report.status = \"FAIL\"\n- report.status_extended = f\"Potential secret found in EC2 instance {instance.id} User Data.\"\n+ report.status_extended = f\"Potential secret found in EC2 instance {instance.id} User Data -> {secrets_string}.\"\n+\n else:\n report.status = \"PASS\"\n report.status_extended = (\n", "issue": "[Bug]: The check 'Potential secret found in EC2 instance * User Data.' does not include the line numbers where the secrets were found\n### Steps to Reproduce\n\nThe check 'Potential secret found in EC2 instance * User Data.' does not show the line numbers, whereas 'Potential secret found in variables of ECS task definition' does. Why is it so?\r\n\r\nThe results of check without precise pointing at the line are frustrating: you do not know where exactly the scanner found the secret and how many secrets were found.\r\n\r\nSame issue will rise if you need to troubleshoot the scanner.\n\n### Expected behavior\n\nNumbers of lines with secrets are included in issue description.\n\n### Actual Result with Screenshots or Logs\n\n-\n\n### How did you install Prowler?\n\nDocker (docker pull toniblyx/prowler)\n\n### Environment Resource\n\nFargate\n\n### OS used\n\n--\n\n### Prowler version\n\n3\n\n### Pip version\n\n--\n\n### Context\n\n_No response_\n", "before_files": [{"content": "import os\nimport tempfile\nimport zlib\nfrom base64 import b64decode\n\nfrom detect_secrets import SecretsCollection\nfrom detect_secrets.settings import default_settings\n\nfrom prowler.lib.check.models import Check, Check_Report_AWS\nfrom prowler.providers.aws.services.ec2.ec2_client import ec2_client\n\n\nclass ec2_instance_secrets_user_data(Check):\n def execute(self):\n findings = []\n for instance in ec2_client.instances:\n if instance.state != \"terminated\":\n report = Check_Report_AWS(self.metadata())\n report.region = instance.region\n report.resource_id = instance.id\n report.resource_arn = instance.arn\n report.resource_tags = instance.tags\n if instance.user_data:\n temp_user_data_file = tempfile.NamedTemporaryFile(delete=False)\n user_data = b64decode(instance.user_data)\n if user_data[0:2] == b\"\\x1f\\x8b\": # GZIP magic number\n user_data = zlib.decompress(\n user_data, zlib.MAX_WBITS | 32\n ).decode(\"utf-8\")\n else:\n user_data = user_data.decode(\"utf-8\")\n\n temp_user_data_file.write(\n bytes(user_data, encoding=\"raw_unicode_escape\")\n )\n temp_user_data_file.close()\n secrets = SecretsCollection()\n with default_settings():\n secrets.scan_file(temp_user_data_file.name)\n\n if secrets.json():\n report.status = \"FAIL\"\n report.status_extended = f\"Potential secret found in EC2 instance {instance.id} User Data.\"\n else:\n report.status = \"PASS\"\n report.status_extended = (\n 
f\"No secrets found in EC2 instance {instance.id} User Data.\"\n )\n\n os.remove(temp_user_data_file.name)\n else:\n report.status = \"PASS\"\n report.status_extended = f\"No secrets found in EC2 instance {instance.id} since User Data is empty.\"\n\n findings.append(report)\n\n return findings\n", "path": "prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py"}], "after_files": [{"content": "import os\nimport tempfile\nimport zlib\nfrom base64 import b64decode\n\nfrom detect_secrets import SecretsCollection\nfrom detect_secrets.settings import default_settings\n\nfrom prowler.lib.check.models import Check, Check_Report_AWS\nfrom prowler.providers.aws.services.ec2.ec2_client import ec2_client\n\n\nclass ec2_instance_secrets_user_data(Check):\n def execute(self):\n findings = []\n for instance in ec2_client.instances:\n if instance.state != \"terminated\":\n report = Check_Report_AWS(self.metadata())\n report.region = instance.region\n report.resource_id = instance.id\n report.resource_arn = instance.arn\n report.resource_tags = instance.tags\n if instance.user_data:\n temp_user_data_file = tempfile.NamedTemporaryFile(delete=False)\n user_data = b64decode(instance.user_data)\n if user_data[0:2] == b\"\\x1f\\x8b\": # GZIP magic number\n user_data = zlib.decompress(\n user_data, zlib.MAX_WBITS | 32\n ).decode(\"utf-8\")\n else:\n user_data = user_data.decode(\"utf-8\")\n\n temp_user_data_file.write(\n bytes(user_data, encoding=\"raw_unicode_escape\")\n )\n temp_user_data_file.close()\n secrets = SecretsCollection()\n with default_settings():\n secrets.scan_file(temp_user_data_file.name)\n\n detect_secrets_output = secrets.json()\n if detect_secrets_output:\n secrets_string = \", \".join(\n [\n f\"{secret['type']} on line {secret['line_number']}\"\n for secret in detect_secrets_output[\n temp_user_data_file.name\n ]\n ]\n )\n report.status = \"FAIL\"\n report.status_extended = f\"Potential secret found in EC2 instance {instance.id} User Data -> {secrets_string}.\"\n\n else:\n report.status = \"PASS\"\n report.status_extended = (\n f\"No secrets found in EC2 instance {instance.id} User Data.\"\n )\n\n os.remove(temp_user_data_file.name)\n else:\n report.status = \"PASS\"\n report.status_extended = f\"No secrets found in EC2 instance {instance.id} since User Data is empty.\"\n\n findings.append(report)\n\n return findings\n", "path": "prowler/providers/aws/services/ec2/ec2_instance_secrets_user_data/ec2_instance_secrets_user_data.py"}]} | 1,004 | 287 |
gh_patches_debug_32978 | rasdani/github-patches | git_diff | sunpy__sunpy-2770 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Solar Cycle Gallery example out of date
The example includes the following text
> For this example we will use the SunPy sample data, if you want the current data, delete the argument to the create function. i.e. noaa = lc.NOAAIndicesLightCurve.create()
This text is inline and therefore not checked during the build, so the error was not caught. It should be fixed, and this practice should be discouraged.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/plotting/solar_cycle_example.py`
Content:
```
1 """
2 ===============
3 The Solar Cycle
4 ===============
5
6 This example shows the current and possible next solar cycle.
7 """
8 from __future__ import print_function, division
9
10 import datetime
11 import matplotlib.pyplot as plt
12
13 import sunpy.timeseries as ts
14 from sunpy.data.sample import NOAAINDICES_TIMESERIES, NOAAPREDICT_TIMESERIES
15
16 ###############################################################################
17 # For this example we will use the SunPy sample data, if you want the current
18 # data, delete the argument to the ``create`` function. i.e.
19 # ``noaa = lc.NOAAIndicesLightCurve.create()``
20
21 noaa = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')
22 noaa_predict = ts.TimeSeries(NOAAPREDICT_TIMESERIES, source='noaapredictindices')
23
24 ###############################################################################
25 # Next lets grab the data again to create a new data structure that we will
26 # shift by 12 years to simulate the next solar cycle. We will truncate the
27 # data to only plot what is necessary.
28
29 noaa2 = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')
30 noaa2.data = noaa2.data.shift(2, freq=datetime.timedelta(days=365*12))
31 noaa2 = noaa2.truncate('2021/04/01', '2030/01/01')
32
33 ###############################################################################
34 # Finally lets plot everything together with some arbitrary range for the
35 # strength of the next solar cycle.
36
37 plt.plot(noaa.data.index, noaa.data['sunspot RI'], label='Sunspot Number')
38 plt.plot(noaa_predict.data.index, noaa_predict.data['sunspot'],
39 color='grey', label='Near-term Prediction')
40 plt.fill_between(noaa_predict.data.index, noaa_predict.data['sunspot low'],
41 noaa_predict.data['sunspot high'], alpha=0.3, color='grey')
42
43 plt.fill_between(noaa2.data.index, noaa2.data['sunspot RI smooth']*0.4,
44 noaa2.data['sunspot RI smooth']*1.3, alpha=0.3, color='grey',
45 label='Next Cycle Predict')
46 plt.ylim(0)
47 plt.text('2011-01-01', 120, 'Cycle 24', fontsize=16)
48 plt.text('2024-01-01', 120, 'Cycle 25', fontsize=16)
49 plt.ylabel('Sunspot Number')
50 plt.xlabel('Year')
51 plt.legend(loc=2, framealpha=0.5)
52 plt.show()
53
```
--- END FILES ---
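One detail in the example file worth unpacking: when `freq` is given, `DataFrame.shift(periods, freq=...)` moves the index by `periods * freq` and leaves the values untouched. A self-contained sketch with synthetic data (not the SunPy sample set):

```python
import datetime
import pandas as pd

# Synthetic monthly index standing in for the NOAA indices.
idx = pd.date_range("2010-01-01", periods=4, freq="MS")
df = pd.DataFrame({"sunspot RI": [10, 20, 30, 40]}, index=idx)

# The index is shifted forward by periods * freq; values stay attached
# to their (relocated) timestamps, simulating a later solar cycle.
shifted = df.shift(2, freq=datetime.timedelta(days=365 * 12))
print(shifted.index[0] - df.index[0])  # 8760 days
```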
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/plotting/solar_cycle_example.py b/examples/plotting/solar_cycle_example.py
--- a/examples/plotting/solar_cycle_example.py
+++ b/examples/plotting/solar_cycle_example.py
@@ -14,25 +14,25 @@
from sunpy.data.sample import NOAAINDICES_TIMESERIES, NOAAPREDICT_TIMESERIES
###############################################################################
-# For this example we will use the SunPy sample data, if you want the current
-# data, delete the argument to the ``create`` function. i.e.
-# ``noaa = lc.NOAAIndicesLightCurve.create()``
+# For this example we will use the SunPy sample data. This code snippet grabs
+# the most current NOAA solar cycle data as a ``TimeSeries``
+# (see :ref:`timeseries_code_ref`).
noaa = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')
noaa_predict = ts.TimeSeries(NOAAPREDICT_TIMESERIES, source='noaapredictindices')
###############################################################################
-# Next lets grab the data again to create a new data structure that we will
-# shift by 12 years to simulate the next solar cycle. We will truncate the
-# data to only plot what is necessary.
+# Next, we grab a new copy of the data and shift it forward 12 years to
+# simulate the next solar cycle. We will also truncate the data to ensure
+# that we only plot what is necessary.
noaa2 = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')
noaa2.data = noaa2.data.shift(2, freq=datetime.timedelta(days=365*12))
noaa2 = noaa2.truncate('2021/04/01', '2030/01/01')
###############################################################################
-# Finally lets plot everything together with some arbitrary range for the
-# strength of the next solar cycle.
+# Finally, we plot both ``noaa`` and ``noaa2`` together, with an arbitrary
+# range for the strength of the next solar cycle.
plt.plot(noaa.data.index, noaa.data['sunspot RI'], label='Sunspot Number')
plt.plot(noaa_predict.data.index, noaa_predict.data['sunspot'],
| {"golden_diff": "diff --git a/examples/plotting/solar_cycle_example.py b/examples/plotting/solar_cycle_example.py\n--- a/examples/plotting/solar_cycle_example.py\n+++ b/examples/plotting/solar_cycle_example.py\n@@ -14,25 +14,25 @@\n from sunpy.data.sample import NOAAINDICES_TIMESERIES, NOAAPREDICT_TIMESERIES\n \n ###############################################################################\n-# For this example we will use the SunPy sample data, if you want the current\n-# data, delete the argument to the ``create`` function. i.e.\n-# ``noaa = lc.NOAAIndicesLightCurve.create()``\n+# For this example we will use the SunPy sample data. This code snippet grabs\n+# the most current NOAA solar cycle data as a ``TimeSeries``\n+# (see :ref:`timeseries_code_ref`).\n \n noaa = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')\n noaa_predict = ts.TimeSeries(NOAAPREDICT_TIMESERIES, source='noaapredictindices')\n \n ###############################################################################\n-# Next lets grab the data again to create a new data structure that we will\n-# shift by 12 years to simulate the next solar cycle. We will truncate the\n-# data to only plot what is necessary.\n+# Next, we grab a new copy of the data and shift it forward 12 years to\n+# simulate the next solar cycle. We will also truncate the data to ensure\n+# that we only plot what is necessary.\n \n noaa2 = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')\n noaa2.data = noaa2.data.shift(2, freq=datetime.timedelta(days=365*12))\n noaa2 = noaa2.truncate('2021/04/01', '2030/01/01')\n \n ###############################################################################\n-# Finally lets plot everything together with some arbitrary range for the\n-# strength of the next solar cycle.\n+# Finally, we plot both ``noaa`` and ``noaa2`` together, with an arbitrary\n+# range for the strength of the next solar cycle.\n \n plt.plot(noaa.data.index, noaa.data['sunspot RI'], label='Sunspot Number')\n plt.plot(noaa_predict.data.index, noaa_predict.data['sunspot'],\n", "issue": "Solar Cycle Gallery example out of date\nThe example includes the following text \r\n\r\n> For this example we will use the SunPy sample data, if you want the current data, delete the argument to the create function. i.e. noaa = lc.NOAAIndicesLightCurve.create()\r\n\r\nThis text is inline and therefore not checked during build so was not caught. This should be fixed and this behavior should be discouraged.\n", "before_files": [{"content": "\"\"\"\n===============\nThe Solar Cycle\n===============\n\nThis example shows the current and possible next solar cycle.\n\"\"\"\nfrom __future__ import print_function, division\n\nimport datetime\nimport matplotlib.pyplot as plt\n\nimport sunpy.timeseries as ts\nfrom sunpy.data.sample import NOAAINDICES_TIMESERIES, NOAAPREDICT_TIMESERIES\n\n###############################################################################\n# For this example we will use the SunPy sample data, if you want the current\n# data, delete the argument to the ``create`` function. i.e.\n# ``noaa = lc.NOAAIndicesLightCurve.create()``\n\nnoaa = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')\nnoaa_predict = ts.TimeSeries(NOAAPREDICT_TIMESERIES, source='noaapredictindices')\n\n###############################################################################\n# Next lets grab the data again to create a new data structure that we will\n# shift by 12 years to simulate the next solar cycle. 
We will truncate the\n# data to only plot what is necessary.\n\nnoaa2 = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')\nnoaa2.data = noaa2.data.shift(2, freq=datetime.timedelta(days=365*12))\nnoaa2 = noaa2.truncate('2021/04/01', '2030/01/01')\n\n###############################################################################\n# Finally lets plot everything together with some arbitrary range for the\n# strength of the next solar cycle.\n\nplt.plot(noaa.data.index, noaa.data['sunspot RI'], label='Sunspot Number')\nplt.plot(noaa_predict.data.index, noaa_predict.data['sunspot'],\n color='grey', label='Near-term Prediction')\nplt.fill_between(noaa_predict.data.index, noaa_predict.data['sunspot low'],\n noaa_predict.data['sunspot high'], alpha=0.3, color='grey')\n\nplt.fill_between(noaa2.data.index, noaa2.data['sunspot RI smooth']*0.4,\n noaa2.data['sunspot RI smooth']*1.3, alpha=0.3, color='grey',\n label='Next Cycle Predict')\nplt.ylim(0)\nplt.text('2011-01-01', 120, 'Cycle 24', fontsize=16)\nplt.text('2024-01-01', 120, 'Cycle 25', fontsize=16)\nplt.ylabel('Sunspot Number')\nplt.xlabel('Year')\nplt.legend(loc=2, framealpha=0.5)\nplt.show()\n", "path": "examples/plotting/solar_cycle_example.py"}], "after_files": [{"content": "\"\"\"\n===============\nThe Solar Cycle\n===============\n\nThis example shows the current and possible next solar cycle.\n\"\"\"\nfrom __future__ import print_function, division\n\nimport datetime\nimport matplotlib.pyplot as plt\n\nimport sunpy.timeseries as ts\nfrom sunpy.data.sample import NOAAINDICES_TIMESERIES, NOAAPREDICT_TIMESERIES\n\n###############################################################################\n# For this example we will use the SunPy sample data. This code snippet grabs\n# the most current NOAA solar cycle data as a ``TimeSeries``\n# (see :ref:`timeseries_code_ref`).\n\nnoaa = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')\nnoaa_predict = ts.TimeSeries(NOAAPREDICT_TIMESERIES, source='noaapredictindices')\n\n###############################################################################\n# Next, we grab a new copy of the data and shift it forward 12 years to\n# simulate the next solar cycle. We will also truncate the data to ensure\n# that we only plot what is necessary.\n\nnoaa2 = ts.TimeSeries(NOAAINDICES_TIMESERIES, source='noaaindices')\nnoaa2.data = noaa2.data.shift(2, freq=datetime.timedelta(days=365*12))\nnoaa2 = noaa2.truncate('2021/04/01', '2030/01/01')\n\n###############################################################################\n# Finally, we plot both ``noaa`` and ``noaa2`` together, with an arbitrary\n# range for the strength of the next solar cycle.\n\nplt.plot(noaa.data.index, noaa.data['sunspot RI'], label='Sunspot Number')\nplt.plot(noaa_predict.data.index, noaa_predict.data['sunspot'],\n color='grey', label='Near-term Prediction')\nplt.fill_between(noaa_predict.data.index, noaa_predict.data['sunspot low'],\n noaa_predict.data['sunspot high'], alpha=0.3, color='grey')\n\nplt.fill_between(noaa2.data.index, noaa2.data['sunspot RI smooth']*0.4,\n noaa2.data['sunspot RI smooth']*1.3, alpha=0.3, color='grey',\n label='Next Cycle Predict')\nplt.ylim(0)\nplt.text('2011-01-01', 120, 'Cycle 24', fontsize=16)\nplt.text('2024-01-01', 120, 'Cycle 25', fontsize=16)\nplt.ylabel('Sunspot Number')\nplt.xlabel('Year')\nplt.legend(loc=2, framealpha=0.5)\nplt.show()\n", "path": "examples/plotting/solar_cycle_example.py"}]} | 1,004 | 504 |
gh_patches_debug_42513 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-57 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change PS to run in graph mode.
In a multi-threaded setup, if the PS enables eager mode, every thread in the same process ends up in eager mode. Eager mode can run graphs, but not the other way around.
--- END ISSUE ---
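For context, the graph-mode pattern the fix below adopts (TensorFlow 1.x style): build a placeholder/variable pair and the `apply_gradients` op once, then feed concrete gradients through a `Session`. A minimal sketch, assuming eager execution is never enabled:

```python
import numpy as np
import tensorflow as tf  # TF 1.x graph mode; tf.enable_eager_execution() NOT called

grad_ph = tf.placeholder(dtype=tf.float32, shape=(3,))
var = tf.Variable(np.zeros(3, dtype=np.float32), name="w")

opt = tf.train.GradientDescentOptimizer(0.1)
apply_op = opt.apply_gradients([(grad_ph, var)])  # graph op built once

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Every pushed gradient is fed into the same pre-built op, so worker
    # threads never need (or trigger) eager mode.
    sess.run(apply_op, feed_dict={grad_ph: np.ones(3, dtype=np.float32)})
    print(sess.run(var))  # [-0.1 -0.1 -0.1]
```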
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/elasticdl/tflib/ps/ps.py`
Content:
```
1 import threading
2 import queue
3 import numpy as np
4 import tensorflow.contrib.eager as tfe
5 import tensorflow as tf
6 tf.enable_eager_execution()
7
8
9 class ParameterServer(object):
10 def __init__(self, optimizer, vars):
11 self._opt = optimizer
12 self._vars = {}
13 for k, v in vars.items():
14 if (not isinstance(v, np.ndarray)
15 or v.dtype not in (np.float32, np.float64)):
16 raise ValueError(
17 'Initial value for variable %s is not of float type ndarray' %
18 k)
19 self._vars[k] = tfe.Variable(v, name=k)
20 self._step = 0
21 self._grad_q = queue.Queue()
22 self._lock = threading.Lock()
23 self._runner = threading.Thread(target=self._run, name='ps-runner')
24 self._exiting = False
25 self._min_step_cv = threading.Condition()
26
27 def pull(self, names=None, min_step=0, blocking=True, timeout=None):
28 with self._min_step_cv:
29 self._min_step_cv.wait_for(
30 lambda: not blocking or min_step <= self._step,
31 timeout=timeout)
32 with self._lock:
33 if min_step > self._step:
34 raise LookupError(
35 'Required step is not ready yet: %s' %
36 min_step)
37 if names:
38 res = {k: self._vars[k].numpy() for k in names}
39 else:
40 res = {k: v.numpy() for k, v in self._vars.items()}
41 return self._step, res
42
43 def push(self, base_step, sub_step, grads):
44 with self._lock:
45 if base_step > self._step:
46 raise ValueError(
47 'Illegal base step %s, parameter server step is %s' %
48 (base_step, self._step))
49
50 if sub_step < 0:
51 raise ValueError('Illegal sub step %s' % sub_step)
52
53 for k, g in grads.items():
54 v = self._vars[k]
55 if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:
56 raise ValueError('Incompatible gradient for variable %s' % k)
57 # TODO(l.zou): use @dataclass when python 3.7 is available.
58 self._grad_q.put((base_step, sub_step, grads))
59
60 def _compute(self, grads):
61 grads_vars = [(g, self._vars[k]) for k, g in grads.items()]
62 with self._lock:
63 self._opt.apply_gradients(grads_vars)
64 with self._min_step_cv:
65 self._step += 1
66 self._min_step_cv.notify_all()
67
68 def _run(self):
69 while not self._exiting:
70 # TODO(l.zou): How to properly accumulate and decay grads?
71 try:
72 base_step, sub_step, grads = self._grad_q.get(timeout=1.0)
73 self._compute(grads)
74 except queue.Empty:
75 pass
76
77 def start(self):
78 self._runner.start()
79
80 def join(self):
81 self._exiting = True
82 self._runner.join()
83
```
--- END FILES ---
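Separately from the eager/graph question, `ps.py` blocks `pull()` on a step watermark with `Condition.wait_for` while `_compute()` bumps the step and notifies. That handshake, isolated:

```python
import threading
from typing import Optional

class StepGate:
    def __init__(self) -> None:
        self._step = 0
        self._cv = threading.Condition()

    def wait_for_step(self, min_step: int, timeout: Optional[float] = None) -> int:
        with self._cv:
            # wait_for re-evaluates the predicate after each notify (and on timeout).
            self._cv.wait_for(lambda: self._step >= min_step, timeout=timeout)
            return self._step

    def advance(self) -> None:
        with self._cv:
            self._step += 1
            self._cv.notify_all()
```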
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/elasticdl/tflib/ps/ps.py b/python/elasticdl/tflib/ps/ps.py
--- a/python/elasticdl/tflib/ps/ps.py
+++ b/python/elasticdl/tflib/ps/ps.py
@@ -1,22 +1,12 @@
import threading
import queue
import numpy as np
-import tensorflow.contrib.eager as tfe
import tensorflow as tf
-tf.enable_eager_execution()
+from tensorflow.python.ops import array_ops
class ParameterServer(object):
def __init__(self, optimizer, vars):
- self._opt = optimizer
- self._vars = {}
- for k, v in vars.items():
- if (not isinstance(v, np.ndarray)
- or v.dtype not in (np.float32, np.float64)):
- raise ValueError(
- 'Initial value for variable %s is not of float type ndarray' %
- k)
- self._vars[k] = tfe.Variable(v, name=k)
self._step = 0
self._grad_q = queue.Queue()
self._lock = threading.Lock()
@@ -24,6 +14,23 @@
self._exiting = False
self._min_step_cv = threading.Condition()
+ self._grads_vars = {}
+ for k, v in vars.items():
+ if (not isinstance(v, np.ndarray)
+ or v.dtype not in (np.float32, np.float64)):
+ raise ValueError(
+ 'Initial value for variable %s is not of float type ndarray' %
+ k)
+ # TODO: In graph mode we don't need to keep track of variables by ourselves.
+ self._grads_vars[k] = (array_ops.placeholder(dtype=v.dtype), tf.Variable(v, name=k))
+
+ self._opt = optimizer
+ self._apply_grad_op = self._opt.apply_gradients(self._grads_vars.values())
+
+ self._sess = tf.Session()
+ init_op = tf.global_variables_initializer()
+ self._sess.run(init_op)
+
def pull(self, names=None, min_step=0, blocking=True, timeout=None):
with self._min_step_cv:
self._min_step_cv.wait_for(
@@ -35,9 +42,9 @@
'Required step is not ready yet: %s' %
min_step)
if names:
- res = {k: self._vars[k].numpy() for k in names}
+ res = {k: self._grads_vars[k][1].eval(self._sess) for k in names}
else:
- res = {k: v.numpy() for k, v in self._vars.items()}
+ res = {k: v[1].eval(self._sess) for k, v in self._grads_vars.items()}
return self._step, res
def push(self, base_step, sub_step, grads):
@@ -51,16 +58,16 @@
raise ValueError('Illegal sub step %s' % sub_step)
for k, g in grads.items():
- v = self._vars[k]
+ v = self._grads_vars[k][1]
if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:
raise ValueError('Incompatible gradient for variable %s' % k)
# TODO(l.zou): use @dataclass when python 3.7 is available.
self._grad_q.put((base_step, sub_step, grads))
def _compute(self, grads):
- grads_vars = [(g, self._vars[k]) for k, g in grads.items()]
with self._lock:
- self._opt.apply_gradients(grads_vars)
+ feed_dict = {self._grads_vars[k][0]:v for k, v in grads.items()}
+ self._sess.run(self._apply_grad_op, feed_dict=feed_dict)
with self._min_step_cv:
self._step += 1
self._min_step_cv.notify_all()
@@ -80,3 +87,4 @@
def join(self):
self._exiting = True
self._runner.join()
+ self._sess.close()
| {"golden_diff": "diff --git a/python/elasticdl/tflib/ps/ps.py b/python/elasticdl/tflib/ps/ps.py\n--- a/python/elasticdl/tflib/ps/ps.py\n+++ b/python/elasticdl/tflib/ps/ps.py\n@@ -1,22 +1,12 @@\n import threading\n import queue\n import numpy as np\n-import tensorflow.contrib.eager as tfe\n import tensorflow as tf\n-tf.enable_eager_execution()\n+from tensorflow.python.ops import array_ops\n \n \n class ParameterServer(object):\n def __init__(self, optimizer, vars):\n- self._opt = optimizer\n- self._vars = {}\n- for k, v in vars.items():\n- if (not isinstance(v, np.ndarray)\n- or v.dtype not in (np.float32, np.float64)):\n- raise ValueError(\n- 'Initial value for variable %s is not of float type ndarray' %\n- k)\n- self._vars[k] = tfe.Variable(v, name=k)\n self._step = 0\n self._grad_q = queue.Queue()\n self._lock = threading.Lock()\n@@ -24,6 +14,23 @@\n self._exiting = False\n self._min_step_cv = threading.Condition()\n \n+ self._grads_vars = {}\n+ for k, v in vars.items():\n+ if (not isinstance(v, np.ndarray)\n+ or v.dtype not in (np.float32, np.float64)):\n+ raise ValueError(\n+ 'Initial value for variable %s is not of float type ndarray' %\n+ k)\n+ # TODO: In graph mode we don't need to keep track of variables by ourselves.\n+ self._grads_vars[k] = (array_ops.placeholder(dtype=v.dtype), tf.Variable(v, name=k))\n+\n+ self._opt = optimizer\n+ self._apply_grad_op = self._opt.apply_gradients(self._grads_vars.values())\n+\n+ self._sess = tf.Session()\n+ init_op = tf.global_variables_initializer()\n+ self._sess.run(init_op)\n+\n def pull(self, names=None, min_step=0, blocking=True, timeout=None):\n with self._min_step_cv:\n self._min_step_cv.wait_for(\n@@ -35,9 +42,9 @@\n 'Required step is not ready yet: %s' %\n min_step)\n if names:\n- res = {k: self._vars[k].numpy() for k in names}\n+ res = {k: self._grads_vars[k][1].eval(self._sess) for k in names}\n else:\n- res = {k: v.numpy() for k, v in self._vars.items()}\n+ res = {k: v[1].eval(self._sess) for k, v in self._grads_vars.items()}\n return self._step, res\n \n def push(self, base_step, sub_step, grads):\n@@ -51,16 +58,16 @@\n raise ValueError('Illegal sub step %s' % sub_step)\n \n for k, g in grads.items():\n- v = self._vars[k]\n+ v = self._grads_vars[k][1]\n if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:\n raise ValueError('Incompatible gradient for variable %s' % k)\n # TODO(l.zou): use @dataclass when python 3.7 is available.\n self._grad_q.put((base_step, sub_step, grads))\n \n def _compute(self, grads):\n- grads_vars = [(g, self._vars[k]) for k, g in grads.items()]\n with self._lock:\n- self._opt.apply_gradients(grads_vars)\n+ feed_dict = {self._grads_vars[k][0]:v for k, v in grads.items()}\n+ self._sess.run(self._apply_grad_op, feed_dict=feed_dict)\n with self._min_step_cv:\n self._step += 1\n self._min_step_cv.notify_all()\n@@ -80,3 +87,4 @@\n def join(self):\n self._exiting = True\n self._runner.join()\n+ self._sess.close()\n", "issue": "Change PS to run in graph mode.\n\u5728multi-thread\u4e0b\u9762\uff0c \u5982\u679cps enable eager mode, \u5728\u540c\u4e00\u4e2aprocess\u4e0b\u9762\uff0c\u6240\u6709thread\u90fd\u662f\u5728eager mode\u4e0b\u9762\u4e86\u3002 eager mode \u53ef\u4ee5run graph, \u53cd\u4e4b\u5219\u4e0d\u884c\u3002\n", "before_files": [{"content": "import threading\nimport queue\nimport numpy as np\nimport tensorflow.contrib.eager as tfe\nimport tensorflow as tf\ntf.enable_eager_execution()\n\n\nclass ParameterServer(object):\n def __init__(self, optimizer, vars):\n self._opt = optimizer\n self._vars = 
{}\n for k, v in vars.items():\n if (not isinstance(v, np.ndarray)\n or v.dtype not in (np.float32, np.float64)):\n raise ValueError(\n 'Initial value for variable %s is not of float type ndarray' %\n k)\n self._vars[k] = tfe.Variable(v, name=k)\n self._step = 0\n self._grad_q = queue.Queue()\n self._lock = threading.Lock()\n self._runner = threading.Thread(target=self._run, name='ps-runner')\n self._exiting = False\n self._min_step_cv = threading.Condition()\n\n def pull(self, names=None, min_step=0, blocking=True, timeout=None):\n with self._min_step_cv:\n self._min_step_cv.wait_for(\n lambda: not blocking or min_step <= self._step,\n timeout=timeout)\n with self._lock:\n if min_step > self._step:\n raise LookupError(\n 'Required step is not ready yet: %s' %\n min_step)\n if names:\n res = {k: self._vars[k].numpy() for k in names}\n else:\n res = {k: v.numpy() for k, v in self._vars.items()}\n return self._step, res\n\n def push(self, base_step, sub_step, grads):\n with self._lock:\n if base_step > self._step:\n raise ValueError(\n 'Illegal base step %s, parameter server step is %s' %\n (base_step, self._step))\n\n if sub_step < 0:\n raise ValueError('Illegal sub step %s' % sub_step)\n\n for k, g in grads.items():\n v = self._vars[k]\n if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:\n raise ValueError('Incompatible gradient for variable %s' % k)\n # TODO(l.zou): use @dataclass when python 3.7 is available.\n self._grad_q.put((base_step, sub_step, grads))\n\n def _compute(self, grads):\n grads_vars = [(g, self._vars[k]) for k, g in grads.items()]\n with self._lock:\n self._opt.apply_gradients(grads_vars)\n with self._min_step_cv:\n self._step += 1\n self._min_step_cv.notify_all()\n\n def _run(self):\n while not self._exiting:\n # TODO(l.zou): How to properly accumulate and decay grads?\n try:\n base_step, sub_step, grads = self._grad_q.get(timeout=1.0)\n self._compute(grads)\n except queue.Empty:\n pass\n\n def start(self):\n self._runner.start()\n\n def join(self):\n self._exiting = True\n self._runner.join()\n", "path": "python/elasticdl/tflib/ps/ps.py"}], "after_files": [{"content": "import threading\nimport queue\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.python.ops import array_ops\n\n\nclass ParameterServer(object):\n def __init__(self, optimizer, vars):\n self._step = 0\n self._grad_q = queue.Queue()\n self._lock = threading.Lock()\n self._runner = threading.Thread(target=self._run, name='ps-runner')\n self._exiting = False\n self._min_step_cv = threading.Condition()\n\n self._grads_vars = {}\n for k, v in vars.items():\n if (not isinstance(v, np.ndarray)\n or v.dtype not in (np.float32, np.float64)):\n raise ValueError(\n 'Initial value for variable %s is not of float type ndarray' %\n k)\n # TODO: In graph mode we don't need to keep track of variables by ourselves.\n self._grads_vars[k] = (array_ops.placeholder(dtype=v.dtype), tf.Variable(v, name=k))\n\n self._opt = optimizer\n self._apply_grad_op = self._opt.apply_gradients(self._grads_vars.values())\n\n self._sess = tf.Session()\n init_op = tf.global_variables_initializer()\n self._sess.run(init_op)\n\n def pull(self, names=None, min_step=0, blocking=True, timeout=None):\n with self._min_step_cv:\n self._min_step_cv.wait_for(\n lambda: not blocking or min_step <= self._step,\n timeout=timeout)\n with self._lock:\n if min_step > self._step:\n raise LookupError(\n 'Required step is not ready yet: %s' %\n min_step)\n if names:\n res = {k: self._grads_vars[k][1].eval(self._sess) for k in names}\n 
else:\n res = {k: v[1].eval(self._sess) for k, v in self._grads_vars.items()}\n return self._step, res\n\n def push(self, base_step, sub_step, grads):\n with self._lock:\n if base_step > self._step:\n raise ValueError(\n 'Illegal base step %s, parameter server step is %s' %\n (base_step, self._step))\n\n if sub_step < 0:\n raise ValueError('Illegal sub step %s' % sub_step)\n\n for k, g in grads.items():\n v = self._grads_vars[k][1]\n if g.dtype != v.dtype.as_numpy_dtype or g.shape != v.shape:\n raise ValueError('Incompatible gradient for variable %s' % k)\n # TODO(l.zou): use @dataclass when python 3.7 is available.\n self._grad_q.put((base_step, sub_step, grads))\n\n def _compute(self, grads):\n with self._lock:\n feed_dict = {self._grads_vars[k][0]:v for k, v in grads.items()}\n self._sess.run(self._apply_grad_op, feed_dict=feed_dict)\n with self._min_step_cv:\n self._step += 1\n self._min_step_cv.notify_all()\n\n def _run(self):\n while not self._exiting:\n # TODO(l.zou): How to properly accumulate and decay grads?\n try:\n base_step, sub_step, grads = self._grad_q.get(timeout=1.0)\n self._compute(grads)\n except queue.Empty:\n pass\n\n def start(self):\n self._runner.start()\n\n def join(self):\n self._exiting = True\n self._runner.join()\n self._sess.close()\n", "path": "python/elasticdl/tflib/ps/ps.py"}]} | 1,147 | 926 |
gh_patches_debug_36975 | rasdani/github-patches | git_diff | pulp__pulpcore-2318 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clean up TaskReservedResources/task-table at migration to new-tasking-system
See https://bugzilla.redhat.com/show_bug.cgi?id=2031154 for details.
The migration that needs to be updated to purge taskreservedresource entries: 0064_add_new_style_task_columns.py
This should be cherry-picked into 3.14/3.15/3.16 (after which the offending table no longer exists).
--- END ISSUE ---
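The eventual fix (shown in the diff below) pairs the purge with batching of the data migration so large task tables are not updated row by row. The Django pattern it uses, sketched with the field names from the migration:

```python
BATCH_SIZE = 1000

def copy_reserved_resources_record(apps, schema_editor):
    Task = apps.get_model("core", "Task")
    batch = []
    # iterator(chunk_size=...) streams rows; bulk_update() flushes each
    # batch in one query instead of issuing one UPDATE per task.
    for task in Task.objects.iterator(chunk_size=BATCH_SIZE):
        task._reserved_resources_record = list(
            task.reserved_resources_record.values_list("resource", flat=True)
        )
        batch.append(task)
        if len(batch) == BATCH_SIZE:
            Task.objects.bulk_update(batch, ["_reserved_resources_record"])
            batch.clear()
    if batch:  # flush the final partial batch
        Task.objects.bulk_update(batch, ["_reserved_resources_record"])
```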
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/migrations/0064_add_new_style_task_columns.py`
Content:
```
1 # Generated by Django 2.2.20 on 2021-04-27 07:51
2
3 import django.contrib.postgres.fields
4 import django.contrib.postgres.fields.jsonb
5 from django.db import migrations, models
6
7
8 def copy_reserved_resources_record(apps, schema_editor):
9 Task = apps.get_model('core', 'Task')
10 for task in Task.objects.iterator():
11 task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))
12 task.save()
13
14
15 def noop(apps, schema_editor):
16 pass
17
18
19 class Migration(migrations.Migration):
20
21 dependencies = [
22 ('core', '0063_repository_retained_versions'),
23 ]
24
25 operations = [
26 migrations.AddField(
27 model_name='task',
28 name='args',
29 field=django.contrib.postgres.fields.jsonb.JSONField(null=True),
30 ),
31 migrations.AddField(
32 model_name='task',
33 name='kwargs',
34 field=django.contrib.postgres.fields.jsonb.JSONField(null=True),
35 ),
36 migrations.AddField(
37 model_name='task',
38 name='_reserved_resources_record',
39 field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), null=True, size=None),
40 ),
41 migrations.AlterField(
42 model_name='task',
43 name='_resource_job_id',
44 field=models.UUIDField(null=True),
45 ),
46 migrations.AlterField(
47 model_name='progressreport',
48 name='state',
49 field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')], default='waiting'),
50 ),
51 migrations.AlterField(
52 model_name='task',
53 name='state',
54 field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')]),
55 ),
56 migrations.AddIndex(
57 model_name='task',
58 index=models.Index(fields=['pulp_created'], name='core_task_pulp_cr_10223f_idx'),
59 ),
60 migrations.RunPython(
61 code=copy_reserved_resources_record,
62 reverse_code=noop,
63 ),
64 migrations.RemoveField(
65 model_name='taskreservedresourcerecord',
66 name='resource',
67 ),
68 migrations.RemoveField(
69 model_name='taskreservedresourcerecord',
70 name='task',
71 ),
72 migrations.DeleteModel(
73 name='ReservedResourceRecord',
74 ),
75 migrations.DeleteModel(
76 name='TaskReservedResourceRecord',
77 ),
78 migrations.RenameField(
79 model_name='task',
80 old_name='_reserved_resources_record',
81 new_name='reserved_resources_record',
82 ),
83 ]
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulpcore/app/migrations/0064_add_new_style_task_columns.py b/pulpcore/app/migrations/0064_add_new_style_task_columns.py
--- a/pulpcore/app/migrations/0064_add_new_style_task_columns.py
+++ b/pulpcore/app/migrations/0064_add_new_style_task_columns.py
@@ -4,16 +4,34 @@
import django.contrib.postgres.fields.jsonb
from django.db import migrations, models
+TASK_BATCH_SIZE = 1000
+
def copy_reserved_resources_record(apps, schema_editor):
Task = apps.get_model('core', 'Task')
- for task in Task.objects.iterator():
+
+ # Update _reserved_resource_record for all tasks, 1000 tasks at a time.
+ # When we hit 1K tasks, go to the db for the batch.
+ # Make sure to update the final batch!
+ tasks = []
+ for task in Task.objects.iterator(chunk_size=TASK_BATCH_SIZE):
task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))
- task.save()
+ tasks.append(task)
+ if len(tasks) == TASK_BATCH_SIZE:
+ Task.objects.bulk_update(tasks, ["_reserved_resources_record"])
+ tasks.clear()
+
+ # Update last set of tasks
+ if len(tasks) > 0:
+ Task.objects.bulk_update(tasks, ["_reserved_resources_record"])
+
+def purge_reservedresources(apps, schema_editor):
+ TaskReservedResource = apps.get_model('core', 'TaskReservedResource')
+ TaskReservedResource.objects.all().delete()
-def noop(apps, schema_editor):
- pass
+ ReservedResource = apps.get_model('core', 'ReservedResource')
+ ReservedResource.objects.all().delete()
class Migration(migrations.Migration):
@@ -23,6 +41,12 @@
]
operations = [
+ # Purge any ReservedResource entries - if there are any, they're orphans
+ migrations.RunPython(
+ code=purge_reservedresources,
+ reverse_code=migrations.RunPython.noop,
+ ),
+ # Update entities for the new task-system
migrations.AddField(
model_name='task',
name='args',
@@ -59,7 +83,7 @@
),
migrations.RunPython(
code=copy_reserved_resources_record,
- reverse_code=noop,
+ reverse_code=migrations.RunPython.noop,
),
migrations.RemoveField(
model_name='taskreservedresourcerecord',
@@ -80,4 +104,5 @@
old_name='_reserved_resources_record',
new_name='reserved_resources_record',
),
+
]
| {"golden_diff": "diff --git a/pulpcore/app/migrations/0064_add_new_style_task_columns.py b/pulpcore/app/migrations/0064_add_new_style_task_columns.py\n--- a/pulpcore/app/migrations/0064_add_new_style_task_columns.py\n+++ b/pulpcore/app/migrations/0064_add_new_style_task_columns.py\n@@ -4,16 +4,34 @@\n import django.contrib.postgres.fields.jsonb\n from django.db import migrations, models\n \n+TASK_BATCH_SIZE = 1000\n+\n \n def copy_reserved_resources_record(apps, schema_editor):\n Task = apps.get_model('core', 'Task')\n- for task in Task.objects.iterator():\n+\n+ # Update _reserved_resource_record for all tasks, 1000 tasks at a time.\n+ # When we hit 1K tasks, go to the db for the batch.\n+ # Make sure to update the final batch!\n+ tasks = []\n+ for task in Task.objects.iterator(chunk_size=TASK_BATCH_SIZE):\n task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))\n- task.save()\n+ tasks.append(task)\n+ if len(tasks) == TASK_BATCH_SIZE:\n+ Task.objects.bulk_update(tasks, [\"_reserved_resources_record\"])\n+ tasks.clear()\n+\n+ # Update last set of tasks\n+ if len(tasks) > 0:\n+ Task.objects.bulk_update(tasks, [\"_reserved_resources_record\"])\n+\n \n+def purge_reservedresources(apps, schema_editor):\n+ TaskReservedResource = apps.get_model('core', 'TaskReservedResource')\n+ TaskReservedResource.objects.all().delete()\n \n-def noop(apps, schema_editor):\n- pass\n+ ReservedResource = apps.get_model('core', 'ReservedResource')\n+ ReservedResource.objects.all().delete()\n \n \n class Migration(migrations.Migration):\n@@ -23,6 +41,12 @@\n ]\n \n operations = [\n+ # Purge any ReservedResource entries - if there are any, they're orphans\n+ migrations.RunPython(\n+ code=purge_reservedresources,\n+ reverse_code=migrations.RunPython.noop,\n+ ),\n+ # Update entities for the new task-system\n migrations.AddField(\n model_name='task',\n name='args',\n@@ -59,7 +83,7 @@\n ),\n migrations.RunPython(\n code=copy_reserved_resources_record,\n- reverse_code=noop,\n+ reverse_code=migrations.RunPython.noop,\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n@@ -80,4 +104,5 @@\n old_name='_reserved_resources_record',\n new_name='reserved_resources_record',\n ),\n+\n ]\n", "issue": "Clean up TaskReservedResources/task-table at migration to new-tasking-system\nSee https://bugzilla.redhat.com/show_bug.cgi?id=2031154 for details.\r\n\r\nMigration that needs to be updated to purge taskreservedresource entries: 0064_add_new_style_task_columns.py\r\n\r\nThis wants to be cherrypicked into 3.14/15/16 (after which the offending table no longer exists)\n", "before_files": [{"content": "# Generated by Django 2.2.20 on 2021-04-27 07:51\n\nimport django.contrib.postgres.fields\nimport django.contrib.postgres.fields.jsonb\nfrom django.db import migrations, models\n\n\ndef copy_reserved_resources_record(apps, schema_editor):\n Task = apps.get_model('core', 'Task')\n for task in Task.objects.iterator():\n task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))\n task.save()\n\n\ndef noop(apps, schema_editor):\n pass\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0063_repository_retained_versions'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='task',\n name='args',\n field=django.contrib.postgres.fields.jsonb.JSONField(null=True),\n ),\n migrations.AddField(\n model_name='task',\n name='kwargs',\n field=django.contrib.postgres.fields.jsonb.JSONField(null=True),\n 
),\n migrations.AddField(\n model_name='task',\n name='_reserved_resources_record',\n field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), null=True, size=None),\n ),\n migrations.AlterField(\n model_name='task',\n name='_resource_job_id',\n field=models.UUIDField(null=True),\n ),\n migrations.AlterField(\n model_name='progressreport',\n name='state',\n field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')], default='waiting'),\n ),\n migrations.AlterField(\n model_name='task',\n name='state',\n field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')]),\n ),\n migrations.AddIndex(\n model_name='task',\n index=models.Index(fields=['pulp_created'], name='core_task_pulp_cr_10223f_idx'),\n ),\n migrations.RunPython(\n code=copy_reserved_resources_record,\n reverse_code=noop,\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n name='resource',\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n name='task',\n ),\n migrations.DeleteModel(\n name='ReservedResourceRecord',\n ),\n migrations.DeleteModel(\n name='TaskReservedResourceRecord',\n ),\n migrations.RenameField(\n model_name='task',\n old_name='_reserved_resources_record',\n new_name='reserved_resources_record',\n ),\n ]\n", "path": "pulpcore/app/migrations/0064_add_new_style_task_columns.py"}], "after_files": [{"content": "# Generated by Django 2.2.20 on 2021-04-27 07:51\n\nimport django.contrib.postgres.fields\nimport django.contrib.postgres.fields.jsonb\nfrom django.db import migrations, models\n\nTASK_BATCH_SIZE = 1000\n\n\ndef copy_reserved_resources_record(apps, schema_editor):\n Task = apps.get_model('core', 'Task')\n\n # Update _reserved_resource_record for all tasks, 1000 tasks at a time.\n # When we hit 1K tasks, go to the db for the batch.\n # Make sure to update the final batch!\n tasks = []\n for task in Task.objects.iterator(chunk_size=TASK_BATCH_SIZE):\n task._reserved_resources_record = list(task.reserved_resources_record.values_list('resource', flat=True))\n tasks.append(task)\n if len(tasks) == TASK_BATCH_SIZE:\n Task.objects.bulk_update(tasks, [\"_reserved_resources_record\"])\n tasks.clear()\n\n # Update last set of tasks\n if len(tasks) > 0:\n Task.objects.bulk_update(tasks, [\"_reserved_resources_record\"])\n\n\ndef purge_reservedresources(apps, schema_editor):\n TaskReservedResource = apps.get_model('core', 'TaskReservedResource')\n TaskReservedResource.objects.all().delete()\n\n ReservedResource = apps.get_model('core', 'ReservedResource')\n ReservedResource.objects.all().delete()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0063_repository_retained_versions'),\n ]\n\n operations = [\n # Purge any ReservedResource entries - if there are any, they're orphans\n migrations.RunPython(\n code=purge_reservedresources,\n reverse_code=migrations.RunPython.noop,\n ),\n # Update entities for the new task-system\n migrations.AddField(\n model_name='task',\n name='args',\n field=django.contrib.postgres.fields.jsonb.JSONField(null=True),\n ),\n migrations.AddField(\n model_name='task',\n name='kwargs',\n field=django.contrib.postgres.fields.jsonb.JSONField(null=True),\n ),\n migrations.AddField(\n 
model_name='task',\n name='_reserved_resources_record',\n field=django.contrib.postgres.fields.ArrayField(base_field=models.CharField(max_length=256), null=True, size=None),\n ),\n migrations.AlterField(\n model_name='task',\n name='_resource_job_id',\n field=models.UUIDField(null=True),\n ),\n migrations.AlterField(\n model_name='progressreport',\n name='state',\n field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')], default='waiting'),\n ),\n migrations.AlterField(\n model_name='task',\n name='state',\n field=models.TextField(choices=[('waiting', 'Waiting'), ('skipped', 'Skipped'), ('running', 'Running'), ('completed', 'Completed'), ('failed', 'Failed'), ('canceled', 'Canceled'), ('canceling', 'Canceling')]),\n ),\n migrations.AddIndex(\n model_name='task',\n index=models.Index(fields=['pulp_created'], name='core_task_pulp_cr_10223f_idx'),\n ),\n migrations.RunPython(\n code=copy_reserved_resources_record,\n reverse_code=migrations.RunPython.noop,\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n name='resource',\n ),\n migrations.RemoveField(\n model_name='taskreservedresourcerecord',\n name='task',\n ),\n migrations.DeleteModel(\n name='ReservedResourceRecord',\n ),\n migrations.DeleteModel(\n name='TaskReservedResourceRecord',\n ),\n migrations.RenameField(\n model_name='task',\n old_name='_reserved_resources_record',\n new_name='reserved_resources_record',\n ),\n\n ]\n", "path": "pulpcore/app/migrations/0064_add_new_style_task_columns.py"}]} | 1,140 | 600 |
gh_patches_debug_17883 | rasdani/github-patches | git_diff | encode__httpx-2803 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change the type of `Extensions` from `Mapping` to `MutableMapping`.
### Discussed in https://github.com/encode/httpx/discussions/2793
Originally posted by **karosis88**, July 28, 2023:
I'm working on a library that implements HTTP caching for httpx and httpcore (it provides transports and connection pools), and I'd like to add an extension that simply indicates whether or not the response was taken from the cache.
Unfortunately, the type of `extensions` is `Mapping`, so this is an error for mypy:
```python
response = httpx.Response(200)
response.extensions['my_custom_extension'] = 'something'
```
OUTPUT
```
error: Unsupported target for indexed assignment ("Mapping[str, Any]") [index]
```
The solution is to simply change the extension type from `Mapping` to `MutableMapping`, allowing us to add custom extensions after the response has been created.
[See also this PR](https://github.com/karosis88/hishel/pull/4)
---
I believe the only change needed is in the "_models.py" file.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `httpx/_types.py`
Content:
```
1 """
2 Type definitions for type checking purposes.
3 """
4
5 import ssl
6 from http.cookiejar import CookieJar
7 from typing import (
8 IO,
9 TYPE_CHECKING,
10 Any,
11 AsyncIterable,
12 AsyncIterator,
13 Callable,
14 Dict,
15 Iterable,
16 Iterator,
17 List,
18 Mapping,
19 NamedTuple,
20 Optional,
21 Sequence,
22 Tuple,
23 Union,
24 )
25
26 if TYPE_CHECKING: # pragma: no cover
27 from ._auth import Auth # noqa: F401
28 from ._config import Proxy, Timeout # noqa: F401
29 from ._models import Cookies, Headers, Request # noqa: F401
30 from ._urls import URL, QueryParams # noqa: F401
31
32
33 PrimitiveData = Optional[Union[str, int, float, bool]]
34
35 RawURL = NamedTuple(
36 "RawURL",
37 [
38 ("raw_scheme", bytes),
39 ("raw_host", bytes),
40 ("port", Optional[int]),
41 ("raw_path", bytes),
42 ],
43 )
44
45 URLTypes = Union["URL", str]
46
47 QueryParamTypes = Union[
48 "QueryParams",
49 Mapping[str, Union[PrimitiveData, Sequence[PrimitiveData]]],
50 List[Tuple[str, PrimitiveData]],
51 Tuple[Tuple[str, PrimitiveData], ...],
52 str,
53 bytes,
54 ]
55
56 HeaderTypes = Union[
57 "Headers",
58 Mapping[str, str],
59 Mapping[bytes, bytes],
60 Sequence[Tuple[str, str]],
61 Sequence[Tuple[bytes, bytes]],
62 ]
63
64 CookieTypes = Union["Cookies", CookieJar, Dict[str, str], List[Tuple[str, str]]]
65
66 CertTypes = Union[
67 # certfile
68 str,
69 # (certfile, keyfile)
70 Tuple[str, Optional[str]],
71 # (certfile, keyfile, password)
72 Tuple[str, Optional[str], Optional[str]],
73 ]
74 VerifyTypes = Union[str, bool, ssl.SSLContext]
75 TimeoutTypes = Union[
76 Optional[float],
77 Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],
78 "Timeout",
79 ]
80 ProxiesTypes = Union[URLTypes, "Proxy", Dict[URLTypes, Union[None, URLTypes, "Proxy"]]]
81
82 AuthTypes = Union[
83 Tuple[Union[str, bytes], Union[str, bytes]],
84 Callable[["Request"], "Request"],
85 "Auth",
86 ]
87
88 RequestContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]
89 ResponseContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]
90 ResponseExtensions = Mapping[str, Any]
91
92 RequestData = Mapping[str, Any]
93
94 FileContent = Union[IO[bytes], bytes, str]
95 FileTypes = Union[
96 # file (or bytes)
97 FileContent,
98 # (filename, file (or bytes))
99 Tuple[Optional[str], FileContent],
100 # (filename, file (or bytes), content_type)
101 Tuple[Optional[str], FileContent, Optional[str]],
102 # (filename, file (or bytes), content_type, headers)
103 Tuple[Optional[str], FileContent, Optional[str], Mapping[str, str]],
104 ]
105 RequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]
106
107 RequestExtensions = Mapping[str, Any]
108
109
110 class SyncByteStream:
111 def __iter__(self) -> Iterator[bytes]:
112 raise NotImplementedError(
113 "The '__iter__' method must be implemented."
114 ) # pragma: no cover
115 yield b"" # pragma: no cover
116
117 def close(self) -> None:
118 """
119 Subclasses can override this method to release any network resources
120 after a request/response cycle is complete.
121 """
122
123
124 class AsyncByteStream:
125 async def __aiter__(self) -> AsyncIterator[bytes]:
126 raise NotImplementedError(
127 "The '__aiter__' method must be implemented."
128 ) # pragma: no cover
129 yield b"" # pragma: no cover
130
131 async def aclose(self) -> None:
132 pass
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/httpx/_types.py b/httpx/_types.py
--- a/httpx/_types.py
+++ b/httpx/_types.py
@@ -16,6 +16,7 @@
Iterator,
List,
Mapping,
+ MutableMapping,
NamedTuple,
Optional,
Sequence,
@@ -87,7 +88,7 @@
RequestContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]
ResponseContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]
-ResponseExtensions = Mapping[str, Any]
+ResponseExtensions = MutableMapping[str, Any]
RequestData = Mapping[str, Any]
@@ -104,7 +105,7 @@
]
RequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]
-RequestExtensions = Mapping[str, Any]
+RequestExtensions = MutableMapping[str, Any]
class SyncByteStream:
| {"golden_diff": "diff --git a/httpx/_types.py b/httpx/_types.py\n--- a/httpx/_types.py\n+++ b/httpx/_types.py\n@@ -16,6 +16,7 @@\n Iterator,\n List,\n Mapping,\n+ MutableMapping,\n NamedTuple,\n Optional,\n Sequence,\n@@ -87,7 +88,7 @@\n \n RequestContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]\n ResponseContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]\n-ResponseExtensions = Mapping[str, Any]\n+ResponseExtensions = MutableMapping[str, Any]\n \n RequestData = Mapping[str, Any]\n \n@@ -104,7 +105,7 @@\n ]\n RequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]\n \n-RequestExtensions = Mapping[str, Any]\n+RequestExtensions = MutableMapping[str, Any]\n \n \n class SyncByteStream:\n", "issue": "Change the type of `Extensions` from `Mapping` to `MutableMapping`.\n### Discussed in https://github.com/encode/httpx/discussions/2793\r\n\r\n<div type='discussions-op-text'>\r\n\r\n<sup>Originally posted by **karosis88** July 28, 2023</sup>\r\nI'm working on a library that implements HTTP Caching for httpx and httpcore (it provides transports and connection pools), and I'd like to add an extension that simply indicates whether or not the response was taken from the cache.\r\n\r\nUnfortunately, the type of extension is Mapping, so this is an error for mypy.\r\n\r\n\r\n```python\r\nresponse = httpx.Response(200)\r\nresponse.extensions['my_custom_extension'] = 'something'\r\n```\r\n\r\nOUTPUT \r\n```\r\nerror: Unsupported target for indexed assignment (\"Mapping[Str, Any]\") [index]\r\n```\r\n\r\nThe solution is to simply change the extension type from `Mapping` to `MutableMapping`, allowing us to add custom extensions after the response has been created.\r\n\r\n[See also this pr](https://github.com/karosis88/hishel/pull/4)</div>\r\n\r\n---\r\n\r\nI believe the only change needed is in the \"_models.py\" file.\n", "before_files": [{"content": "\"\"\"\nType definitions for type checking purposes.\n\"\"\"\n\nimport ssl\nfrom http.cookiejar import CookieJar\nfrom typing import (\n IO,\n TYPE_CHECKING,\n Any,\n AsyncIterable,\n AsyncIterator,\n Callable,\n Dict,\n Iterable,\n Iterator,\n List,\n Mapping,\n NamedTuple,\n Optional,\n Sequence,\n Tuple,\n Union,\n)\n\nif TYPE_CHECKING: # pragma: no cover\n from ._auth import Auth # noqa: F401\n from ._config import Proxy, Timeout # noqa: F401\n from ._models import Cookies, Headers, Request # noqa: F401\n from ._urls import URL, QueryParams # noqa: F401\n\n\nPrimitiveData = Optional[Union[str, int, float, bool]]\n\nRawURL = NamedTuple(\n \"RawURL\",\n [\n (\"raw_scheme\", bytes),\n (\"raw_host\", bytes),\n (\"port\", Optional[int]),\n (\"raw_path\", bytes),\n ],\n)\n\nURLTypes = Union[\"URL\", str]\n\nQueryParamTypes = Union[\n \"QueryParams\",\n Mapping[str, Union[PrimitiveData, Sequence[PrimitiveData]]],\n List[Tuple[str, PrimitiveData]],\n Tuple[Tuple[str, PrimitiveData], ...],\n str,\n bytes,\n]\n\nHeaderTypes = Union[\n \"Headers\",\n Mapping[str, str],\n Mapping[bytes, bytes],\n Sequence[Tuple[str, str]],\n Sequence[Tuple[bytes, bytes]],\n]\n\nCookieTypes = Union[\"Cookies\", CookieJar, Dict[str, str], List[Tuple[str, str]]]\n\nCertTypes = Union[\n # certfile\n str,\n # (certfile, keyfile)\n Tuple[str, Optional[str]],\n # (certfile, keyfile, password)\n Tuple[str, Optional[str], Optional[str]],\n]\nVerifyTypes = Union[str, bool, ssl.SSLContext]\nTimeoutTypes = Union[\n Optional[float],\n Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],\n \"Timeout\",\n]\nProxiesTypes = 
Union[URLTypes, \"Proxy\", Dict[URLTypes, Union[None, URLTypes, \"Proxy\"]]]\n\nAuthTypes = Union[\n Tuple[Union[str, bytes], Union[str, bytes]],\n Callable[[\"Request\"], \"Request\"],\n \"Auth\",\n]\n\nRequestContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]\nResponseContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]\nResponseExtensions = Mapping[str, Any]\n\nRequestData = Mapping[str, Any]\n\nFileContent = Union[IO[bytes], bytes, str]\nFileTypes = Union[\n # file (or bytes)\n FileContent,\n # (filename, file (or bytes))\n Tuple[Optional[str], FileContent],\n # (filename, file (or bytes), content_type)\n Tuple[Optional[str], FileContent, Optional[str]],\n # (filename, file (or bytes), content_type, headers)\n Tuple[Optional[str], FileContent, Optional[str], Mapping[str, str]],\n]\nRequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]\n\nRequestExtensions = Mapping[str, Any]\n\n\nclass SyncByteStream:\n def __iter__(self) -> Iterator[bytes]:\n raise NotImplementedError(\n \"The '__iter__' method must be implemented.\"\n ) # pragma: no cover\n yield b\"\" # pragma: no cover\n\n def close(self) -> None:\n \"\"\"\n Subclasses can override this method to release any network resources\n after a request/response cycle is complete.\n \"\"\"\n\n\nclass AsyncByteStream:\n async def __aiter__(self) -> AsyncIterator[bytes]:\n raise NotImplementedError(\n \"The '__aiter__' method must be implemented.\"\n ) # pragma: no cover\n yield b\"\" # pragma: no cover\n\n async def aclose(self) -> None:\n pass\n", "path": "httpx/_types.py"}], "after_files": [{"content": "\"\"\"\nType definitions for type checking purposes.\n\"\"\"\n\nimport ssl\nfrom http.cookiejar import CookieJar\nfrom typing import (\n IO,\n TYPE_CHECKING,\n Any,\n AsyncIterable,\n AsyncIterator,\n Callable,\n Dict,\n Iterable,\n Iterator,\n List,\n Mapping,\n MutableMapping,\n NamedTuple,\n Optional,\n Sequence,\n Tuple,\n Union,\n)\n\nif TYPE_CHECKING: # pragma: no cover\n from ._auth import Auth # noqa: F401\n from ._config import Proxy, Timeout # noqa: F401\n from ._models import Cookies, Headers, Request # noqa: F401\n from ._urls import URL, QueryParams # noqa: F401\n\n\nPrimitiveData = Optional[Union[str, int, float, bool]]\n\nRawURL = NamedTuple(\n \"RawURL\",\n [\n (\"raw_scheme\", bytes),\n (\"raw_host\", bytes),\n (\"port\", Optional[int]),\n (\"raw_path\", bytes),\n ],\n)\n\nURLTypes = Union[\"URL\", str]\n\nQueryParamTypes = Union[\n \"QueryParams\",\n Mapping[str, Union[PrimitiveData, Sequence[PrimitiveData]]],\n List[Tuple[str, PrimitiveData]],\n Tuple[Tuple[str, PrimitiveData], ...],\n str,\n bytes,\n]\n\nHeaderTypes = Union[\n \"Headers\",\n Mapping[str, str],\n Mapping[bytes, bytes],\n Sequence[Tuple[str, str]],\n Sequence[Tuple[bytes, bytes]],\n]\n\nCookieTypes = Union[\"Cookies\", CookieJar, Dict[str, str], List[Tuple[str, str]]]\n\nCertTypes = Union[\n # certfile\n str,\n # (certfile, keyfile)\n Tuple[str, Optional[str]],\n # (certfile, keyfile, password)\n Tuple[str, Optional[str], Optional[str]],\n]\nVerifyTypes = Union[str, bool, ssl.SSLContext]\nTimeoutTypes = Union[\n Optional[float],\n Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],\n \"Timeout\",\n]\nProxiesTypes = Union[URLTypes, \"Proxy\", Dict[URLTypes, Union[None, URLTypes, \"Proxy\"]]]\n\nAuthTypes = Union[\n Tuple[Union[str, bytes], Union[str, bytes]],\n Callable[[\"Request\"], \"Request\"],\n \"Auth\",\n]\n\nRequestContent = Union[str, bytes, Iterable[bytes], 
AsyncIterable[bytes]]\nResponseContent = Union[str, bytes, Iterable[bytes], AsyncIterable[bytes]]\nResponseExtensions = MutableMapping[str, Any]\n\nRequestData = Mapping[str, Any]\n\nFileContent = Union[IO[bytes], bytes, str]\nFileTypes = Union[\n # file (or bytes)\n FileContent,\n # (filename, file (or bytes))\n Tuple[Optional[str], FileContent],\n # (filename, file (or bytes), content_type)\n Tuple[Optional[str], FileContent, Optional[str]],\n # (filename, file (or bytes), content_type, headers)\n Tuple[Optional[str], FileContent, Optional[str], Mapping[str, str]],\n]\nRequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]\n\nRequestExtensions = MutableMapping[str, Any]\n\n\nclass SyncByteStream:\n def __iter__(self) -> Iterator[bytes]:\n raise NotImplementedError(\n \"The '__iter__' method must be implemented.\"\n ) # pragma: no cover\n yield b\"\" # pragma: no cover\n\n def close(self) -> None:\n \"\"\"\n Subclasses can override this method to release any network resources\n after a request/response cycle is complete.\n \"\"\"\n\n\nclass AsyncByteStream:\n async def __aiter__(self) -> AsyncIterator[bytes]:\n raise NotImplementedError(\n \"The '__aiter__' method must be implemented.\"\n ) # pragma: no cover\n yield b\"\" # pragma: no cover\n\n async def aclose(self) -> None:\n pass\n", "path": "httpx/_types.py"}]} | 1,666 | 205 |
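
The one-word alias change above is all that mypy needs; the distinction can be shown with plain `typing`, no httpx involved:

```python
from typing import Any, Mapping, MutableMapping

def tag_readonly(ext: Mapping[str, Any]) -> None:
    # mypy: Unsupported target for indexed assignment (a type error, not a
    # runtime one -- Mapping declares no __setitem__)
    ext["from_cache"] = True  # type: ignore[index]

def tag_mutable(ext: MutableMapping[str, Any]) -> None:
    ext["from_cache"] = True   # OK: MutableMapping declares __setitem__

tag_mutable({})  # a plain dict satisfies MutableMapping[str, Any]
```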
gh_patches_debug_6089 | rasdani/github-patches | git_diff | encode__starlette-1459 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Raising Exceptions in sub-applications routes
### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
Let's start with this PR: #1262
It's about preventing `anyio.ExceptionGroup` from being raised in views under a `BaseHTTPMiddleware`. That PR resolves the problem with a nonlocal variable that stores the exception. But in the case of sub-applications, it does not work.

As far as I can see (FYI, I am not good at asyncio), in the case below we reach and read a response before the exception is raised and stored in the nonlocal variable:
Fragment of `BaseHTTPMiddleware.__call__`:
```python
async def call_next(request: Request) -> Response:
app_exc: typing.Optional[Exception] = None
send_stream, recv_stream = anyio.create_memory_object_stream()
async def coro() -> None:
nonlocal app_exc
async with send_stream:
try:
task = await self.app(scope, request.receive, send_stream.send)
except Exception as exc:
app_exc = exc
task_group.start_soon(coro)
try:
message = await recv_stream.receive()
except anyio.EndOfStream:
if app_exc is not None:
raise app_exc
raise RuntimeError("No response returned.")
...
response = StreamingResponse(
status_code=message["status"], content=body_stream()
)
response.raw_headers = message["headers"]
return response
```
At this moment — when `except anyio.EndOfStream:` is checked — the exception has still not been raised, so it never gets re-raised here.
### Steps to reproduce the bug
```python
import httpx
import pytest
from fastapi import FastAPI, APIRouter
from starlette.middleware import Middleware
from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
from starlette.requests import Request
from starlette.responses import Response
from starlette.routing import Route
class SomeError(Exception):
pass
class SomeMiddleware(BaseHTTPMiddleware):
async def dispatch(
self, request: Request, call_next: RequestResponseEndpoint
) -> Response:
return await call_next(request)
# Drop the middleware (or use one not based on BaseHTTPMiddleware) and the test works fine
app = FastAPI(middleware=[Middleware(SomeMiddleware), ])
async def simple_route(request: Request):
raise SomeError
another_router = APIRouter(
routes=[Route('/simple-route/', simple_route, methods=['GET'])]
)
sub_app = FastAPI()
sub_app.include_router(another_router)
app.router.mount('/api', sub_app)
@pytest.mark.asyncio
async def test_simple_route():
async with httpx.AsyncClient(app=app) as client:
with pytest.raises(SomeError):
await client.get("http://testserver/api/simple-route/")
```
### Expected behavior
An exception was raised and caught by pytest exception
### Actual behavior
An exception wasn't raised
### Debugging material
_No response_
### Environment
macOS Monterey 12.0.1, starlette 0.17.1, Python 3.9.9
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/middleware/base.py`
Content:
```
1 import typing
2
3 import anyio
4
5 from starlette.requests import Request
6 from starlette.responses import Response, StreamingResponse
7 from starlette.types import ASGIApp, Receive, Scope, Send
8
9 RequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]
10 DispatchFunction = typing.Callable[
11 [Request, RequestResponseEndpoint], typing.Awaitable[Response]
12 ]
13
14
15 class BaseHTTPMiddleware:
16 def __init__(self, app: ASGIApp, dispatch: DispatchFunction = None) -> None:
17 self.app = app
18 self.dispatch_func = self.dispatch if dispatch is None else dispatch
19
20 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
21 if scope["type"] != "http":
22 await self.app(scope, receive, send)
23 return
24
25 async def call_next(request: Request) -> Response:
26 app_exc: typing.Optional[Exception] = None
27 send_stream, recv_stream = anyio.create_memory_object_stream()
28
29 async def coro() -> None:
30 nonlocal app_exc
31
32 async with send_stream:
33 try:
34 await self.app(scope, request.receive, send_stream.send)
35 except Exception as exc:
36 app_exc = exc
37
38 task_group.start_soon(coro)
39
40 try:
41 message = await recv_stream.receive()
42 except anyio.EndOfStream:
43 if app_exc is not None:
44 raise app_exc
45 raise RuntimeError("No response returned.")
46
47 assert message["type"] == "http.response.start"
48
49 async def body_stream() -> typing.AsyncGenerator[bytes, None]:
50 async with recv_stream:
51 async for message in recv_stream:
52 assert message["type"] == "http.response.body"
53 yield message.get("body", b"")
54
55 response = StreamingResponse(
56 status_code=message["status"], content=body_stream()
57 )
58 response.raw_headers = message["headers"]
59 return response
60
61 async with anyio.create_task_group() as task_group:
62 request = Request(scope, receive=receive)
63 response = await self.dispatch_func(request, call_next)
64 await response(scope, receive, send)
65 task_group.cancel_scope.cancel()
66
67 async def dispatch(
68 self, request: Request, call_next: RequestResponseEndpoint
69 ) -> Response:
70 raise NotImplementedError() # pragma: no cover
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlette/middleware/base.py b/starlette/middleware/base.py
--- a/starlette/middleware/base.py
+++ b/starlette/middleware/base.py
@@ -52,6 +52,9 @@
assert message["type"] == "http.response.body"
yield message.get("body", b"")
+ if app_exc is not None:
+ raise app_exc
+
response = StreamingResponse(
status_code=message["status"], content=body_stream()
)
| {"golden_diff": "diff --git a/starlette/middleware/base.py b/starlette/middleware/base.py\n--- a/starlette/middleware/base.py\n+++ b/starlette/middleware/base.py\n@@ -52,6 +52,9 @@\n assert message[\"type\"] == \"http.response.body\"\n yield message.get(\"body\", b\"\")\n \n+ if app_exc is not None:\n+ raise app_exc\n+\n response = StreamingResponse(\n status_code=message[\"status\"], content=body_stream()\n )\n", "issue": "Raising Exceptions in sub-applications routes\n### Checklist\r\n\r\n- [X] The bug is reproducible against the latest release or `master`.\r\n- [X] There are no similar issues or pull requests to fix it yet.\r\n\r\n### Describe the bug\r\n\r\nLet's start with this PR: #1262\r\n\r\nIt's about preventing raise `anyio.ExceptionGroup` in views under a `BaseHTTPMiddleware`. PR resolve that problem with nonlocal variable that stores our exception. But in the case of sub-applications, it does not work. \r\n\r\nAs I can see (fyi I am not good at asyncio), in the case below, we reach and read a response before we raise an exception and store it to our nonlocal variable:\r\n\r\nfragment of `BaseHTTPMiddleware.__call__`\r\n```python\r\nasync def call_next(request: Request) -> Response:\r\n app_exc: typing.Optional[Exception] = None\r\n send_stream, recv_stream = anyio.create_memory_object_stream()\r\n\r\n async def coro() -> None:\r\n nonlocal app_exc\r\n\r\n async with send_stream:\r\n try:\r\n task = await self.app(scope, request.receive, send_stream.send)\r\n except Exception as exc:\r\n app_exc = exc\r\n\r\n task_group.start_soon(coro)\r\n\r\n try:\r\n message = await recv_stream.receive()\r\n except anyio.EndOfStream:\r\n if app_exc is not None:\r\n raise app_exc\r\n raise RuntimeError(\"No response returned.\")\r\n \r\n ...\r\n response = StreamingResponse(\r\n status_code=message[\"status\"], content=body_stream()\r\n )\r\n response.raw_headers = message[\"headers\"]\r\n return response\r\n```\r\n\r\nin this moment: `except anyio.EndOfStream:` exception still no raised.\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nimport httpx\r\nimport pytest\r\nfrom fastapi import FastAPI, APIRouter\r\nfrom starlette.middleware import Middleware\r\nfrom starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint\r\nfrom starlette.requests import Request\r\nfrom starlette.responses import Response\r\nfrom starlette.routing import Route\r\n\r\n\r\nclass SomeError(Exception):\r\n pass\r\n\r\n\r\nclass SomeMiddleware(BaseHTTPMiddleware):\r\n async def dispatch(\r\n self, request: Request, call_next: RequestResponseEndpoint\r\n ) -> Response:\r\n return await call_next(request)\r\n\r\n# Drop (or use not BaseHTTPMiddleware based) middleware and test works fine\r\napp = FastAPI(middleware=[Middleware(SomeMiddleware), ])\r\n\r\n\r\nasync def simple_route(request: Request):\r\n raise SomeError\r\n\r\n\r\nanother_router = APIRouter(\r\n routes=[Route('/simple-route/', simple_route, methods=['GET'])]\r\n)\r\nsub_app = FastAPI()\r\nsub_app.include_router(another_router)\r\napp.router.mount(f'/api', sub_app)\r\n\r\n\r\[email protected]\r\nasync def test_simple_route():\r\n async with httpx.AsyncClient(app=app) as client:\r\n with pytest.raises(SomeError):\r\n await client.get(\"http://testserver/api/simple-route/\")\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\nAn exception was raised and caught by pytest exception\r\n\r\n### Actual behavior\r\n\r\nAn exception wasn't raised\r\n\r\n### Debugging material\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\nmacOS Monterey 
12.0.1, starlette 0.17.1, Python 3.9.9\r\n\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "import typing\n\nimport anyio\n\nfrom starlette.requests import Request\nfrom starlette.responses import Response, StreamingResponse\nfrom starlette.types import ASGIApp, Receive, Scope, Send\n\nRequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]\nDispatchFunction = typing.Callable[\n [Request, RequestResponseEndpoint], typing.Awaitable[Response]\n]\n\n\nclass BaseHTTPMiddleware:\n def __init__(self, app: ASGIApp, dispatch: DispatchFunction = None) -> None:\n self.app = app\n self.dispatch_func = self.dispatch if dispatch is None else dispatch\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n async def call_next(request: Request) -> Response:\n app_exc: typing.Optional[Exception] = None\n send_stream, recv_stream = anyio.create_memory_object_stream()\n\n async def coro() -> None:\n nonlocal app_exc\n\n async with send_stream:\n try:\n await self.app(scope, request.receive, send_stream.send)\n except Exception as exc:\n app_exc = exc\n\n task_group.start_soon(coro)\n\n try:\n message = await recv_stream.receive()\n except anyio.EndOfStream:\n if app_exc is not None:\n raise app_exc\n raise RuntimeError(\"No response returned.\")\n\n assert message[\"type\"] == \"http.response.start\"\n\n async def body_stream() -> typing.AsyncGenerator[bytes, None]:\n async with recv_stream:\n async for message in recv_stream:\n assert message[\"type\"] == \"http.response.body\"\n yield message.get(\"body\", b\"\")\n\n response = StreamingResponse(\n status_code=message[\"status\"], content=body_stream()\n )\n response.raw_headers = message[\"headers\"]\n return response\n\n async with anyio.create_task_group() as task_group:\n request = Request(scope, receive=receive)\n response = await self.dispatch_func(request, call_next)\n await response(scope, receive, send)\n task_group.cancel_scope.cancel()\n\n async def dispatch(\n self, request: Request, call_next: RequestResponseEndpoint\n ) -> Response:\n raise NotImplementedError() # pragma: no cover\n", "path": "starlette/middleware/base.py"}], "after_files": [{"content": "import typing\n\nimport anyio\n\nfrom starlette.requests import Request\nfrom starlette.responses import Response, StreamingResponse\nfrom starlette.types import ASGIApp, Receive, Scope, Send\n\nRequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]\nDispatchFunction = typing.Callable[\n [Request, RequestResponseEndpoint], typing.Awaitable[Response]\n]\n\n\nclass BaseHTTPMiddleware:\n def __init__(self, app: ASGIApp, dispatch: DispatchFunction = None) -> None:\n self.app = app\n self.dispatch_func = self.dispatch if dispatch is None else dispatch\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n async def call_next(request: Request) -> Response:\n app_exc: typing.Optional[Exception] = None\n send_stream, recv_stream = anyio.create_memory_object_stream()\n\n async def coro() -> None:\n nonlocal app_exc\n\n async with send_stream:\n try:\n await self.app(scope, request.receive, send_stream.send)\n except Exception as exc:\n app_exc = exc\n\n task_group.start_soon(coro)\n\n try:\n message = await recv_stream.receive()\n except anyio.EndOfStream:\n if app_exc is not None:\n raise 
app_exc\n raise RuntimeError(\"No response returned.\")\n\n assert message[\"type\"] == \"http.response.start\"\n\n async def body_stream() -> typing.AsyncGenerator[bytes, None]:\n async with recv_stream:\n async for message in recv_stream:\n assert message[\"type\"] == \"http.response.body\"\n yield message.get(\"body\", b\"\")\n\n if app_exc is not None:\n raise app_exc\n\n response = StreamingResponse(\n status_code=message[\"status\"], content=body_stream()\n )\n response.raw_headers = message[\"headers\"]\n return response\n\n async with anyio.create_task_group() as task_group:\n request = Request(scope, receive=receive)\n response = await self.dispatch_func(request, call_next)\n await response(scope, receive, send)\n task_group.cancel_scope.cancel()\n\n async def dispatch(\n self, request: Request, call_next: RequestResponseEndpoint\n ) -> Response:\n raise NotImplementedError() # pragma: no cover\n", "path": "starlette/middleware/base.py"}]} | 1,581 | 107 |
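
The race is easier to see stripped down to bare anyio (this sketch assumes anyio is installed; the dict message stands in for an ASGI event). The producer fails only after its first message has been delivered, so the consumer's `EndOfStream` branch never runs — the stored exception surfaces only because, as in the patch, `app_exc` is re-checked after the body stream is drained:

```python
import anyio
from typing import Optional

async def main() -> None:
    app_exc: Optional[Exception] = None
    send_stream, recv_stream = anyio.create_memory_object_stream()

    async def coro() -> None:
        nonlocal app_exc
        async with send_stream:
            await send_stream.send({"type": "http.response.start"})
            try:
                # The failure happens *after* the first message was delivered.
                raise RuntimeError("raised inside the sub-application")
            except Exception as exc:
                app_exc = exc

    async with anyio.create_task_group() as tg:
        tg.start_soon(coro)
        message = await recv_stream.receive()  # succeeds -> EndOfStream never hit
        assert message["type"] == "http.response.start"
        async with recv_stream:
            async for _ in recv_stream:        # drain the body
                pass
        if app_exc is not None:                # the fix: re-raise after draining
            raise app_exc

anyio.run(main)  # raises RuntimeError("raised inside the sub-application")
```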
gh_patches_debug_5943 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-5320 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Configuration: User-Arguments are mandatory
### Description
See the following Screenshot.

I don't think user args should be mandatory, since the documentation says "additionally specify arguments".
### Steps to reproduce
1. Click on "Configuration"
2. Fill in some Values
3. Click "Save"
4. User-Args are mandatory
In order to save a Configuration, the user now has to enter an empty JSON object, which is not very user-friendly.
### Webserver logs
```bash
not applicable
```
### Browser logs
_No response_
### Paperless-ngx version
2.3.2
### Host OS
Synology DSM 6.2.X
### Installation method
Docker - official image
### Browser
Chrome
### Configuration changes
none of interest
### Other
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/paperless/serialisers.py`
Content:
```
1 import logging
2
3 from django.contrib.auth.models import Group
4 from django.contrib.auth.models import Permission
5 from django.contrib.auth.models import User
6 from rest_framework import serializers
7
8 from paperless.models import ApplicationConfiguration
9
10 logger = logging.getLogger("paperless.settings")
11
12
13 class ObfuscatedUserPasswordField(serializers.Field):
14 """
15 Sends *** string instead of password in the clear
16 """
17
18 def to_representation(self, value):
19 return "**********" if len(value) > 0 else ""
20
21 def to_internal_value(self, data):
22 return data
23
24
25 class UserSerializer(serializers.ModelSerializer):
26 password = ObfuscatedUserPasswordField(required=False)
27 user_permissions = serializers.SlugRelatedField(
28 many=True,
29 queryset=Permission.objects.all(),
30 slug_field="codename",
31 required=False,
32 )
33 inherited_permissions = serializers.SerializerMethodField()
34
35 class Meta:
36 model = User
37 fields = (
38 "id",
39 "username",
40 "email",
41 "password",
42 "first_name",
43 "last_name",
44 "date_joined",
45 "is_staff",
46 "is_active",
47 "is_superuser",
48 "groups",
49 "user_permissions",
50 "inherited_permissions",
51 )
52
53 def get_inherited_permissions(self, obj):
54 return obj.get_group_permissions()
55
56 def update(self, instance, validated_data):
57 if "password" in validated_data:
58 if len(validated_data.get("password").replace("*", "")) > 0:
59 instance.set_password(validated_data.get("password"))
60 instance.save()
61 validated_data.pop("password")
62 super().update(instance, validated_data)
63 return instance
64
65 def create(self, validated_data):
66 groups = None
67 if "groups" in validated_data:
68 groups = validated_data.pop("groups")
69 user_permissions = None
70 if "user_permissions" in validated_data:
71 user_permissions = validated_data.pop("user_permissions")
72 password = None
73 if (
74 "password" in validated_data
75 and len(validated_data.get("password").replace("*", "")) > 0
76 ):
77 password = validated_data.pop("password")
78 user = User.objects.create(**validated_data)
79 # set groups
80 if groups:
81 user.groups.set(groups)
82 # set permissions
83 if user_permissions:
84 user.user_permissions.set(user_permissions)
85 # set password
86 if password:
87 user.set_password(password)
88 user.save()
89 return user
90
91
92 class GroupSerializer(serializers.ModelSerializer):
93 permissions = serializers.SlugRelatedField(
94 many=True,
95 queryset=Permission.objects.all(),
96 slug_field="codename",
97 )
98
99 class Meta:
100 model = Group
101 fields = (
102 "id",
103 "name",
104 "permissions",
105 )
106
107
108 class ProfileSerializer(serializers.ModelSerializer):
109 email = serializers.EmailField(allow_null=False)
110 password = ObfuscatedUserPasswordField(required=False, allow_null=False)
111 auth_token = serializers.SlugRelatedField(read_only=True, slug_field="key")
112
113 class Meta:
114 model = User
115 fields = (
116 "email",
117 "password",
118 "first_name",
119 "last_name",
120 "auth_token",
121 )
122
123
124 class ApplicationConfigurationSerializer(serializers.ModelSerializer):
125 user_args = serializers.JSONField(binary=True)
126
127 class Meta:
128 model = ApplicationConfiguration
129 fields = "__all__"
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/paperless/serialisers.py b/src/paperless/serialisers.py
--- a/src/paperless/serialisers.py
+++ b/src/paperless/serialisers.py
@@ -122,7 +122,12 @@
class ApplicationConfigurationSerializer(serializers.ModelSerializer):
- user_args = serializers.JSONField(binary=True)
+ user_args = serializers.JSONField(binary=True, allow_null=True)
+
+ def run_validation(self, data):
+ if "user_args" in data and data["user_args"] == "":
+ data["user_args"] = None
+ return super().run_validation(data)
class Meta:
model = ApplicationConfiguration
| {"golden_diff": "diff --git a/src/paperless/serialisers.py b/src/paperless/serialisers.py\n--- a/src/paperless/serialisers.py\n+++ b/src/paperless/serialisers.py\n@@ -122,7 +122,12 @@\n \n \n class ApplicationConfigurationSerializer(serializers.ModelSerializer):\n- user_args = serializers.JSONField(binary=True)\n+ user_args = serializers.JSONField(binary=True, allow_null=True)\n+\n+ def run_validation(self, data):\n+ if \"user_args\" in data and data[\"user_args\"] == \"\":\n+ data[\"user_args\"] = None\n+ return super().run_validation(data)\n \n class Meta:\n model = ApplicationConfiguration\n", "issue": "Configuration: User-Arguments are mandatory\n### Description\r\n\r\nSee the following Screenshot. \r\n\r\n\r\n\r\nI don't think, user args should be mandatory, since the documentation says \"additionally specify arguments\".\r\n\r\n### Steps to reproduce\r\n\r\n1. Click on \"Configuration\"\r\n2. Fill in some Values\r\n3. Click \"Save\"\r\n4. User-Args are mandatory\r\n\r\nIn order to save a Configuration, the user now has to enter an empty json which is not very user-friendly.\r\n\r\n### Webserver logs\r\n\r\n```bash\r\nnot applicable\r\n```\r\n\r\n\r\n### Browser logs\r\n\r\n_No response_\r\n\r\n### Paperless-ngx version\r\n\r\n2.3.2\r\n\r\n### Host OS\r\n\r\nSynology DSM 6.2.X\r\n\r\n### Installation method\r\n\r\nDocker - official image\r\n\r\n### Browser\r\n\r\nChrome\r\n\r\n### Configuration changes\r\n\r\nnone of interest\r\n\r\n### Other\r\n\r\n_No response_\r\n\r\n### Please confirm the following\r\n\r\n- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.\r\n- [X] I have already searched for relevant existing issues and discussions before opening this report.\r\n- [X] I have updated the title field above with a concise description.\n", "before_files": [{"content": "import logging\n\nfrom django.contrib.auth.models import Group\nfrom django.contrib.auth.models import Permission\nfrom django.contrib.auth.models import User\nfrom rest_framework import serializers\n\nfrom paperless.models import ApplicationConfiguration\n\nlogger = logging.getLogger(\"paperless.settings\")\n\n\nclass ObfuscatedUserPasswordField(serializers.Field):\n \"\"\"\n Sends *** string instead of password in the clear\n \"\"\"\n\n def to_representation(self, value):\n return \"**********\" if len(value) > 0 else \"\"\n\n def to_internal_value(self, data):\n return data\n\n\nclass UserSerializer(serializers.ModelSerializer):\n password = ObfuscatedUserPasswordField(required=False)\n user_permissions = serializers.SlugRelatedField(\n many=True,\n queryset=Permission.objects.all(),\n slug_field=\"codename\",\n required=False,\n )\n inherited_permissions = serializers.SerializerMethodField()\n\n class Meta:\n model = User\n fields = (\n \"id\",\n \"username\",\n \"email\",\n \"password\",\n \"first_name\",\n \"last_name\",\n \"date_joined\",\n \"is_staff\",\n \"is_active\",\n \"is_superuser\",\n \"groups\",\n \"user_permissions\",\n \"inherited_permissions\",\n )\n\n def get_inherited_permissions(self, obj):\n return obj.get_group_permissions()\n\n def update(self, instance, validated_data):\n if \"password\" in validated_data:\n if len(validated_data.get(\"password\").replace(\"*\", \"\")) > 0:\n instance.set_password(validated_data.get(\"password\"))\n instance.save()\n validated_data.pop(\"password\")\n super().update(instance, validated_data)\n return instance\n\n def create(self, validated_data):\n groups = None\n if \"groups\" in 
validated_data:\n groups = validated_data.pop(\"groups\")\n user_permissions = None\n if \"user_permissions\" in validated_data:\n user_permissions = validated_data.pop(\"user_permissions\")\n password = None\n if (\n \"password\" in validated_data\n and len(validated_data.get(\"password\").replace(\"*\", \"\")) > 0\n ):\n password = validated_data.pop(\"password\")\n user = User.objects.create(**validated_data)\n # set groups\n if groups:\n user.groups.set(groups)\n # set permissions\n if user_permissions:\n user.user_permissions.set(user_permissions)\n # set password\n if password:\n user.set_password(password)\n user.save()\n return user\n\n\nclass GroupSerializer(serializers.ModelSerializer):\n permissions = serializers.SlugRelatedField(\n many=True,\n queryset=Permission.objects.all(),\n slug_field=\"codename\",\n )\n\n class Meta:\n model = Group\n fields = (\n \"id\",\n \"name\",\n \"permissions\",\n )\n\n\nclass ProfileSerializer(serializers.ModelSerializer):\n email = serializers.EmailField(allow_null=False)\n password = ObfuscatedUserPasswordField(required=False, allow_null=False)\n auth_token = serializers.SlugRelatedField(read_only=True, slug_field=\"key\")\n\n class Meta:\n model = User\n fields = (\n \"email\",\n \"password\",\n \"first_name\",\n \"last_name\",\n \"auth_token\",\n )\n\n\nclass ApplicationConfigurationSerializer(serializers.ModelSerializer):\n user_args = serializers.JSONField(binary=True)\n\n class Meta:\n model = ApplicationConfiguration\n fields = \"__all__\"\n", "path": "src/paperless/serialisers.py"}], "after_files": [{"content": "import logging\n\nfrom django.contrib.auth.models import Group\nfrom django.contrib.auth.models import Permission\nfrom django.contrib.auth.models import User\nfrom rest_framework import serializers\n\nfrom paperless.models import ApplicationConfiguration\n\nlogger = logging.getLogger(\"paperless.settings\")\n\n\nclass ObfuscatedUserPasswordField(serializers.Field):\n \"\"\"\n Sends *** string instead of password in the clear\n \"\"\"\n\n def to_representation(self, value):\n return \"**********\" if len(value) > 0 else \"\"\n\n def to_internal_value(self, data):\n return data\n\n\nclass UserSerializer(serializers.ModelSerializer):\n password = ObfuscatedUserPasswordField(required=False)\n user_permissions = serializers.SlugRelatedField(\n many=True,\n queryset=Permission.objects.all(),\n slug_field=\"codename\",\n required=False,\n )\n inherited_permissions = serializers.SerializerMethodField()\n\n class Meta:\n model = User\n fields = (\n \"id\",\n \"username\",\n \"email\",\n \"password\",\n \"first_name\",\n \"last_name\",\n \"date_joined\",\n \"is_staff\",\n \"is_active\",\n \"is_superuser\",\n \"groups\",\n \"user_permissions\",\n \"inherited_permissions\",\n )\n\n def get_inherited_permissions(self, obj):\n return obj.get_group_permissions()\n\n def update(self, instance, validated_data):\n if \"password\" in validated_data:\n if len(validated_data.get(\"password\").replace(\"*\", \"\")) > 0:\n instance.set_password(validated_data.get(\"password\"))\n instance.save()\n validated_data.pop(\"password\")\n super().update(instance, validated_data)\n return instance\n\n def create(self, validated_data):\n groups = None\n if \"groups\" in validated_data:\n groups = validated_data.pop(\"groups\")\n user_permissions = None\n if \"user_permissions\" in validated_data:\n user_permissions = validated_data.pop(\"user_permissions\")\n password = None\n if (\n \"password\" in validated_data\n and 
len(validated_data.get(\"password\").replace(\"*\", \"\")) > 0\n ):\n password = validated_data.pop(\"password\")\n user = User.objects.create(**validated_data)\n # set groups\n if groups:\n user.groups.set(groups)\n # set permissions\n if user_permissions:\n user.user_permissions.set(user_permissions)\n # set password\n if password:\n user.set_password(password)\n user.save()\n return user\n\n\nclass GroupSerializer(serializers.ModelSerializer):\n permissions = serializers.SlugRelatedField(\n many=True,\n queryset=Permission.objects.all(),\n slug_field=\"codename\",\n )\n\n class Meta:\n model = Group\n fields = (\n \"id\",\n \"name\",\n \"permissions\",\n )\n\n\nclass ProfileSerializer(serializers.ModelSerializer):\n email = serializers.EmailField(allow_null=False)\n password = ObfuscatedUserPasswordField(required=False, allow_null=False)\n auth_token = serializers.SlugRelatedField(read_only=True, slug_field=\"key\")\n\n class Meta:\n model = User\n fields = (\n \"email\",\n \"password\",\n \"first_name\",\n \"last_name\",\n \"auth_token\",\n )\n\n\nclass ApplicationConfigurationSerializer(serializers.ModelSerializer):\n user_args = serializers.JSONField(binary=True, allow_null=True)\n\n def run_validation(self, data):\n if \"user_args\" in data and data[\"user_args\"] == \"\":\n data[\"user_args\"] = None\n return super().run_validation(data)\n\n class Meta:\n model = ApplicationConfiguration\n fields = \"__all__\"\n", "path": "src/paperless/serialisers.py"}]} | 1,555 | 150 |
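
The serializer fix runs fine outside paperless; below is a reduced sketch with a bare DRF `Serializer` standing in for the `ModelSerializer` (the `settings.configure()` call is just enough Django bootstrap to run it standalone — an assumption, since the real code lives inside a configured project):

```python
import django
from django.conf import settings

settings.configure()  # minimal Django setup so DRF can run without a project
django.setup()

from rest_framework import serializers

class ConfigSerializer(serializers.Serializer):
    # allow_null=True is what stops DRF from rejecting a null/omitted value
    user_args = serializers.JSONField(binary=True, allow_null=True)

    def run_validation(self, data=serializers.empty):
        # Coerce the empty string the frontend submits into an explicit null
        if isinstance(data, dict) and data.get("user_args") == "":
            data = {**data, "user_args": None}
        return super().run_validation(data)

s = ConfigSerializer(data={"user_args": ""})
assert s.is_valid(), s.errors
assert s.validated_data["user_args"] is None
```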
gh_patches_debug_34233 | rasdani/github-patches | git_diff | bokeh__bokeh-10360 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
webdriver failing to find installed firefox/geckodriver
Can't get the export examples running on Binder even though everything is installed and on the `PATH`:
<img width="1130" alt="Screen Shot 2020-06-02 at 8 23 16 PM" src="https://user-images.githubusercontent.com/1078448/83592297-e9459c80-a50e-11ea-90d1-7189fcc93af0.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/io/webdriver.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 '''
8
9 '''
10
11 #-----------------------------------------------------------------------------
12 # Boilerplate
13 #-----------------------------------------------------------------------------
14 import logging # isort:skip
15 log = logging.getLogger(__name__)
16
17 #-----------------------------------------------------------------------------
18 # Imports
19 #-----------------------------------------------------------------------------
20
21 from ..util.dependencies import import_required # isort:skip
22 import_required("selenium.webdriver",
23 "To use bokeh.io image export functions you need selenium "
24 "('conda install selenium' or 'pip install selenium')")
25
26 # Standard library imports
27 import atexit
28 import shutil
29 from os.path import devnull
30 from typing import List, Optional
31
32 # External imports
33 from selenium import webdriver
34 from selenium.webdriver.remote.webdriver import WebDriver
35 from typing_extensions import Literal
36
37 #-----------------------------------------------------------------------------
38 # Globals and constants
39 #-----------------------------------------------------------------------------
40
41 DriverKind = Literal["firefox", "chromium"]
42
43 __all__ = (
44 'webdriver_control',
45 )
46
47 #-----------------------------------------------------------------------------
48 # General API
49 #-----------------------------------------------------------------------------
50
51 #-----------------------------------------------------------------------------
52 # Dev API
53 #-----------------------------------------------------------------------------
54
55 def create_firefox_webdriver() -> WebDriver:
56 from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
57 binary = FirefoxBinary(_detect("firefox"))
58 options = webdriver.firefox.options.Options()
59 options.add_argument("--headless")
60 return webdriver.Firefox(firefox_binary=binary, options=options, service_log_path=devnull)
61
62 def create_chromium_webdriver() -> WebDriver:
63 options = webdriver.chrome.options.Options()
64 options.add_argument("--headless")
65 options.add_argument("--hide-scrollbars")
66 options.add_argument("--force-device-scale-factor=1")
67 options.add_argument("--force-color-profile=srgb")
68 return webdriver.Chrome(options=options)
69
70 #-----------------------------------------------------------------------------
71 # Private API
72 #-----------------------------------------------------------------------------
73
74 def _detect(executable: str) -> Optional[str]:
75 return shutil.which(executable)
76
77 def _try_create_firefox_webdriver() -> Optional[WebDriver]:
78 try:
79 return create_firefox_webdriver()
80 except Exception:
81 return None
82
83 def _try_create_chromium_webdriver() -> Optional[WebDriver]:
84 try:
85 return create_chromium_webdriver()
86 except Exception:
87 return None
88
89 class _WebdriverState:
90 '''
91
92 '''
93
94 reuse: bool
95 kind: Optional[DriverKind]
96
97 current: Optional[WebDriver]
98 _drivers: List[WebDriver]
99
100 def __init__(self, *, kind: Optional[DriverKind] = None, reuse: bool = True):
101 self.kind = kind
102 self.reuse = reuse
103 self.current = None
104 self._drivers = set()
105
106 def terminate(self, driver: WebDriver) -> None:
107 self._drivers.remove(driver)
108 driver.quit()
109
110 def reset(self) -> None:
111 if self.current is not None:
112 self.terminate(self.current)
113 self.current = None
114
115 def get(self) -> WebDriver:
116 if not self.reuse or self.current is None:
117 self.reset()
118 self.current = self.create()
119 return self.current
120
121 def create(self, kind: Optional[DriverKind] = None) -> WebDriver:
122 driver = self._create(kind)
123 self._drivers.add(driver)
124 return driver
125
126 def _create(self, kind: Optional[DriverKind]) -> WebDriver:
127 driver_kind = kind or self.kind
128
129 if driver_kind is None:
130 driver = _try_create_chromium_webdriver()
131 if driver is not None:
132 self.kind = "chromium"
133 return driver
134
135 driver = _try_create_firefox_webdriver()
136 if driver is not None:
137 self.kind = "firefox"
138 return driver
139
140 raise RuntimeError("Neither firefox and geckodriver nor a variant of chromium browser and " \
141 "chromedriver are available on system PATH. You can install the former " \
142 "with 'conda install -c conda-forge firefox geckodriver'.")
143 elif driver_kind == "chromium":
144 return create_chromium_webdriver()
145 elif driver_kind == "firefox":
146 return create_firefox_webdriver()
147 else:
148 raise ValueError(f"'{driver_kind}' is not a recognized webdriver kind")
149
150 def cleanup(self) -> None:
151 self.reset()
152 for driver in list(self._drivers):
153 self.terminate(driver)
154
155 #-----------------------------------------------------------------------------
156 # Code
157 #-----------------------------------------------------------------------------
158
159 webdriver_control = _WebdriverState()
160
161 atexit.register(lambda: webdriver_control.cleanup())
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bokeh/io/webdriver.py b/bokeh/io/webdriver.py
--- a/bokeh/io/webdriver.py
+++ b/bokeh/io/webdriver.py
@@ -25,8 +25,9 @@
# Standard library imports
import atexit
-import shutil
-from os.path import devnull
+import os
+from os.path import devnull, dirname, isfile, join
+from shutil import which
from typing import List, Optional
# External imports
@@ -53,11 +54,38 @@
#-----------------------------------------------------------------------------
def create_firefox_webdriver() -> WebDriver:
+ firefox = which("firefox")
+ if firefox is None:
+ raise RuntimeError("firefox is not installed or not present on PATH")
+
+ geckodriver = which("geckodriver")
+ if geckodriver is None:
+ raise RuntimeError("geckodriver is not installed or not present on PATH")
+
+ firefox_paths = [
+ join(dirname(firefox), "FirefoxApp", "firefox"),
+ join(dirname(firefox), "FirefoxApp", "Contents", "MacOS", "firefox"),
+ ]
+
+ for firefox_path in firefox_paths:
+ if _is_executable(firefox_path):
+ binary_path = firefox_path
+ break
+ else:
+ binary_path = firefox
+
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
- binary = FirefoxBinary(_detect("firefox"))
+ binary = FirefoxBinary(binary_path)
+
options = webdriver.firefox.options.Options()
options.add_argument("--headless")
- return webdriver.Firefox(firefox_binary=binary, options=options, service_log_path=devnull)
+
+ return webdriver.Firefox(
+ options=options,
+ firefox_binary=binary,
+ executable_path=geckodriver,
+ service_log_path=devnull,
+ )
def create_chromium_webdriver() -> WebDriver:
options = webdriver.chrome.options.Options()
@@ -71,8 +99,8 @@
# Private API
#-----------------------------------------------------------------------------
-def _detect(executable: str) -> Optional[str]:
- return shutil.which(executable)
+def _is_executable(path: str) -> bool:
+ return isfile(path) and os.access(path, os.X_OK)
def _try_create_firefox_webdriver() -> Optional[WebDriver]:
try:
| {"golden_diff": "diff --git a/bokeh/io/webdriver.py b/bokeh/io/webdriver.py\n--- a/bokeh/io/webdriver.py\n+++ b/bokeh/io/webdriver.py\n@@ -25,8 +25,9 @@\n \n # Standard library imports\n import atexit\n-import shutil\n-from os.path import devnull\n+import os\n+from os.path import devnull, dirname, isfile, join\n+from shutil import which\n from typing import List, Optional\n \n # External imports\n@@ -53,11 +54,38 @@\n #-----------------------------------------------------------------------------\n \n def create_firefox_webdriver() -> WebDriver:\n+ firefox = which(\"firefox\")\n+ if firefox is None:\n+ raise RuntimeError(\"firefox is not installed or not present on PATH\")\n+\n+ geckodriver = which(\"geckodriver\")\n+ if geckodriver is None:\n+ raise RuntimeError(\"geckodriver is not installed or not present on PATH\")\n+\n+ firefox_paths = [\n+ join(dirname(firefox), \"FirefoxApp\", \"firefox\"),\n+ join(dirname(firefox), \"FirefoxApp\", \"Contents\", \"MacOS\", \"firefox\"),\n+ ]\n+\n+ for firefox_path in firefox_paths:\n+ if _is_executable(firefox_path):\n+ binary_path = firefox_path\n+ break\n+ else:\n+ binary_path = firefox\n+\n from selenium.webdriver.firefox.firefox_binary import FirefoxBinary\n- binary = FirefoxBinary(_detect(\"firefox\"))\n+ binary = FirefoxBinary(binary_path)\n+\n options = webdriver.firefox.options.Options()\n options.add_argument(\"--headless\")\n- return webdriver.Firefox(firefox_binary=binary, options=options, service_log_path=devnull)\n+\n+ return webdriver.Firefox(\n+ options=options,\n+ firefox_binary=binary,\n+ executable_path=geckodriver,\n+ service_log_path=devnull,\n+ )\n \n def create_chromium_webdriver() -> WebDriver:\n options = webdriver.chrome.options.Options()\n@@ -71,8 +99,8 @@\n # Private API\n #-----------------------------------------------------------------------------\n \n-def _detect(executable: str) -> Optional[str]:\n- return shutil.which(executable)\n+def _is_executable(path: str) -> bool:\n+ return isfile(path) and os.access(path, os.X_OK)\n \n def _try_create_firefox_webdriver() -> Optional[WebDriver]:\n try:\n", "issue": "webdriver failing to find installed firefox/geckodriver\nCan't get export examples running on binder even though everything is installed and on the path:\r\n\r\n<img width=\"1130\" alt=\"Screen Shot 2020-06-02 at 8 23 16 PM\" src=\"https://user-images.githubusercontent.com/1078448/83592297-e9459c80-a50e-11ea-90d1-7189fcc93af0.png\">\r\n\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n'''\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\nfrom ..util.dependencies import import_required # isort:skip\nimport_required(\"selenium.webdriver\",\n \"To use bokeh.io image export functions you need selenium \"\n \"('conda install selenium' or 'pip install selenium')\")\n\n# Standard library imports\nimport atexit\nimport shutil\nfrom 
os.path import devnull\nfrom typing import List, Optional\n\n# External imports\nfrom selenium import webdriver\nfrom selenium.webdriver.remote.webdriver import WebDriver\nfrom typing_extensions import Literal\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\nDriverKind = Literal[\"firefox\", \"chromium\"]\n\n__all__ = (\n 'webdriver_control',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\ndef create_firefox_webdriver() -> WebDriver:\n from selenium.webdriver.firefox.firefox_binary import FirefoxBinary\n binary = FirefoxBinary(_detect(\"firefox\"))\n options = webdriver.firefox.options.Options()\n options.add_argument(\"--headless\")\n return webdriver.Firefox(firefox_binary=binary, options=options, service_log_path=devnull)\n\ndef create_chromium_webdriver() -> WebDriver:\n options = webdriver.chrome.options.Options()\n options.add_argument(\"--headless\")\n options.add_argument(\"--hide-scrollbars\")\n options.add_argument(\"--force-device-scale-factor=1\")\n options.add_argument(\"--force-color-profile=srgb\")\n return webdriver.Chrome(options=options)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\ndef _detect(executable: str) -> Optional[str]:\n return shutil.which(executable)\n\ndef _try_create_firefox_webdriver() -> Optional[WebDriver]:\n try:\n return create_firefox_webdriver()\n except Exception:\n return None\n\ndef _try_create_chromium_webdriver() -> Optional[WebDriver]:\n try:\n return create_chromium_webdriver()\n except Exception:\n return None\n\nclass _WebdriverState:\n '''\n\n '''\n\n reuse: bool\n kind: Optional[DriverKind]\n\n current: Optional[WebDriver]\n _drivers: List[WebDriver]\n\n def __init__(self, *, kind: Optional[DriverKind] = None, reuse: bool = True):\n self.kind = kind\n self.reuse = reuse\n self.current = None\n self._drivers = set()\n\n def terminate(self, driver: WebDriver) -> None:\n self._drivers.remove(driver)\n driver.quit()\n\n def reset(self) -> None:\n if self.current is not None:\n self.terminate(self.current)\n self.current = None\n\n def get(self) -> WebDriver:\n if not self.reuse or self.current is None:\n self.reset()\n self.current = self.create()\n return self.current\n\n def create(self, kind: Optional[DriverKind] = None) -> WebDriver:\n driver = self._create(kind)\n self._drivers.add(driver)\n return driver\n\n def _create(self, kind: Optional[DriverKind]) -> WebDriver:\n driver_kind = kind or self.kind\n\n if driver_kind is None:\n driver = _try_create_chromium_webdriver()\n if driver is not None:\n self.kind = \"chromium\"\n return driver\n\n driver = _try_create_firefox_webdriver()\n if driver is not None:\n self.kind = \"firefox\"\n return driver\n\n raise RuntimeError(\"Neither firefox and geckodriver nor a variant of chromium browser and \" \\\n \"chromedriver are available on system PATH. 
You can install the former \" \\\n \"with 'conda install -c conda-forge firefox geckodriver'.\")\n elif driver_kind == \"chromium\":\n return create_chromium_webdriver()\n elif driver_kind == \"firefox\":\n return create_firefox_webdriver()\n else:\n raise ValueError(f\"'{driver_kind}' is not a recognized webdriver kind\")\n\n def cleanup(self) -> None:\n self.reset()\n for driver in list(self._drivers):\n self.terminate(driver)\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n\nwebdriver_control = _WebdriverState()\n\natexit.register(lambda: webdriver_control.cleanup())\n", "path": "bokeh/io/webdriver.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2020, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n'''\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\nfrom ..util.dependencies import import_required # isort:skip\nimport_required(\"selenium.webdriver\",\n \"To use bokeh.io image export functions you need selenium \"\n \"('conda install selenium' or 'pip install selenium')\")\n\n# Standard library imports\nimport atexit\nimport os\nfrom os.path import devnull, dirname, isfile, join\nfrom shutil import which\nfrom typing import List, Optional\n\n# External imports\nfrom selenium import webdriver\nfrom selenium.webdriver.remote.webdriver import WebDriver\nfrom typing_extensions import Literal\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\nDriverKind = Literal[\"firefox\", \"chromium\"]\n\n__all__ = (\n 'webdriver_control',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\ndef create_firefox_webdriver() -> WebDriver:\n firefox = which(\"firefox\")\n if firefox is None:\n raise RuntimeError(\"firefox is not installed or not present on PATH\")\n\n geckodriver = which(\"geckodriver\")\n if geckodriver is None:\n raise RuntimeError(\"geckodriver is not installed or not present on PATH\")\n\n firefox_paths = [\n join(dirname(firefox), \"FirefoxApp\", \"firefox\"),\n join(dirname(firefox), \"FirefoxApp\", \"Contents\", \"MacOS\", \"firefox\"),\n ]\n\n for firefox_path in firefox_paths:\n if _is_executable(firefox_path):\n binary_path = firefox_path\n break\n else:\n binary_path = firefox\n\n from selenium.webdriver.firefox.firefox_binary import FirefoxBinary\n binary = FirefoxBinary(binary_path)\n\n options = webdriver.firefox.options.Options()\n options.add_argument(\"--headless\")\n\n return 
webdriver.Firefox(\n options=options,\n firefox_binary=binary,\n executable_path=geckodriver,\n service_log_path=devnull,\n )\n\ndef create_chromium_webdriver() -> WebDriver:\n options = webdriver.chrome.options.Options()\n options.add_argument(\"--headless\")\n options.add_argument(\"--hide-scrollbars\")\n options.add_argument(\"--force-device-scale-factor=1\")\n options.add_argument(\"--force-color-profile=srgb\")\n return webdriver.Chrome(options=options)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\ndef _is_executable(path: str) -> bool:\n return isfile(path) and os.access(path, os.X_OK)\n\ndef _try_create_firefox_webdriver() -> Optional[WebDriver]:\n try:\n return create_firefox_webdriver()\n except Exception:\n return None\n\ndef _try_create_chromium_webdriver() -> Optional[WebDriver]:\n try:\n return create_chromium_webdriver()\n except Exception:\n return None\n\nclass _WebdriverState:\n '''\n\n '''\n\n reuse: bool\n kind: Optional[DriverKind]\n\n current: Optional[WebDriver]\n _drivers: List[WebDriver]\n\n def __init__(self, *, kind: Optional[DriverKind] = None, reuse: bool = True):\n self.kind = kind\n self.reuse = reuse\n self.current = None\n self._drivers = set()\n\n def terminate(self, driver: WebDriver) -> None:\n self._drivers.remove(driver)\n driver.quit()\n\n def reset(self) -> None:\n if self.current is not None:\n self.terminate(self.current)\n self.current = None\n\n def get(self) -> WebDriver:\n if not self.reuse or self.current is None:\n self.reset()\n self.current = self.create()\n return self.current\n\n def create(self, kind: Optional[DriverKind] = None) -> WebDriver:\n driver = self._create(kind)\n self._drivers.add(driver)\n return driver\n\n def _create(self, kind: Optional[DriverKind]) -> WebDriver:\n driver_kind = kind or self.kind\n\n if driver_kind is None:\n driver = _try_create_chromium_webdriver()\n if driver is not None:\n self.kind = \"chromium\"\n return driver\n\n driver = _try_create_firefox_webdriver()\n if driver is not None:\n self.kind = \"firefox\"\n return driver\n\n raise RuntimeError(\"Neither firefox and geckodriver nor a variant of chromium browser and \" \\\n \"chromedriver are available on system PATH. You can install the former \" \\\n \"with 'conda install -c conda-forge firefox geckodriver'.\")\n elif driver_kind == \"chromium\":\n return create_chromium_webdriver()\n elif driver_kind == \"firefox\":\n return create_firefox_webdriver()\n else:\n raise ValueError(f\"'{driver_kind}' is not a recognized webdriver kind\")\n\n def cleanup(self) -> None:\n self.reset()\n for driver in list(self._drivers):\n self.terminate(driver)\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n\nwebdriver_control = _WebdriverState()\n\natexit.register(lambda: webdriver_control.cleanup())\n", "path": "bokeh/io/webdriver.py"}]} | 1,733 | 530 |
gh_patches_debug_38225 | rasdani/github-patches | git_diff | medtagger__MedTagger-506 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Check for sent Label in E2E Tests
## Current Behavior
There are no checks on Labels sent to the REST API.
## Expected Behavior
E2E Tests should also check whether the Label on the backend side was properly created.
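
A minimal sketch of what such a backend-side check could look like (the endpoint paths, payload shape, and field names below are assumptions for illustration, not MedTagger's actual API):

```python
import requests

API_URL = "http://localhost:51000/api/v1"  # hypothetical base URL


def test_label_is_created_on_backend(session_token):
    # Send a Label the same way the E2E scenario does.
    payload = {"elements": [{"x": 0.5, "y": 0.5, "tag": "EXAMPLE_TAG"}]}
    headers = {"Authorization": "Bearer " + session_token}
    response = requests.post(API_URL + "/scans/SCAN_ID/label",  # hypothetical endpoint
                             json=payload, headers=headers)
    assert response.status_code == 201

    # The missing piece: verify the Label actually exists on the backend.
    label_id = response.json()["label_id"]  # hypothetical response field
    created = requests.get(API_URL + "/labels/" + label_id, headers=headers)
    assert created.status_code == 200
    assert created.json()["elements"] == payload["elements"]
```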
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/medtagger/config.py`
Content:
```
1 """Module responsible for reading data from application configuration."""
2 import os
3 from typing import Any
4
5
6 class AppConfiguration:
7 """Class that represents application configuration."""
8
9 def __init__(self) -> None:
10 """Initialize application configuration."""
11 pass
12
13 @staticmethod
14 def get(namespace: str, key: str, fallback: Any = None) -> Any:
15 """Return value of a given configuration entry.
16
17 :param namespace: name of a namespace for given entry
18 :param key: key for which it should return value from given namespace
19 :param fallback: default value returned if key was not found
20 :return: value for given entry
21 """
22 variable_name = 'MEDTAGGER__' + namespace.upper() + '_' + key.upper()
23 return os.environ.get(variable_name, fallback)
24
25 @staticmethod
26 def getint(namespace: str, key: str, fallback: int = 0) -> int:
27 """Return integer value for given key in namespace."""
28 return int(AppConfiguration.get(namespace, key, fallback))
29
30 @staticmethod
31 def getboolean(namespace: str, key: str, fallback: bool = False) -> bool:
32 """Return boolean value for given key in namespace."""
33 return bool(AppConfiguration.getint(namespace, key, fallback))
34
```
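
As a usage note for the naming scheme implemented in `AppConfiguration.get` above, a lookup like the following (values are illustrative) resolves to the environment variable `MEDTAGGER__API_HOST`:

```python
import os

os.environ["MEDTAGGER__API_HOST"] = "0.0.0.0"  # illustrative value

from medtagger.config import AppConfiguration

host = AppConfiguration.get("api", "host", fallback="localhost")  # -> "0.0.0.0"
port = AppConfiguration.getint("api", "port", fallback=8000)      # -> 8000 (variable unset)
```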
Path: `backend/medtagger/api/exceptions.py`
Content:
```
1 """Exceptions used across whole API."""
2 from medtagger.exceptions import MedTaggerException
3
4
5 class BaseHTTPException(MedTaggerException):
6 """Base class for all HTTP Exceptions."""
7
8 pass
9
10
11 class UnauthorizedException(BaseHTTPException):
12 """Exception designed to use once there was an authorization error during business logic processing."""
13
14 pass
15
16
17 class NotFoundException(BaseHTTPException):
18 """Exception designed to use while the object that user was looking for could not be found."""
19
20 pass
21
22
23 class InvalidArgumentsException(BaseHTTPException):
24 """Exception designed to use with invalid arguments (400 status code)."""
25
26 pass
27
28
29 class AccessForbiddenException(BaseHTTPException):
30 """Exception designed to use while the user does not have a privilege to perform action."""
31
32 pass
33
```
Path: `backend/medtagger/exceptions.py`
Content:
```
1 """All available Exceptions for whole project."""
2
3
4 class MedTaggerException(Exception):
5 """Base class for all HTTP Exceptions."""
6
7 pass
8
9
10 class UnsupportedActionException(MedTaggerException):
11 """Exception for unsupported Action."""
12
13 pass
14
15
16 class InvalidResponseException(MedTaggerException):
17 """Exception for invalid Response."""
18
19 pass
20
21
22 class InternalErrorException(MedTaggerException):
23 """Exception designed to use to indicate internal errors (like DB/Storage error)."""
24
25 pass
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/medtagger/api/exceptions.py b/backend/medtagger/api/exceptions.py
--- a/backend/medtagger/api/exceptions.py
+++ b/backend/medtagger/api/exceptions.py
@@ -5,28 +5,28 @@
class BaseHTTPException(MedTaggerException):
"""Base class for all HTTP Exceptions."""
- pass
+ pass # pylint: disable=unnecessary-pass
class UnauthorizedException(BaseHTTPException):
"""Exception designed to use once there was an authorization error during business logic processing."""
- pass
+ pass # pylint: disable=unnecessary-pass
class NotFoundException(BaseHTTPException):
"""Exception designed to use while the object that user was looking for could not be found."""
- pass
+ pass # pylint: disable=unnecessary-pass
class InvalidArgumentsException(BaseHTTPException):
"""Exception designed to use with invalid arguments (400 status code)."""
- pass
+ pass # pylint: disable=unnecessary-pass
class AccessForbiddenException(BaseHTTPException):
"""Exception designed to use while the user does not have a privilege to perform action."""
- pass
+ pass # pylint: disable=unnecessary-pass
diff --git a/backend/medtagger/config.py b/backend/medtagger/config.py
--- a/backend/medtagger/config.py
+++ b/backend/medtagger/config.py
@@ -6,10 +6,6 @@
class AppConfiguration:
"""Class that represents application configuration."""
- def __init__(self) -> None:
- """Initialize application configuration."""
- pass
-
@staticmethod
def get(namespace: str, key: str, fallback: Any = None) -> Any:
"""Return value of a given configuration entry.
diff --git a/backend/medtagger/exceptions.py b/backend/medtagger/exceptions.py
--- a/backend/medtagger/exceptions.py
+++ b/backend/medtagger/exceptions.py
@@ -4,22 +4,22 @@
class MedTaggerException(Exception):
"""Base class for all HTTP Exceptions."""
- pass
+ pass # pylint: disable=unnecessary-pass
class UnsupportedActionException(MedTaggerException):
"""Exception for unsupported Action."""
- pass
+ pass # pylint: disable=unnecessary-pass
class InvalidResponseException(MedTaggerException):
"""Exception for invalid Response."""
- pass
+ pass # pylint: disable=unnecessary-pass
class InternalErrorException(MedTaggerException):
"""Exception designed to use to indicate internal errors (like DB/Storage error)."""
- pass
+ pass # pylint: disable=unnecessary-pass
| {"golden_diff": "diff --git a/backend/medtagger/api/exceptions.py b/backend/medtagger/api/exceptions.py\n--- a/backend/medtagger/api/exceptions.py\n+++ b/backend/medtagger/api/exceptions.py\n@@ -5,28 +5,28 @@\n class BaseHTTPException(MedTaggerException):\n \"\"\"Base class for all HTTP Exceptions.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class UnauthorizedException(BaseHTTPException):\n \"\"\"Exception designed to use once there was an authorization error during business logic processing.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class NotFoundException(BaseHTTPException):\n \"\"\"Exception designed to use while the object that user was looking for could not be found.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class InvalidArgumentsException(BaseHTTPException):\n \"\"\"Exception designed to use with invalid arguments (400 status code).\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class AccessForbiddenException(BaseHTTPException):\n \"\"\"Exception designed to use while the user does not have a privilege to perform action.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\ndiff --git a/backend/medtagger/config.py b/backend/medtagger/config.py\n--- a/backend/medtagger/config.py\n+++ b/backend/medtagger/config.py\n@@ -6,10 +6,6 @@\n class AppConfiguration:\n \"\"\"Class that represents application configuration.\"\"\"\n \n- def __init__(self) -> None:\n- \"\"\"Initialize application configuration.\"\"\"\n- pass\n-\n @staticmethod\n def get(namespace: str, key: str, fallback: Any = None) -> Any:\n \"\"\"Return value of a given configuration entry.\ndiff --git a/backend/medtagger/exceptions.py b/backend/medtagger/exceptions.py\n--- a/backend/medtagger/exceptions.py\n+++ b/backend/medtagger/exceptions.py\n@@ -4,22 +4,22 @@\n class MedTaggerException(Exception):\n \"\"\"Base class for all HTTP Exceptions.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class UnsupportedActionException(MedTaggerException):\n \"\"\"Exception for unsupported Action.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class InvalidResponseException(MedTaggerException):\n \"\"\"Exception for invalid Response.\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n \n \n class InternalErrorException(MedTaggerException):\n \"\"\"Exception designed to use to indicate internal errors (like DB/Storage error).\"\"\"\n \n- pass\n+ pass # pylint: disable=unnecessary-pass\n", "issue": "Check for sent Label in E2E Tests\n## Current Behavior\r\n\r\nThere are no checks on Labels sent to REST API.\r\n\r\n## Expected Behavior\r\n\r\nE2E Tests should also check if Label on the backend side was properly created.\n", "before_files": [{"content": "\"\"\"Module responsible for reading data from application configuration.\"\"\"\nimport os\nfrom typing import Any\n\n\nclass AppConfiguration:\n \"\"\"Class that represents application configuration.\"\"\"\n\n def __init__(self) -> None:\n \"\"\"Initialize application configuration.\"\"\"\n pass\n\n @staticmethod\n def get(namespace: str, key: str, fallback: Any = None) -> Any:\n \"\"\"Return value of a given configuration entry.\n\n :param namespace: name of a namespace for given entry\n :param key: key for which it should return value from given namespace\n :param fallback: default value returned if key was not found\n :return: value for given entry\n \"\"\"\n variable_name = 'MEDTAGGER__' + namespace.upper() + '_' + key.upper()\n 
return os.environ.get(variable_name, fallback)\n\n @staticmethod\n def getint(namespace: str, key: str, fallback: int = 0) -> int:\n \"\"\"Return integer value for given key in namespace.\"\"\"\n return int(AppConfiguration.get(namespace, key, fallback))\n\n @staticmethod\n def getboolean(namespace: str, key: str, fallback: bool = False) -> bool:\n \"\"\"Return boolean value for given key in namespace.\"\"\"\n return bool(AppConfiguration.getint(namespace, key, fallback))\n", "path": "backend/medtagger/config.py"}, {"content": "\"\"\"Exceptions used across whole API.\"\"\"\nfrom medtagger.exceptions import MedTaggerException\n\n\nclass BaseHTTPException(MedTaggerException):\n \"\"\"Base class for all HTTP Exceptions.\"\"\"\n\n pass\n\n\nclass UnauthorizedException(BaseHTTPException):\n \"\"\"Exception designed to use once there was an authorization error during business logic processing.\"\"\"\n\n pass\n\n\nclass NotFoundException(BaseHTTPException):\n \"\"\"Exception designed to use while the object that user was looking for could not be found.\"\"\"\n\n pass\n\n\nclass InvalidArgumentsException(BaseHTTPException):\n \"\"\"Exception designed to use with invalid arguments (400 status code).\"\"\"\n\n pass\n\n\nclass AccessForbiddenException(BaseHTTPException):\n \"\"\"Exception designed to use while the user does not have a privilege to perform action.\"\"\"\n\n pass\n", "path": "backend/medtagger/api/exceptions.py"}, {"content": "\"\"\"All available Exceptions for whole project.\"\"\"\n\n\nclass MedTaggerException(Exception):\n \"\"\"Base class for all HTTP Exceptions.\"\"\"\n\n pass\n\n\nclass UnsupportedActionException(MedTaggerException):\n \"\"\"Exception for unsupported Action.\"\"\"\n\n pass\n\n\nclass InvalidResponseException(MedTaggerException):\n \"\"\"Exception for invalid Response.\"\"\"\n\n pass\n\n\nclass InternalErrorException(MedTaggerException):\n \"\"\"Exception designed to use to indicate internal errors (like DB/Storage error).\"\"\"\n\n pass\n", "path": "backend/medtagger/exceptions.py"}], "after_files": [{"content": "\"\"\"Module responsible for reading data from application configuration.\"\"\"\nimport os\nfrom typing import Any\n\n\nclass AppConfiguration:\n \"\"\"Class that represents application configuration.\"\"\"\n\n @staticmethod\n def get(namespace: str, key: str, fallback: Any = None) -> Any:\n \"\"\"Return value of a given configuration entry.\n\n :param namespace: name of a namespace for given entry\n :param key: key for which it should return value from given namespace\n :param fallback: default value returned if key was not found\n :return: value for given entry\n \"\"\"\n variable_name = 'MEDTAGGER__' + namespace.upper() + '_' + key.upper()\n return os.environ.get(variable_name, fallback)\n\n @staticmethod\n def getint(namespace: str, key: str, fallback: int = 0) -> int:\n \"\"\"Return integer value for given key in namespace.\"\"\"\n return int(AppConfiguration.get(namespace, key, fallback))\n\n @staticmethod\n def getboolean(namespace: str, key: str, fallback: bool = False) -> bool:\n \"\"\"Return boolean value for given key in namespace.\"\"\"\n return bool(AppConfiguration.getint(namespace, key, fallback))\n", "path": "backend/medtagger/config.py"}, {"content": "\"\"\"Exceptions used across whole API.\"\"\"\nfrom medtagger.exceptions import MedTaggerException\n\n\nclass BaseHTTPException(MedTaggerException):\n \"\"\"Base class for all HTTP Exceptions.\"\"\"\n\n pass # pylint: disable=unnecessary-pass\n\n\nclass 
UnauthorizedException(BaseHTTPException):\n \"\"\"Exception designed to use once there was an authorization error during business logic processing.\"\"\"\n\n pass # pylint: disable=unnecessary-pass\n\n\nclass NotFoundException(BaseHTTPException):\n \"\"\"Exception designed to use while the object that user was looking for could not be found.\"\"\"\n\n pass # pylint: disable=unnecessary-pass\n\n\nclass InvalidArgumentsException(BaseHTTPException):\n \"\"\"Exception designed to use with invalid arguments (400 status code).\"\"\"\n\n pass # pylint: disable=unnecessary-pass\n\n\nclass AccessForbiddenException(BaseHTTPException):\n \"\"\"Exception designed to use while the user does not have a privilege to perform action.\"\"\"\n\n pass # pylint: disable=unnecessary-pass\n", "path": "backend/medtagger/api/exceptions.py"}, {"content": "\"\"\"All available Exceptions for whole project.\"\"\"\n\n\nclass MedTaggerException(Exception):\n \"\"\"Base class for all HTTP Exceptions.\"\"\"\n\n pass # pylint: disable=unnecessary-pass\n\n\nclass UnsupportedActionException(MedTaggerException):\n \"\"\"Exception for unsupported Action.\"\"\"\n\n pass # pylint: disable=unnecessary-pass\n\n\nclass InvalidResponseException(MedTaggerException):\n \"\"\"Exception for invalid Response.\"\"\"\n\n pass # pylint: disable=unnecessary-pass\n\n\nclass InternalErrorException(MedTaggerException):\n \"\"\"Exception designed to use to indicate internal errors (like DB/Storage error).\"\"\"\n\n pass # pylint: disable=unnecessary-pass\n", "path": "backend/medtagger/exceptions.py"}]} | 1,053 | 602 |
gh_patches_debug_61378 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-1288 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
inplace operation in pairwise_cosine_similarity
## 🐛 Bug
Hello!
The x, y values are modified in place in the `pairwise_cosine_similarity` function.
This is not documented and may cause bugs that are difficult to find.
Thank you.
### To Reproduce
```python
import torch
from torchmetrics.functional import pairwise_cosine_similarity
x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)
y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)
print("Result:", pairwise_cosine_similarity(x, y))
print("X:", x)
print("Y:", y)
"""Out[0]
Result: tensor([[0.5547, 0.8682],
[0.5145, 0.8437],
[0.5300, 0.8533]])
X: tensor([[0.5547, 0.8321],
[0.5145, 0.8575],
[0.5300, 0.8480]])
Y: tensor([[1.0000, 0.0000],
[0.8944, 0.4472]])
"""
```
### Environment
torchmetrics==0.10.0
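
For context, the mutation comes from the augmented assignments (`x /= ...`, `y /= ...`) inside `_pairwise_cosine_similarity_update`; a minimal sketch of the non-mutating alternative, using out-of-place division:

```python
import torch
from torch import Tensor


def normalize_rows(x: Tensor) -> Tensor:
    norm = torch.norm(x, p=2, dim=1, keepdim=True)
    return x / norm  # out-of-place: returns a new tensor, the caller's x is untouched


x = torch.tensor([[2.0, 3.0]])
_ = normalize_rows(x)
assert torch.equal(x, torch.tensor([[2.0, 3.0]]))  # x is unchanged
```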
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/torchmetrics/functional/pairwise/cosine.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Optional
15
16 import torch
17 from torch import Tensor
18 from typing_extensions import Literal
19
20 from torchmetrics.functional.pairwise.helpers import _check_input, _reduce_distance_matrix
21 from torchmetrics.utilities.compute import _safe_matmul
22
23
24 def _pairwise_cosine_similarity_update(
25 x: Tensor, y: Optional[Tensor] = None, zero_diagonal: Optional[bool] = None
26 ) -> Tensor:
27 """Calculates the pairwise cosine similarity matrix.
28
29 Args:
30 x: tensor of shape ``[N,d]``
31 y: tensor of shape ``[M,d]``
32 zero_diagonal: determines if the diagonal of the distance matrix should be set to zero
33 """
34 x, y, zero_diagonal = _check_input(x, y, zero_diagonal)
35
36 norm = torch.norm(x, p=2, dim=1)
37 x /= norm.unsqueeze(1)
38 norm = torch.norm(y, p=2, dim=1)
39 y /= norm.unsqueeze(1)
40
41 distance = _safe_matmul(x, y)
42 if zero_diagonal:
43 distance.fill_diagonal_(0)
44 return distance
45
46
47 def pairwise_cosine_similarity(
48 x: Tensor,
49 y: Optional[Tensor] = None,
50 reduction: Literal["mean", "sum", "none", None] = None,
51 zero_diagonal: Optional[bool] = None,
52 ) -> Tensor:
53 r"""Calculates pairwise cosine similarity:
54
55 .. math::
56 s_{cos}(x,y) = \frac{<x,y>}{||x|| \cdot ||y||}
57 = \frac{\sum_{d=1}^D x_d \cdot y_d }{\sqrt{\sum_{d=1}^D x_i^2} \cdot \sqrt{\sum_{d=1}^D y_i^2}}
58
59 If both :math:`x` and :math:`y` are passed in, the calculation will be performed pairwise
60 between the rows of :math:`x` and :math:`y`.
61 If only :math:`x` is passed in, the calculation will be performed between the rows of :math:`x`.
62
63 Args:
64 x: Tensor with shape ``[N, d]``
65 y: Tensor with shape ``[M, d]``, optional
66 reduction: reduction to apply along the last dimension. Choose between `'mean'`, `'sum'`
67 (applied along column dimension) or `'none'`, `None` for no reduction
68 zero_diagonal: if the diagonal of the distance matrix should be set to 0. If only :math:`x` is given
69 this defaults to ``True`` else if :math:`y` is also given it defaults to ``False``
70
71 Returns:
72 A ``[N,N]`` matrix of distances if only ``x`` is given, else a ``[N,M]`` matrix
73
74 Example:
75 >>> import torch
76 >>> from torchmetrics.functional import pairwise_cosine_similarity
77 >>> x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)
78 >>> y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)
79 >>> pairwise_cosine_similarity(x, y)
80 tensor([[0.5547, 0.8682],
81 [0.5145, 0.8437],
82 [0.5300, 0.8533]])
83 >>> pairwise_cosine_similarity(x)
84 tensor([[0.0000, 0.9989, 0.9996],
85 [0.9989, 0.0000, 0.9998],
86 [0.9996, 0.9998, 0.0000]])
87 """
88 distance = _pairwise_cosine_similarity_update(x, y, zero_diagonal)
89 return _reduce_distance_matrix(distance, reduction)
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/torchmetrics/functional/pairwise/cosine.py b/src/torchmetrics/functional/pairwise/cosine.py
--- a/src/torchmetrics/functional/pairwise/cosine.py
+++ b/src/torchmetrics/functional/pairwise/cosine.py
@@ -34,9 +34,9 @@
x, y, zero_diagonal = _check_input(x, y, zero_diagonal)
norm = torch.norm(x, p=2, dim=1)
- x /= norm.unsqueeze(1)
+ x = x / norm.unsqueeze(1)
norm = torch.norm(y, p=2, dim=1)
- y /= norm.unsqueeze(1)
+ y = y / norm.unsqueeze(1)
distance = _safe_matmul(x, y)
if zero_diagonal:
| {"golden_diff": "diff --git a/src/torchmetrics/functional/pairwise/cosine.py b/src/torchmetrics/functional/pairwise/cosine.py\n--- a/src/torchmetrics/functional/pairwise/cosine.py\n+++ b/src/torchmetrics/functional/pairwise/cosine.py\n@@ -34,9 +34,9 @@\n x, y, zero_diagonal = _check_input(x, y, zero_diagonal)\n \n norm = torch.norm(x, p=2, dim=1)\n- x /= norm.unsqueeze(1)\n+ x = x / norm.unsqueeze(1)\n norm = torch.norm(y, p=2, dim=1)\n- y /= norm.unsqueeze(1)\n+ y = y / norm.unsqueeze(1)\n \n distance = _safe_matmul(x, y)\n if zero_diagonal:\n", "issue": "inplace operation in pairwise_cosine_similarity\n## \ud83d\udc1b Bug\r\nHello !\r\nThe x, y values are modified inplace in the `pairwise_cosine_similarity` function. \r\nThis is not documented and may cause bugs that are difficult to find. \r\nThank you.\r\n\r\n### To Reproduce\r\n\r\n```python\r\nimport torch\r\nfrom torchmetrics.functional import pairwise_cosine_similarity\r\nx = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)\r\ny = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)\r\nprint(\"Result:\", pairwise_cosine_similarity(x, y))\r\nprint(\"X:\", x)\r\nprint(\"Y:\", y)\r\n\"\"\"Out[0]\r\nResult: tensor([[0.5547, 0.8682],\r\n [0.5145, 0.8437],\r\n [0.5300, 0.8533]])\r\nX: tensor([[0.5547, 0.8321],\r\n [0.5145, 0.8575],\r\n [0.5300, 0.8480]])\r\nY: tensor([[1.0000, 0.0000],\r\n [0.8944, 0.4472]])\r\n\"\"\"\r\n```\r\n\r\n### Environment\r\ntorchmetrics==0.10.0\r\n\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Optional\n\nimport torch\nfrom torch import Tensor\nfrom typing_extensions import Literal\n\nfrom torchmetrics.functional.pairwise.helpers import _check_input, _reduce_distance_matrix\nfrom torchmetrics.utilities.compute import _safe_matmul\n\n\ndef _pairwise_cosine_similarity_update(\n x: Tensor, y: Optional[Tensor] = None, zero_diagonal: Optional[bool] = None\n) -> Tensor:\n \"\"\"Calculates the pairwise cosine similarity matrix.\n\n Args:\n x: tensor of shape ``[N,d]``\n y: tensor of shape ``[M,d]``\n zero_diagonal: determines if the diagonal of the distance matrix should be set to zero\n \"\"\"\n x, y, zero_diagonal = _check_input(x, y, zero_diagonal)\n\n norm = torch.norm(x, p=2, dim=1)\n x /= norm.unsqueeze(1)\n norm = torch.norm(y, p=2, dim=1)\n y /= norm.unsqueeze(1)\n\n distance = _safe_matmul(x, y)\n if zero_diagonal:\n distance.fill_diagonal_(0)\n return distance\n\n\ndef pairwise_cosine_similarity(\n x: Tensor,\n y: Optional[Tensor] = None,\n reduction: Literal[\"mean\", \"sum\", \"none\", None] = None,\n zero_diagonal: Optional[bool] = None,\n) -> Tensor:\n r\"\"\"Calculates pairwise cosine similarity:\n\n .. 
math::\n s_{cos}(x,y) = \\frac{<x,y>}{||x|| \\cdot ||y||}\n = \\frac{\\sum_{d=1}^D x_d \\cdot y_d }{\\sqrt{\\sum_{d=1}^D x_i^2} \\cdot \\sqrt{\\sum_{d=1}^D y_i^2}}\n\n If both :math:`x` and :math:`y` are passed in, the calculation will be performed pairwise\n between the rows of :math:`x` and :math:`y`.\n If only :math:`x` is passed in, the calculation will be performed between the rows of :math:`x`.\n\n Args:\n x: Tensor with shape ``[N, d]``\n y: Tensor with shape ``[M, d]``, optional\n reduction: reduction to apply along the last dimension. Choose between `'mean'`, `'sum'`\n (applied along column dimension) or `'none'`, `None` for no reduction\n zero_diagonal: if the diagonal of the distance matrix should be set to 0. If only :math:`x` is given\n this defaults to ``True`` else if :math:`y` is also given it defaults to ``False``\n\n Returns:\n A ``[N,N]`` matrix of distances if only ``x`` is given, else a ``[N,M]`` matrix\n\n Example:\n >>> import torch\n >>> from torchmetrics.functional import pairwise_cosine_similarity\n >>> x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)\n >>> y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)\n >>> pairwise_cosine_similarity(x, y)\n tensor([[0.5547, 0.8682],\n [0.5145, 0.8437],\n [0.5300, 0.8533]])\n >>> pairwise_cosine_similarity(x)\n tensor([[0.0000, 0.9989, 0.9996],\n [0.9989, 0.0000, 0.9998],\n [0.9996, 0.9998, 0.0000]])\n \"\"\"\n distance = _pairwise_cosine_similarity_update(x, y, zero_diagonal)\n return _reduce_distance_matrix(distance, reduction)\n", "path": "src/torchmetrics/functional/pairwise/cosine.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Optional\n\nimport torch\nfrom torch import Tensor\nfrom typing_extensions import Literal\n\nfrom torchmetrics.functional.pairwise.helpers import _check_input, _reduce_distance_matrix\nfrom torchmetrics.utilities.compute import _safe_matmul\n\n\ndef _pairwise_cosine_similarity_update(\n x: Tensor, y: Optional[Tensor] = None, zero_diagonal: Optional[bool] = None\n) -> Tensor:\n \"\"\"Calculates the pairwise cosine similarity matrix.\n\n Args:\n x: tensor of shape ``[N,d]``\n y: tensor of shape ``[M,d]``\n zero_diagonal: determines if the diagonal of the distance matrix should be set to zero\n \"\"\"\n x, y, zero_diagonal = _check_input(x, y, zero_diagonal)\n\n norm = torch.norm(x, p=2, dim=1)\n x = x / norm.unsqueeze(1)\n norm = torch.norm(y, p=2, dim=1)\n y = y / norm.unsqueeze(1)\n\n distance = _safe_matmul(x, y)\n if zero_diagonal:\n distance.fill_diagonal_(0)\n return distance\n\n\ndef pairwise_cosine_similarity(\n x: Tensor,\n y: Optional[Tensor] = None,\n reduction: Literal[\"mean\", \"sum\", \"none\", None] = None,\n zero_diagonal: Optional[bool] = None,\n) -> Tensor:\n r\"\"\"Calculates pairwise cosine similarity:\n\n .. 
math::\n s_{cos}(x,y) = \\frac{<x,y>}{||x|| \\cdot ||y||}\n = \\frac{\\sum_{d=1}^D x_d \\cdot y_d }{\\sqrt{\\sum_{d=1}^D x_i^2} \\cdot \\sqrt{\\sum_{d=1}^D y_i^2}}\n\n If both :math:`x` and :math:`y` are passed in, the calculation will be performed pairwise\n between the rows of :math:`x` and :math:`y`.\n If only :math:`x` is passed in, the calculation will be performed between the rows of :math:`x`.\n\n Args:\n x: Tensor with shape ``[N, d]``\n y: Tensor with shape ``[M, d]``, optional\n reduction: reduction to apply along the last dimension. Choose between `'mean'`, `'sum'`\n (applied along column dimension) or `'none'`, `None` for no reduction\n zero_diagonal: if the diagonal of the distance matrix should be set to 0. If only :math:`x` is given\n this defaults to ``True`` else if :math:`y` is also given it defaults to ``False``\n\n Returns:\n A ``[N,N]`` matrix of distances if only ``x`` is given, else a ``[N,M]`` matrix\n\n Example:\n >>> import torch\n >>> from torchmetrics.functional import pairwise_cosine_similarity\n >>> x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)\n >>> y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)\n >>> pairwise_cosine_similarity(x, y)\n tensor([[0.5547, 0.8682],\n [0.5145, 0.8437],\n [0.5300, 0.8533]])\n >>> pairwise_cosine_similarity(x)\n tensor([[0.0000, 0.9989, 0.9996],\n [0.9989, 0.0000, 0.9998],\n [0.9996, 0.9998, 0.0000]])\n \"\"\"\n distance = _pairwise_cosine_similarity_update(x, y, zero_diagonal)\n return _reduce_distance_matrix(distance, reduction)\n", "path": "src/torchmetrics/functional/pairwise/cosine.py"}]} | 1,760 | 185 |
gh_patches_debug_1038 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-341 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Individually run API tests don't build tables database
## Description
Running an individual test in `mathesar` that doesn't use the `engine` or `test_db` fixture will not have the tables database built for the test. As a result, many tests will error when trying to access the tables database.
## Expected behavior
The tables database should always be built.
## To Reproduce
Run any test in `mathesar` that doesn't use `engine` or `test_db`. Ex:
```
docker exec mathesar_web_1 pytest mathesar/tests/views/api/test_schema_api.py::test_schema_update
```
## Additional context
Introduced due to the changes in #329, since `pytest-django` no longer creates the tables db for us.
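
One way to guarantee the database fixture runs for every test, whether or not a test requests it, is pytest's `autouse` flag. A sketch (the `print` calls stand in for the real DDL):

```python
import pytest

TEST_DB = "mathesar_db_test"


@pytest.fixture(scope="session", autouse=True)
def test_db():
    # autouse=True: pytest activates this fixture for every test in the
    # session, so the tables database is built even when a test requests
    # neither `test_db` nor `engine`.
    print(f"CREATE DATABASE {TEST_DB}")  # stands in for the real setup
    yield TEST_DB
    print(f"DROP DATABASE {TEST_DB}")    # stands in for the real teardown
```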
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conftest.py`
Content:
```
1 """
2 This file should provide utilities for setting up test DBs and the like. It's
3 intended to be the containment zone for anything specific about the testing
4 environment (e.g., the login info for the Postgres instance for testing)
5 """
6 import pytest
7 from sqlalchemy import create_engine, text
8 from config.settings import DATABASES
9
10 TEST_DB = "mathesar_db_test"
11
12
13 @pytest.fixture(scope="session")
14 def test_db_name():
15 return TEST_DB
16
17
18 @pytest.fixture(scope="session")
19 def test_db():
20 superuser_engine = _get_superuser_engine()
21 with superuser_engine.connect() as conn:
22 conn.execution_options(isolation_level="AUTOCOMMIT")
23 conn.execute(text(f"DROP DATABASE IF EXISTS {TEST_DB} WITH (FORCE)"))
24 conn.execute(text(f"CREATE DATABASE {TEST_DB}"))
25 yield TEST_DB
26 with superuser_engine.connect() as conn:
27 conn.execution_options(isolation_level="AUTOCOMMIT")
28 conn.execute(text(f"DROP DATABASE {TEST_DB} WITH (FORCE)"))
29
30
31 @pytest.fixture(scope="session")
32 def engine(test_db):
33 return create_engine(
34 _get_connection_string(
35 DATABASES["default"]["USER"],
36 DATABASES["default"]["PASSWORD"],
37 DATABASES["default"]["HOST"],
38 test_db,
39 ),
40 future=True,
41 )
42
43
44 def _get_superuser_engine():
45 return create_engine(
46 _get_connection_string(
47 username=DATABASES["default"]["USER"],
48 password=DATABASES["default"]["PASSWORD"],
49 hostname=DATABASES["default"]["HOST"],
50 database=DATABASES["default"]["NAME"],
51 ),
52 future=True,
53 )
54
55
56 def _get_connection_string(username, password, hostname, database):
57 return f"postgresql://{username}:{password}@{hostname}/{database}"
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -15,7 +15,7 @@
return TEST_DB
[email protected](scope="session")
[email protected](scope="session", autouse=True)
def test_db():
superuser_engine = _get_superuser_engine()
with superuser_engine.connect() as conn:
| {"golden_diff": "diff --git a/conftest.py b/conftest.py\n--- a/conftest.py\n+++ b/conftest.py\n@@ -15,7 +15,7 @@\n return TEST_DB\n \n \[email protected](scope=\"session\")\[email protected](scope=\"session\", autouse=True)\n def test_db():\n superuser_engine = _get_superuser_engine()\n with superuser_engine.connect() as conn:\n", "issue": "Individually run API tests don't build tables database\n## Description\r\nRunning a individual test in `mathesar` that doesn't use the `engine` or `test_db` fixture will not have the tables databases built for the test. As a result, many will error when trying to access the tables database.\r\n\r\n## Expected behavior\r\nThe tables database should always be built.\r\n\r\n## To Reproduce\r\nRun any test in `mathesar` that doesn't use `engine` or `test_db`. Ex:\r\n```\r\ndocker exec mathesar_web_1 pytest mathesar/tests/views/api/test_schema_api.py::test_schema_update\r\n```\r\n\r\n## Additional context\r\nIntroduced due to the changes in #329, since `pytest-django` no longer creates the tables db for us.\r\n\n", "before_files": [{"content": "\"\"\"\nThis file should provide utilities for setting up test DBs and the like. It's\nintended to be the containment zone for anything specific about the testing\nenvironment (e.g., the login info for the Postgres instance for testing)\n\"\"\"\nimport pytest\nfrom sqlalchemy import create_engine, text\nfrom config.settings import DATABASES\n\nTEST_DB = \"mathesar_db_test\"\n\n\[email protected](scope=\"session\")\ndef test_db_name():\n return TEST_DB\n\n\[email protected](scope=\"session\")\ndef test_db():\n superuser_engine = _get_superuser_engine()\n with superuser_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\n conn.execute(text(f\"DROP DATABASE IF EXISTS {TEST_DB} WITH (FORCE)\"))\n conn.execute(text(f\"CREATE DATABASE {TEST_DB}\"))\n yield TEST_DB\n with superuser_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\n conn.execute(text(f\"DROP DATABASE {TEST_DB} WITH (FORCE)\"))\n\n\[email protected](scope=\"session\")\ndef engine(test_db):\n return create_engine(\n _get_connection_string(\n DATABASES[\"default\"][\"USER\"],\n DATABASES[\"default\"][\"PASSWORD\"],\n DATABASES[\"default\"][\"HOST\"],\n test_db,\n ),\n future=True,\n )\n\n\ndef _get_superuser_engine():\n return create_engine(\n _get_connection_string(\n username=DATABASES[\"default\"][\"USER\"],\n password=DATABASES[\"default\"][\"PASSWORD\"],\n hostname=DATABASES[\"default\"][\"HOST\"],\n database=DATABASES[\"default\"][\"NAME\"],\n ),\n future=True,\n )\n\n\ndef _get_connection_string(username, password, hostname, database):\n return f\"postgresql://{username}:{password}@{hostname}/{database}\"\n", "path": "conftest.py"}], "after_files": [{"content": "\"\"\"\nThis file should provide utilities for setting up test DBs and the like. 
It's\nintended to be the containment zone for anything specific about the testing\nenvironment (e.g., the login info for the Postgres instance for testing)\n\"\"\"\nimport pytest\nfrom sqlalchemy import create_engine, text\nfrom config.settings import DATABASES\n\nTEST_DB = \"mathesar_db_test\"\n\n\[email protected](scope=\"session\")\ndef test_db_name():\n return TEST_DB\n\n\[email protected](scope=\"session\", autouse=True)\ndef test_db():\n superuser_engine = _get_superuser_engine()\n with superuser_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\n conn.execute(text(f\"DROP DATABASE IF EXISTS {TEST_DB} WITH (FORCE)\"))\n conn.execute(text(f\"CREATE DATABASE {TEST_DB}\"))\n yield TEST_DB\n with superuser_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\n conn.execute(text(f\"DROP DATABASE {TEST_DB} WITH (FORCE)\"))\n\n\[email protected](scope=\"session\")\ndef engine(test_db):\n return create_engine(\n _get_connection_string(\n DATABASES[\"default\"][\"USER\"],\n DATABASES[\"default\"][\"PASSWORD\"],\n DATABASES[\"default\"][\"HOST\"],\n test_db,\n ),\n future=True,\n )\n\n\ndef _get_superuser_engine():\n return create_engine(\n _get_connection_string(\n username=DATABASES[\"default\"][\"USER\"],\n password=DATABASES[\"default\"][\"PASSWORD\"],\n hostname=DATABASES[\"default\"][\"HOST\"],\n database=DATABASES[\"default\"][\"NAME\"],\n ),\n future=True,\n )\n\n\ndef _get_connection_string(username, password, hostname, database):\n return f\"postgresql://{username}:{password}@{hostname}/{database}\"\n", "path": "conftest.py"}]} | 904 | 90 |
gh_patches_debug_23746 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-3114 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tcp_message script not working
Hi,
I tried to execute the TCP message replace script from the docs, but it does not seem to be working. I don't know if this is an issue with the doc script or with mitmproxy.
The script was unchanged.
##### Steps to reproduce the problem:
1. mitmdump --mode transparent --tcp-host ".*" -k -s examples/complex/tcp_message.py
Loading script: examples/tcp_message.py
Proxy server listening at http://*:8080
192.168.1.241:37604: clientconnect
::ffff:192.168.1.241:37604: Certificate verification error for None: hostname 'no-hostname' doesn't match either of '*.local.org', 'local.org'
::ffff:192.168.1.241:37604: Ignoring server verification error, continuing with connection
Addon error: Traceback (most recent call last):
File "examples/tcp_message.py", line 16, in tcp_message
modified_msg = tcp_msg.message.replace("foo", "bar")
AttributeError: 'TCPFlow' object has no attribute 'message'
192.168.1.241:37604 -> tcp -> 10.0.0.2:5443
Addon error: Traceback (most recent call last):
File "examples/tcp_message.py", line 16, in tcp_message
modified_msg = tcp_msg.message.replace("foo", "bar")
AttributeError: 'TCPFlow' object has no attribute 'message'
192.168.1.241:37604 <- tcp <- 10.0.0.2:5443
##### System information
<!-- Paste the output of "mitmproxy --version" here. -->
mitmdump --version
Mitmproxy: 3.0.4
Python: 3.6.0
OpenSSL: OpenSSL 1.1.0h 27 Mar 2018
Platform: Linux-3.19.0-65-generic-x86_64-with-debian-jessie-sid
<!-- Please use the mitmproxy forums (https://discourse.mitmproxy.org/) for support/how-to questions. Thanks! :) -->
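
For reference, in mitmproxy 3.x the `tcp_message` hook receives a `TCPFlow`, and the payload bytes live on `flow.messages` rather than on the flow itself. A sketch of an adjusted hook, consistent with the fix recorded below:

```python
from mitmproxy import ctx, tcp


def tcp_message(flow: tcp.TCPFlow):
    message = flow.messages[-1]  # the TCP segment that triggered this hook
    message.content = message.content.replace(b"foo", b"bar")  # bytes, not str
    sender = "client" if message.from_client else "server"
    ctx.log.info("tcp_message from %s: %d bytes" % (sender, len(message.content)))
```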
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/complex/tcp_message.py`
Content:
```
1 """
2 tcp_message Inline Script Hook API Demonstration
3 ------------------------------------------------
4
5 * modifies packets containing "foo" to "bar"
6 * prints various details for each packet.
7
8 example cmdline invocation:
9 mitmdump -T --host --tcp ".*" -q -s examples/tcp_message.py
10 """
11 from mitmproxy.utils import strutils
12 from mitmproxy import ctx
13
14
15 def tcp_message(tcp_msg):
16 modified_msg = tcp_msg.message.replace("foo", "bar")
17
18 is_modified = False if modified_msg == tcp_msg.message else True
19 tcp_msg.message = modified_msg
20
21 ctx.log.info(
22 "[tcp_message{}] from {} {} to {} {}:\r\n{}".format(
23 " (modified)" if is_modified else "",
24 "client" if tcp_msg.sender == tcp_msg.client_conn else "server",
25 tcp_msg.sender.address,
26 "server" if tcp_msg.receiver == tcp_msg.server_conn else "client",
27 tcp_msg.receiver.address, strutils.bytes_to_escaped_str(tcp_msg.message))
28 )
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/complex/tcp_message.py b/examples/complex/tcp_message.py
--- a/examples/complex/tcp_message.py
+++ b/examples/complex/tcp_message.py
@@ -6,23 +6,22 @@
* prints various details for each packet.
example cmdline invocation:
-mitmdump -T --host --tcp ".*" -q -s examples/tcp_message.py
+mitmdump --rawtcp --tcp-host ".*" -s examples/complex/tcp_message.py
"""
from mitmproxy.utils import strutils
from mitmproxy import ctx
+from mitmproxy import tcp
-def tcp_message(tcp_msg):
- modified_msg = tcp_msg.message.replace("foo", "bar")
-
- is_modified = False if modified_msg == tcp_msg.message else True
- tcp_msg.message = modified_msg
+def tcp_message(flow: tcp.TCPFlow):
+ message = flow.messages[-1]
+ old_content = message.content
+ message.content = old_content.replace(b"foo", b"bar")
ctx.log.info(
- "[tcp_message{}] from {} {} to {} {}:\r\n{}".format(
- " (modified)" if is_modified else "",
- "client" if tcp_msg.sender == tcp_msg.client_conn else "server",
- tcp_msg.sender.address,
- "server" if tcp_msg.receiver == tcp_msg.server_conn else "client",
- tcp_msg.receiver.address, strutils.bytes_to_escaped_str(tcp_msg.message))
+ "[tcp_message{}] from {} to {}:\n{}".format(
+ " (modified)" if message.content != old_content else "",
+ "client" if message.from_client else "server",
+ "server" if message.from_client else "client",
+ strutils.bytes_to_escaped_str(message.content))
)
| {"golden_diff": "diff --git a/examples/complex/tcp_message.py b/examples/complex/tcp_message.py\n--- a/examples/complex/tcp_message.py\n+++ b/examples/complex/tcp_message.py\n@@ -6,23 +6,22 @@\n * prints various details for each packet.\n \n example cmdline invocation:\n-mitmdump -T --host --tcp \".*\" -q -s examples/tcp_message.py\n+mitmdump --rawtcp --tcp-host \".*\" -s examples/complex/tcp_message.py\n \"\"\"\n from mitmproxy.utils import strutils\n from mitmproxy import ctx\n+from mitmproxy import tcp\n \n \n-def tcp_message(tcp_msg):\n- modified_msg = tcp_msg.message.replace(\"foo\", \"bar\")\n-\n- is_modified = False if modified_msg == tcp_msg.message else True\n- tcp_msg.message = modified_msg\n+def tcp_message(flow: tcp.TCPFlow):\n+ message = flow.messages[-1]\n+ old_content = message.content\n+ message.content = old_content.replace(b\"foo\", b\"bar\")\n \n ctx.log.info(\n- \"[tcp_message{}] from {} {} to {} {}:\\r\\n{}\".format(\n- \" (modified)\" if is_modified else \"\",\n- \"client\" if tcp_msg.sender == tcp_msg.client_conn else \"server\",\n- tcp_msg.sender.address,\n- \"server\" if tcp_msg.receiver == tcp_msg.server_conn else \"client\",\n- tcp_msg.receiver.address, strutils.bytes_to_escaped_str(tcp_msg.message))\n+ \"[tcp_message{}] from {} to {}:\\n{}\".format(\n+ \" (modified)\" if message.content != old_content else \"\",\n+ \"client\" if message.from_client else \"server\",\n+ \"server\" if message.from_client else \"client\",\n+ strutils.bytes_to_escaped_str(message.content))\n )\n", "issue": "tcp_message script not working\nHi,\r\n\r\nI tried to execute the TCP message replace script from the doc but it seems is not working. I don't know if this is a issue with the doc script or with mitmproxy.\r\n\r\nThe script was unchanged.\r\n\r\n##### Steps to reproduce the problem:\r\n\r\n1. mitmdump --mode transparent --tcp-host \".*\" -k -s examples/complex/tcp_message.py\r\n\r\nLoading script: examples/tcp_message.py\r\nProxy server listening at http://*:8080\r\n192.168.1.241:37604: clientconnect\r\n::ffff:192.168.1.241:37604: Certificate verification error for None: hostname 'no-hostname' doesn't match either of '*.local.org', 'local.org'\r\n::ffff:192.168.1.241:37604: Ignoring server verification error, continuing with connection\r\nAddon error: Traceback (most recent call last):\r\n File \"examples/tcp_message.py\", line 16, in tcp_message\r\n modified_msg = tcp_msg.message.replace(\"foo\", \"bar\")\r\nAttributeError: 'TCPFlow' object has no attribute 'message'\r\n\r\n192.168.1.241:37604 -> tcp -> 10.0.0.2:5443\r\nAddon error: Traceback (most recent call last):\r\n File \"examples/tcp_message.py\", line 16, in tcp_message\r\n modified_msg = tcp_msg.message.replace(\"foo\", \"bar\")\r\nAttributeError: 'TCPFlow' object has no attribute 'message'\r\n\r\n192.168.1.241:37604 <- tcp <- 10.0.0.2:5443\r\n\r\n##### System information\r\n\r\n<!-- Paste the output of \"mitmproxy --version\" here. -->\r\n\r\nmitmdump --version\r\nMitmproxy: 3.0.4 \r\nPython: 3.6.0\r\nOpenSSL: OpenSSL 1.1.0h 27 Mar 2018\r\nPlatform: Linux-3.19.0-65-generic-x86_64-with-debian-jessie-sid\r\n\r\n<!-- Please use the mitmproxy forums (https://discourse.mitmproxy.org/) for support/how-to questions. Thanks! 
:) -->\r\n\n", "before_files": [{"content": "\"\"\"\ntcp_message Inline Script Hook API Demonstration\n------------------------------------------------\n\n* modifies packets containing \"foo\" to \"bar\"\n* prints various details for each packet.\n\nexample cmdline invocation:\nmitmdump -T --host --tcp \".*\" -q -s examples/tcp_message.py\n\"\"\"\nfrom mitmproxy.utils import strutils\nfrom mitmproxy import ctx\n\n\ndef tcp_message(tcp_msg):\n modified_msg = tcp_msg.message.replace(\"foo\", \"bar\")\n\n is_modified = False if modified_msg == tcp_msg.message else True\n tcp_msg.message = modified_msg\n\n ctx.log.info(\n \"[tcp_message{}] from {} {} to {} {}:\\r\\n{}\".format(\n \" (modified)\" if is_modified else \"\",\n \"client\" if tcp_msg.sender == tcp_msg.client_conn else \"server\",\n tcp_msg.sender.address,\n \"server\" if tcp_msg.receiver == tcp_msg.server_conn else \"client\",\n tcp_msg.receiver.address, strutils.bytes_to_escaped_str(tcp_msg.message))\n )\n", "path": "examples/complex/tcp_message.py"}], "after_files": [{"content": "\"\"\"\ntcp_message Inline Script Hook API Demonstration\n------------------------------------------------\n\n* modifies packets containing \"foo\" to \"bar\"\n* prints various details for each packet.\n\nexample cmdline invocation:\nmitmdump --rawtcp --tcp-host \".*\" -s examples/complex/tcp_message.py\n\"\"\"\nfrom mitmproxy.utils import strutils\nfrom mitmproxy import ctx\nfrom mitmproxy import tcp\n\n\ndef tcp_message(flow: tcp.TCPFlow):\n message = flow.messages[-1]\n old_content = message.content\n message.content = old_content.replace(b\"foo\", b\"bar\")\n\n ctx.log.info(\n \"[tcp_message{}] from {} to {}:\\n{}\".format(\n \" (modified)\" if message.content != old_content else \"\",\n \"client\" if message.from_client else \"server\",\n \"server\" if message.from_client else \"client\",\n strutils.bytes_to_escaped_str(message.content))\n )\n", "path": "examples/complex/tcp_message.py"}]} | 1,044 | 387 |
gh_patches_debug_758 | rasdani/github-patches | git_diff | vllm-project__vllm-2337 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[v0.2.7] Release Tracker
**ETA**: Jan 3rd - 4th
## Major changes
TBD
## PRs to be merged before the release
- [x] #2221
- [ ] ~~#2293~~ (deferred)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vllm/__init__.py`
Content:
```
1 """vLLM: a high-throughput and memory-efficient inference engine for LLMs"""
2
3 from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
4 from vllm.engine.async_llm_engine import AsyncLLMEngine
5 from vllm.engine.llm_engine import LLMEngine
6 from vllm.engine.ray_utils import initialize_cluster
7 from vllm.entrypoints.llm import LLM
8 from vllm.outputs import CompletionOutput, RequestOutput
9 from vllm.sampling_params import SamplingParams
10
11 __version__ = "0.2.6"
12
13 __all__ = [
14 "LLM",
15 "SamplingParams",
16 "RequestOutput",
17 "CompletionOutput",
18 "LLMEngine",
19 "EngineArgs",
20 "AsyncLLMEngine",
21 "AsyncEngineArgs",
22 "initialize_cluster",
23 ]
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vllm/__init__.py b/vllm/__init__.py
--- a/vllm/__init__.py
+++ b/vllm/__init__.py
@@ -8,7 +8,7 @@
from vllm.outputs import CompletionOutput, RequestOutput
from vllm.sampling_params import SamplingParams
-__version__ = "0.2.6"
+__version__ = "0.2.7"
__all__ = [
"LLM",
| {"golden_diff": "diff --git a/vllm/__init__.py b/vllm/__init__.py\n--- a/vllm/__init__.py\n+++ b/vllm/__init__.py\n@@ -8,7 +8,7 @@\n from vllm.outputs import CompletionOutput, RequestOutput\n from vllm.sampling_params import SamplingParams\n \n-__version__ = \"0.2.6\"\n+__version__ = \"0.2.7\"\n \n __all__ = [\n \"LLM\",\n", "issue": "[v0.2.7] Release Tracker\n**ETA**: Jan 3rd - 4th\r\n\r\n## Major changes\r\n\r\nTBD\r\n\r\n## PRs to be merged before the release\r\n\r\n- [x] #2221 \r\n- [ ] ~~#2293~~ (deferred)\n", "before_files": [{"content": "\"\"\"vLLM: a high-throughput and memory-efficient inference engine for LLMs\"\"\"\n\nfrom vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs\nfrom vllm.engine.async_llm_engine import AsyncLLMEngine\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.engine.ray_utils import initialize_cluster\nfrom vllm.entrypoints.llm import LLM\nfrom vllm.outputs import CompletionOutput, RequestOutput\nfrom vllm.sampling_params import SamplingParams\n\n__version__ = \"0.2.6\"\n\n__all__ = [\n \"LLM\",\n \"SamplingParams\",\n \"RequestOutput\",\n \"CompletionOutput\",\n \"LLMEngine\",\n \"EngineArgs\",\n \"AsyncLLMEngine\",\n \"AsyncEngineArgs\",\n \"initialize_cluster\",\n]\n", "path": "vllm/__init__.py"}], "after_files": [{"content": "\"\"\"vLLM: a high-throughput and memory-efficient inference engine for LLMs\"\"\"\n\nfrom vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs\nfrom vllm.engine.async_llm_engine import AsyncLLMEngine\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.engine.ray_utils import initialize_cluster\nfrom vllm.entrypoints.llm import LLM\nfrom vllm.outputs import CompletionOutput, RequestOutput\nfrom vllm.sampling_params import SamplingParams\n\n__version__ = \"0.2.7\"\n\n__all__ = [\n \"LLM\",\n \"SamplingParams\",\n \"RequestOutput\",\n \"CompletionOutput\",\n \"LLMEngine\",\n \"EngineArgs\",\n \"AsyncLLMEngine\",\n \"AsyncEngineArgs\",\n \"initialize_cluster\",\n]\n", "path": "vllm/__init__.py"}]} | 541 | 108 |
gh_patches_debug_7314 | rasdani/github-patches | git_diff | chainer__chainer-552 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
where doesn't work with int16 and float64 on CUDA 6.5
Only on CUDA 6.5, this code doesn't work:
```
x = cupy.array([1,2,3], dtype=cupy.int16)
y = cupy.array([1,2,3], dtype=cupy.float64)
c = cupy.array([1,0,1], dtype=cupy.bool_)
cupy.where(c, x, y)
```
Other combinations, such as (int16, float32) and (int32, float64), work correctly.
This may be a bug in CUDA 6.5 that was fixed in 7.0.
Note that the reversed call, `cupy.where(c, y, x)`, does work.
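Until this is fixed, a possible workaround — a sketch assuming the failure is limited to the mixed (int16, float64) dispatch — is to upcast the narrower operand before calling `where`:
```
import cupy

x = cupy.array([1, 2, 3], dtype=cupy.int16)
y = cupy.array([1, 2, 3], dtype=cupy.float64)
c = cupy.array([1, 0, 1], dtype=cupy.bool_)

# Upcasting x avoids the mixed int16/float64 dispatch entirely.
result = cupy.where(c, x.astype(cupy.float64), y)
```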
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/sorting/search.py`
Content:
```
1 from cupy import elementwise
2 from cupy import reduction
3
4
5 def argmax(a, axis=None, dtype=None, out=None, keepdims=False):
6 """Returns the indices of the maximum along an axis.
7
8 Args:
9 a (cupy.ndarray): Array to take argmax.
10 axis (int): Along which axis to find the maximum. ``a`` is flattened by
11 default.
12 dtype: Data type specifier.
13 out (cupy.ndarray): Output array.
14 keepdims (bool): If True, the axis ``axis`` is preserved as an axis of
15 length one.
16
17 Returns:
18 cupy.ndarray: The indices of the maximum of ``a`` along an axis.
19
20 .. seealso:: :func:`numpy.argmax`
21
22 """
23 return reduction.argmax(a, axis=axis, dtype=dtype, out=out,
24 keepdims=keepdims)
25
26
27 # TODO(okuta): Implement nanargmax
28
29
30 def argmin(a, axis=None, dtype=None, out=None, keepdims=False):
31 """Returns the indices of the minimum along an axis.
32
33 Args:
34 a (cupy.ndarray): Array to take argmin.
35 axis (int): Along which axis to find the minimum. ``a`` is flattened by
36 default.
37 dtype: Data type specifier.
38 out (cupy.ndarray): Output array.
39 keepdims (bool): If True, the axis ``axis`` is preserved as an axis of
40 length one.
41
42 Returns:
43 cupy.ndarray: The indices of the minimum of ``a`` along an axis.
44
45 .. seealso:: :func:`numpy.argmin`
46
47 """
48 return reduction.argmin(a, axis=axis, dtype=dtype, out=out,
49 keepdims=keepdims)
50
51
52 # TODO(okuta): Implement nanargmin
53
54
55 # TODO(okuta): Implement argwhere
56
57
58 # TODO(okuta): Implement nonzero
59
60
61 # TODO(okuta): Implement flatnonzero
62
63
64 def where(condition, x=None, y=None):
65 """Return elements, either from x or y, depending on condition.
66
67 .. note::
68
69 Currently Cupy doesn't support ``where(condition)``, that Numpy
70 supports.
71
72 Args:
73 condition (cupy.ndarray): When True, take x, otherwise take y.
74 x (cupy.ndarray): Values from which to choose on ``True``.
75 y (cupy.ndarray): Values from which to choose on ``False``.
76
77 Returns:
78 cupy.ndarray: Each element of output contains elements of ``x`` when
79 ``condition`` is ``True``, otherwise elements of ``y``.
80
81 """
82
83 missing = (x is None, y is None).count(True)
84
85 if missing == 1:
86 raise ValueError("Must provide both 'x' and 'y' or neither.")
87 if missing == 2:
88 # TODO(unno): return nonzero(cond)
89 return NotImplementedError()
90
91 return _where_ufunc(condition.astype('?'), x, y)
92
93 _where_ufunc = elementwise.create_ufunc(
94 'cupy_where',
95 ('???->?', '?bb->b', '?BB->B', '?hh->h', '?HH->H', '?ii->i', '?II->I',
96 '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f', '?dd->d'),
97 'out0 = in0 ? in1 : in2')
98
99
100 # TODO(okuta): Implement searchsorted
101
102
103 # TODO(okuta): Implement extract
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cupy/sorting/search.py b/cupy/sorting/search.py
--- a/cupy/sorting/search.py
+++ b/cupy/sorting/search.py
@@ -93,7 +93,12 @@
_where_ufunc = elementwise.create_ufunc(
'cupy_where',
('???->?', '?bb->b', '?BB->B', '?hh->h', '?HH->H', '?ii->i', '?II->I',
- '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f', '?dd->d'),
+ '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f',
+ # On CUDA 6.5 these combinations don't work correctly (on CUDA >=7.0, it
+ # works).
+ # See issue #551.
+ '?hd->d', '?Hd->d',
+ '?dd->d'),
'out0 = in0 ? in1 : in2')
| {"golden_diff": "diff --git a/cupy/sorting/search.py b/cupy/sorting/search.py\n--- a/cupy/sorting/search.py\n+++ b/cupy/sorting/search.py\n@@ -93,7 +93,12 @@\n _where_ufunc = elementwise.create_ufunc(\n 'cupy_where',\n ('???->?', '?bb->b', '?BB->B', '?hh->h', '?HH->H', '?ii->i', '?II->I',\n- '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f', '?dd->d'),\n+ '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f',\n+ # On CUDA 6.5 these combinations don't work correctly (on CUDA >=7.0, it\n+ # works).\n+ # See issue #551.\n+ '?hd->d', '?Hd->d',\n+ '?dd->d'),\n 'out0 = in0 ? in1 : in2')\n", "issue": "where doesn't work with int16 and float64 on cuda6.5\nOn only cuda 6.5, this code doesn't work:\n\n```\nx = cupy.array([1,2,3], dtype=cupy.int16)\ny = cupy.array([1,2,3], dtype=cupy.float64)\nc = cupy.array([1,0,1], dtype=cupy.bool_)\ncupy.where(c, x, y)\n```\n\nOther combinations such as (int16, float32) and (int32, float64) correctly work.\nMaybe this is a bug on cuda 6.5, and fixed on 7.0.\n\nNote that `cupy.where(c, y, x)` can work.\n\n", "before_files": [{"content": "from cupy import elementwise\nfrom cupy import reduction\n\n\ndef argmax(a, axis=None, dtype=None, out=None, keepdims=False):\n \"\"\"Returns the indices of the maximum along an axis.\n\n Args:\n a (cupy.ndarray): Array to take argmax.\n axis (int): Along which axis to find the maximum. ``a`` is flattened by\n default.\n dtype: Data type specifier.\n out (cupy.ndarray): Output array.\n keepdims (bool): If True, the axis ``axis`` is preserved as an axis of\n length one.\n\n Returns:\n cupy.ndarray: The indices of the maximum of ``a`` along an axis.\n\n .. seealso:: :func:`numpy.argmax`\n\n \"\"\"\n return reduction.argmax(a, axis=axis, dtype=dtype, out=out,\n keepdims=keepdims)\n\n\n# TODO(okuta): Implement nanargmax\n\n\ndef argmin(a, axis=None, dtype=None, out=None, keepdims=False):\n \"\"\"Returns the indices of the minimum along an axis.\n\n Args:\n a (cupy.ndarray): Array to take argmin.\n axis (int): Along which axis to find the minimum. ``a`` is flattened by\n default.\n dtype: Data type specifier.\n out (cupy.ndarray): Output array.\n keepdims (bool): If True, the axis ``axis`` is preserved as an axis of\n length one.\n\n Returns:\n cupy.ndarray: The indices of the minimum of ``a`` along an axis.\n\n .. seealso:: :func:`numpy.argmin`\n\n \"\"\"\n return reduction.argmin(a, axis=axis, dtype=dtype, out=out,\n keepdims=keepdims)\n\n\n# TODO(okuta): Implement nanargmin\n\n\n# TODO(okuta): Implement argwhere\n\n\n# TODO(okuta): Implement nonzero\n\n\n# TODO(okuta): Implement flatnonzero\n\n\ndef where(condition, x=None, y=None):\n \"\"\"Return elements, either from x or y, depending on condition.\n\n .. 
note::\n\n Currently Cupy doesn't support ``where(condition)``, that Numpy\n supports.\n\n Args:\n condition (cupy.ndarray): When True, take x, otherwise take y.\n x (cupy.ndarray): Values from which to choose on ``True``.\n y (cupy.ndarray): Values from which to choose on ``False``.\n\n Returns:\n cupy.ndarray: Each element of output contains elements of ``x`` when\n ``condition`` is ``True``, otherwise elements of ``y``.\n\n \"\"\"\n\n missing = (x is None, y is None).count(True)\n\n if missing == 1:\n raise ValueError(\"Must provide both 'x' and 'y' or neither.\")\n if missing == 2:\n # TODO(unno): return nonzero(cond)\n return NotImplementedError()\n\n return _where_ufunc(condition.astype('?'), x, y)\n\n_where_ufunc = elementwise.create_ufunc(\n 'cupy_where',\n ('???->?', '?bb->b', '?BB->B', '?hh->h', '?HH->H', '?ii->i', '?II->I',\n '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f', '?dd->d'),\n 'out0 = in0 ? in1 : in2')\n\n\n# TODO(okuta): Implement searchsorted\n\n\n# TODO(okuta): Implement extract\n", "path": "cupy/sorting/search.py"}], "after_files": [{"content": "from cupy import elementwise\nfrom cupy import reduction\n\n\ndef argmax(a, axis=None, dtype=None, out=None, keepdims=False):\n \"\"\"Returns the indices of the maximum along an axis.\n\n Args:\n a (cupy.ndarray): Array to take argmax.\n axis (int): Along which axis to find the maximum. ``a`` is flattened by\n default.\n dtype: Data type specifier.\n out (cupy.ndarray): Output array.\n keepdims (bool): If True, the axis ``axis`` is preserved as an axis of\n length one.\n\n Returns:\n cupy.ndarray: The indices of the maximum of ``a`` along an axis.\n\n .. seealso:: :func:`numpy.argmax`\n\n \"\"\"\n return reduction.argmax(a, axis=axis, dtype=dtype, out=out,\n keepdims=keepdims)\n\n\n# TODO(okuta): Implement nanargmax\n\n\ndef argmin(a, axis=None, dtype=None, out=None, keepdims=False):\n \"\"\"Returns the indices of the minimum along an axis.\n\n Args:\n a (cupy.ndarray): Array to take argmin.\n axis (int): Along which axis to find the minimum. ``a`` is flattened by\n default.\n dtype: Data type specifier.\n out (cupy.ndarray): Output array.\n keepdims (bool): If True, the axis ``axis`` is preserved as an axis of\n length one.\n\n Returns:\n cupy.ndarray: The indices of the minimum of ``a`` along an axis.\n\n .. seealso:: :func:`numpy.argmin`\n\n \"\"\"\n return reduction.argmin(a, axis=axis, dtype=dtype, out=out,\n keepdims=keepdims)\n\n\n# TODO(okuta): Implement nanargmin\n\n\n# TODO(okuta): Implement argwhere\n\n\n# TODO(okuta): Implement nonzero\n\n\n# TODO(okuta): Implement flatnonzero\n\n\ndef where(condition, x=None, y=None):\n \"\"\"Return elements, either from x or y, depending on condition.\n\n .. 
note::\n\n Currently Cupy doesn't support ``where(condition)``, that Numpy\n supports.\n\n Args:\n condition (cupy.ndarray): When True, take x, otherwise take y.\n x (cupy.ndarray): Values from which to choose on ``True``.\n y (cupy.ndarray): Values from which to choose on ``False``.\n\n Returns:\n cupy.ndarray: Each element of output contains elements of ``x`` when\n ``condition`` is ``True``, otherwise elements of ``y``.\n\n \"\"\"\n\n missing = (x is None, y is None).count(True)\n\n if missing == 1:\n raise ValueError(\"Must provide both 'x' and 'y' or neither.\")\n if missing == 2:\n # TODO(unno): return nonzero(cond)\n return NotImplementedError()\n\n return _where_ufunc(condition.astype('?'), x, y)\n\n_where_ufunc = elementwise.create_ufunc(\n 'cupy_where',\n ('???->?', '?bb->b', '?BB->B', '?hh->h', '?HH->H', '?ii->i', '?II->I',\n '?ll->l', '?LL->L', '?qq->q', '?QQ->Q', '?ee->e', '?ff->f',\n # On CUDA 6.5 these combinations don't work correctly (on CUDA >=7.0, it\n # works).\n # See issue #551.\n '?hd->d', '?Hd->d',\n '?dd->d'),\n 'out0 = in0 ? in1 : in2')\n\n\n# TODO(okuta): Implement searchsorted\n\n\n# TODO(okuta): Implement extract\n", "path": "cupy/sorting/search.py"}]} | 1,397 | 246 |
gh_patches_debug_43198 | rasdani/github-patches | git_diff | pallets__werkzeug-1284 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[feature request] werkzeug.contrib.profiler.ProfilerMiddleware custom file name formatting
It would be nice to allow users to customize the output file name format when using the ProfilerMiddleware.
Currently, the file name format is `'%(method)s.%(path)s.%(elapsed)06dms.%(time)d'`, which, once multiple invocations have occurred, leaves the profile files ordered by their respective elapsed times. The ability to customize this format would let profiling users make use of the resulting profile files in additional ways.
I propose that `profiler.ProfilerMiddleware` be changed to allow users to customize the profile filename format for their specific needs.
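For illustration, usage under the proposed change might look like the sketch below; the `filename_format` keyword and its behavior are hypothetical here, not an existing werkzeug API:
```
from werkzeug.contrib.profiler import ProfilerMiddleware

def app(environ, start_response):  # any WSGI app
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']

# Hypothetical keyword: leading with the timestamp makes the files sort
# by request time rather than by elapsed time.
app = ProfilerMiddleware(
    app,
    profile_dir='./profiles',
    filename_format='%(time)d.%(method)s.%(path)s.%(elapsed)06dms',
)
```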
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `werkzeug/contrib/profiler.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 werkzeug.contrib.profiler
4 ~~~~~~~~~~~~~~~~~~~~~~~~~
5
6 This module provides a simple WSGI profiler middleware for finding
7 bottlenecks in web application. It uses the :mod:`profile` or
8 :mod:`cProfile` module to do the profiling and writes the stats to the
9 stream provided (defaults to stderr).
10
11 Example usage::
12
13 from werkzeug.contrib.profiler import ProfilerMiddleware
14 app = ProfilerMiddleware(app)
15
16 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
17 :license: BSD, see LICENSE for more details.
18 """
19 import sys
20 import time
21 import os.path
22 try:
23 try:
24 from cProfile import Profile
25 except ImportError:
26 from profile import Profile
27 from pstats import Stats
28 available = True
29 except ImportError:
30 available = False
31
32
33 class MergeStream(object):
34
35 """An object that redirects `write` calls to multiple streams.
36 Use this to log to both `sys.stdout` and a file::
37
38 f = open('profiler.log', 'w')
39 stream = MergeStream(sys.stdout, f)
40 profiler = ProfilerMiddleware(app, stream)
41 """
42
43 def __init__(self, *streams):
44 if not streams:
45 raise TypeError('at least one stream must be given')
46 self.streams = streams
47
48 def write(self, data):
49 for stream in self.streams:
50 stream.write(data)
51
52
53 class ProfilerMiddleware(object):
54
55 """Simple profiler middleware. Wraps a WSGI application and profiles
56 a request. This intentionally buffers the response so that timings are
57 more exact.
58
59 By giving the `profile_dir` argument, pstat.Stats files are saved to that
60 directory, one file per request. Without it, a summary is printed to
61 `stream` instead.
62
63 For the exact meaning of `sort_by` and `restrictions` consult the
64 :mod:`profile` documentation.
65
66 .. versionadded:: 0.9
67 Added support for `restrictions` and `profile_dir`.
68
69 :param app: the WSGI application to profile.
70 :param stream: the stream for the profiled stats. defaults to stderr.
71 :param sort_by: a tuple of columns to sort the result by.
72 :param restrictions: a tuple of profiling strictions, not used if dumping
73 to `profile_dir`.
74 :param profile_dir: directory name to save pstat files
75 """
76
77 def __init__(self, app, stream=None,
78 sort_by=('time', 'calls'), restrictions=(), profile_dir=None):
79 if not available:
80 raise RuntimeError('the profiler is not available because '
81 'profile or pstat is not installed.')
82 self._app = app
83 self._stream = stream or sys.stdout
84 self._sort_by = sort_by
85 self._restrictions = restrictions
86 self._profile_dir = profile_dir
87
88 def __call__(self, environ, start_response):
89 response_body = []
90
91 def catching_start_response(status, headers, exc_info=None):
92 start_response(status, headers, exc_info)
93 return response_body.append
94
95 def runapp():
96 appiter = self._app(environ, catching_start_response)
97 response_body.extend(appiter)
98 if hasattr(appiter, 'close'):
99 appiter.close()
100
101 p = Profile()
102 start = time.time()
103 p.runcall(runapp)
104 body = b''.join(response_body)
105 elapsed = time.time() - start
106
107 if self._profile_dir is not None:
108 prof_filename = os.path.join(self._profile_dir,
109 '%s.%s.%06dms.%d.prof' % (
110 environ['REQUEST_METHOD'],
111 environ.get('PATH_INFO').strip(
112 '/').replace('/', '.') or 'root',
113 elapsed * 1000.0,
114 time.time()
115 ))
116 p.dump_stats(prof_filename)
117
118 else:
119 stats = Stats(p, stream=self._stream)
120 stats.sort_stats(*self._sort_by)
121
122 self._stream.write('-' * 80)
123 self._stream.write('\nPATH: %r\n' % environ.get('PATH_INFO'))
124 stats.print_stats(*self._restrictions)
125 self._stream.write('-' * 80 + '\n\n')
126
127 return [body]
128
129
130 def make_action(app_factory, hostname='localhost', port=5000,
131 threaded=False, processes=1, stream=None,
132 sort_by=('time', 'calls'), restrictions=()):
133 """Return a new callback for :mod:`werkzeug.script` that starts a local
134 server with the profiler enabled.
135
136 ::
137
138 from werkzeug.contrib import profiler
139 action_profile = profiler.make_action(make_app)
140 """
141 def action(hostname=('h', hostname), port=('p', port),
142 threaded=threaded, processes=processes):
143 """Start a new development server."""
144 from werkzeug.serving import run_simple
145 app = ProfilerMiddleware(app_factory(), stream, sort_by, restrictions)
146 run_simple(hostname, port, app, False, None, threaded, processes)
147 return action
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/werkzeug/contrib/profiler.py b/werkzeug/contrib/profiler.py
--- a/werkzeug/contrib/profiler.py
+++ b/werkzeug/contrib/profiler.py
@@ -9,7 +9,6 @@
stream provided (defaults to stderr).
Example usage::
-
from werkzeug.contrib.profiler import ProfilerMiddleware
app = ProfilerMiddleware(app)
@@ -60,22 +59,43 @@
directory, one file per request. Without it, a summary is printed to
`stream` instead.
+ The file name format can be customized by passing
+ ``filename_format``. If it is a string, it will be formatted using
+ :meth:`str.format` with the following fields available:
+
+ - ``{method}`` - the request method; GET, POST, etc
+ - ``{path}`` - the request path or 'root' should one not exist
+ - ``{elapsed}`` - the elapsed time of the request
+ - ``{time}`` - the time of the request
+
+ If it is a callable, it will be called with the WSGI ``environ``
+ dict and should return a filename. Either way, the ``'.prof'``
+ extension will be appended to the name. The default format is
+ ``'{method}.{path}.{elapsed:06d}ms.{time:d}'``.
+
For the exact meaning of `sort_by` and `restrictions` consult the
:mod:`profile` documentation.
.. versionadded:: 0.9
Added support for `restrictions` and `profile_dir`.
+ .. versionadded:: 0.15
+ Added ``profile_file_name_format``.
+
:param app: the WSGI application to profile.
:param stream: the stream for the profiled stats. defaults to stderr.
:param sort_by: a tuple of columns to sort the result by.
- :param restrictions: a tuple of profiling strictions, not used if dumping
+ :param restrictions: a tuple of profiling restrictions, not used if dumping
to `profile_dir`.
:param profile_dir: directory name to save pstat files
+ :param filename_format: format of the filename excluding the extension.
"""
- def __init__(self, app, stream=None,
- sort_by=('time', 'calls'), restrictions=(), profile_dir=None):
+ def __init__(
+ self, app, stream=None,
+ sort_by=('time', 'calls'), restrictions=(), profile_dir=None,
+ filename_format='%(method)s.%(path)s.%(elapsed)06dms.%(time)d'
+ ):
if not available:
raise RuntimeError('the profiler is not available because '
'profile or pstat is not installed.')
@@ -84,6 +104,7 @@
self._sort_by = sort_by
self._restrictions = restrictions
self._profile_dir = profile_dir
+ self._filename_format = filename_format
def __call__(self, environ, start_response):
response_body = []
@@ -105,14 +126,18 @@
elapsed = time.time() - start
if self._profile_dir is not None:
- prof_filename = os.path.join(self._profile_dir,
- '%s.%s.%06dms.%d.prof' % (
- environ['REQUEST_METHOD'],
- environ.get('PATH_INFO').strip(
- '/').replace('/', '.') or 'root',
- elapsed * 1000.0,
- time.time()
- ))
+ if callable(self._filename_format):
+ filename = self._filename_format(environ)
+ else:
+ filename = self._filename_format.format(
+ method=environ['REQUEST_METHOD'],
+ path=(
+ environ.get('PATH_INFO').strip('/').replace('/', '.')
+ or 'root'),
+ elapsed=elapsed * 1000.0,
+ time=time.time(),
+ )
+ prof_filename = os.path.join(self._profile_dir, filename + '.prof')
p.dump_stats(prof_filename)
else:
| {"golden_diff": "diff --git a/werkzeug/contrib/profiler.py b/werkzeug/contrib/profiler.py\n--- a/werkzeug/contrib/profiler.py\n+++ b/werkzeug/contrib/profiler.py\n@@ -9,7 +9,6 @@\n stream provided (defaults to stderr).\n \n Example usage::\n-\n from werkzeug.contrib.profiler import ProfilerMiddleware\n app = ProfilerMiddleware(app)\n \n@@ -60,22 +59,43 @@\n directory, one file per request. Without it, a summary is printed to\n `stream` instead.\n \n+ The file name format can be customized by passing\n+ ``filename_format``. If it is a string, it will be formatted using\n+ :meth:`str.format` with the following fields available:\n+\n+ - ``{method}`` - the request method; GET, POST, etc\n+ - ``{path}`` - the request path or 'root' should one not exist\n+ - ``{elapsed}`` - the elapsed time of the request\n+ - ``{time}`` - the time of the request\n+\n+ If it is a callable, it will be called with the WSGI ``environ``\n+ dict and should return a filename. Either way, the ``'.prof'``\n+ extension will be appended to the name. The default format is\n+ ``'{method}.{path}.{elapsed:06d}ms.{time:d}'``.\n+\n For the exact meaning of `sort_by` and `restrictions` consult the\n :mod:`profile` documentation.\n \n .. versionadded:: 0.9\n Added support for `restrictions` and `profile_dir`.\n \n+ .. versionadded:: 0.15\n+ Added ``profile_file_name_format``.\n+\n :param app: the WSGI application to profile.\n :param stream: the stream for the profiled stats. defaults to stderr.\n :param sort_by: a tuple of columns to sort the result by.\n- :param restrictions: a tuple of profiling strictions, not used if dumping\n+ :param restrictions: a tuple of profiling restrictions, not used if dumping\n to `profile_dir`.\n :param profile_dir: directory name to save pstat files\n+ :param filename_format: format of the filename excluding the extension.\n \"\"\"\n \n- def __init__(self, app, stream=None,\n- sort_by=('time', 'calls'), restrictions=(), profile_dir=None):\n+ def __init__(\n+ self, app, stream=None,\n+ sort_by=('time', 'calls'), restrictions=(), profile_dir=None,\n+ filename_format='%(method)s.%(path)s.%(elapsed)06dms.%(time)d'\n+ ):\n if not available:\n raise RuntimeError('the profiler is not available because '\n 'profile or pstat is not installed.')\n@@ -84,6 +104,7 @@\n self._sort_by = sort_by\n self._restrictions = restrictions\n self._profile_dir = profile_dir\n+ self._filename_format = filename_format\n \n def __call__(self, environ, start_response):\n response_body = []\n@@ -105,14 +126,18 @@\n elapsed = time.time() - start\n \n if self._profile_dir is not None:\n- prof_filename = os.path.join(self._profile_dir,\n- '%s.%s.%06dms.%d.prof' % (\n- environ['REQUEST_METHOD'],\n- environ.get('PATH_INFO').strip(\n- '/').replace('/', '.') or 'root',\n- elapsed * 1000.0,\n- time.time()\n- ))\n+ if callable(self._filename_format):\n+ filename = self._filename_format(environ)\n+ else:\n+ filename = self._filename_format.format(\n+ method=environ['REQUEST_METHOD'],\n+ path=(\n+ environ.get('PATH_INFO').strip('/').replace('/', '.')\n+ or 'root'),\n+ elapsed=elapsed * 1000.0,\n+ time=time.time(),\n+ )\n+ prof_filename = os.path.join(self._profile_dir, filename + '.prof')\n p.dump_stats(prof_filename)\n \n else:\n", "issue": "[feature request] werkzeug.contrib.profiler.ProfilerMiddleware custom file name formatting\nIt would be nice to allow users to customize the output file name format when using the ProfilerMiddleware.\r\n\r\nCurrently, the file name format is `'%(method)s.%(path)s.%(elapsed)06dms.%(time)d'`, which 
when multiple invocations have occurred leads to the profile files being ordered by their respective elapsed times. Having the ability to customize this format allows profiling users to make use of the resultant profile files in additional ways.\r\n\r\nI propose the `profiler.ProfilerMiddleware` is changed to allow users to customize the format of the profile filenames for their specific needs.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n werkzeug.contrib.profiler\n ~~~~~~~~~~~~~~~~~~~~~~~~~\n\n This module provides a simple WSGI profiler middleware for finding\n bottlenecks in web application. It uses the :mod:`profile` or\n :mod:`cProfile` module to do the profiling and writes the stats to the\n stream provided (defaults to stderr).\n\n Example usage::\n\n from werkzeug.contrib.profiler import ProfilerMiddleware\n app = ProfilerMiddleware(app)\n\n :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.\n :license: BSD, see LICENSE for more details.\n\"\"\"\nimport sys\nimport time\nimport os.path\ntry:\n try:\n from cProfile import Profile\n except ImportError:\n from profile import Profile\n from pstats import Stats\n available = True\nexcept ImportError:\n available = False\n\n\nclass MergeStream(object):\n\n \"\"\"An object that redirects `write` calls to multiple streams.\n Use this to log to both `sys.stdout` and a file::\n\n f = open('profiler.log', 'w')\n stream = MergeStream(sys.stdout, f)\n profiler = ProfilerMiddleware(app, stream)\n \"\"\"\n\n def __init__(self, *streams):\n if not streams:\n raise TypeError('at least one stream must be given')\n self.streams = streams\n\n def write(self, data):\n for stream in self.streams:\n stream.write(data)\n\n\nclass ProfilerMiddleware(object):\n\n \"\"\"Simple profiler middleware. Wraps a WSGI application and profiles\n a request. This intentionally buffers the response so that timings are\n more exact.\n\n By giving the `profile_dir` argument, pstat.Stats files are saved to that\n directory, one file per request. Without it, a summary is printed to\n `stream` instead.\n\n For the exact meaning of `sort_by` and `restrictions` consult the\n :mod:`profile` documentation.\n\n .. versionadded:: 0.9\n Added support for `restrictions` and `profile_dir`.\n\n :param app: the WSGI application to profile.\n :param stream: the stream for the profiled stats. 
defaults to stderr.\n :param sort_by: a tuple of columns to sort the result by.\n :param restrictions: a tuple of profiling strictions, not used if dumping\n to `profile_dir`.\n :param profile_dir: directory name to save pstat files\n \"\"\"\n\n def __init__(self, app, stream=None,\n sort_by=('time', 'calls'), restrictions=(), profile_dir=None):\n if not available:\n raise RuntimeError('the profiler is not available because '\n 'profile or pstat is not installed.')\n self._app = app\n self._stream = stream or sys.stdout\n self._sort_by = sort_by\n self._restrictions = restrictions\n self._profile_dir = profile_dir\n\n def __call__(self, environ, start_response):\n response_body = []\n\n def catching_start_response(status, headers, exc_info=None):\n start_response(status, headers, exc_info)\n return response_body.append\n\n def runapp():\n appiter = self._app(environ, catching_start_response)\n response_body.extend(appiter)\n if hasattr(appiter, 'close'):\n appiter.close()\n\n p = Profile()\n start = time.time()\n p.runcall(runapp)\n body = b''.join(response_body)\n elapsed = time.time() - start\n\n if self._profile_dir is not None:\n prof_filename = os.path.join(self._profile_dir,\n '%s.%s.%06dms.%d.prof' % (\n environ['REQUEST_METHOD'],\n environ.get('PATH_INFO').strip(\n '/').replace('/', '.') or 'root',\n elapsed * 1000.0,\n time.time()\n ))\n p.dump_stats(prof_filename)\n\n else:\n stats = Stats(p, stream=self._stream)\n stats.sort_stats(*self._sort_by)\n\n self._stream.write('-' * 80)\n self._stream.write('\\nPATH: %r\\n' % environ.get('PATH_INFO'))\n stats.print_stats(*self._restrictions)\n self._stream.write('-' * 80 + '\\n\\n')\n\n return [body]\n\n\ndef make_action(app_factory, hostname='localhost', port=5000,\n threaded=False, processes=1, stream=None,\n sort_by=('time', 'calls'), restrictions=()):\n \"\"\"Return a new callback for :mod:`werkzeug.script` that starts a local\n server with the profiler enabled.\n\n ::\n\n from werkzeug.contrib import profiler\n action_profile = profiler.make_action(make_app)\n \"\"\"\n def action(hostname=('h', hostname), port=('p', port),\n threaded=threaded, processes=processes):\n \"\"\"Start a new development server.\"\"\"\n from werkzeug.serving import run_simple\n app = ProfilerMiddleware(app_factory(), stream, sort_by, restrictions)\n run_simple(hostname, port, app, False, None, threaded, processes)\n return action\n", "path": "werkzeug/contrib/profiler.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n werkzeug.contrib.profiler\n ~~~~~~~~~~~~~~~~~~~~~~~~~\n\n This module provides a simple WSGI profiler middleware for finding\n bottlenecks in web application. 
It uses the :mod:`profile` or\n :mod:`cProfile` module to do the profiling and writes the stats to the\n stream provided (defaults to stderr).\n\n Example usage::\n from werkzeug.contrib.profiler import ProfilerMiddleware\n app = ProfilerMiddleware(app)\n\n :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.\n :license: BSD, see LICENSE for more details.\n\"\"\"\nimport sys\nimport time\nimport os.path\ntry:\n try:\n from cProfile import Profile\n except ImportError:\n from profile import Profile\n from pstats import Stats\n available = True\nexcept ImportError:\n available = False\n\n\nclass MergeStream(object):\n\n \"\"\"An object that redirects `write` calls to multiple streams.\n Use this to log to both `sys.stdout` and a file::\n\n f = open('profiler.log', 'w')\n stream = MergeStream(sys.stdout, f)\n profiler = ProfilerMiddleware(app, stream)\n \"\"\"\n\n def __init__(self, *streams):\n if not streams:\n raise TypeError('at least one stream must be given')\n self.streams = streams\n\n def write(self, data):\n for stream in self.streams:\n stream.write(data)\n\n\nclass ProfilerMiddleware(object):\n\n \"\"\"Simple profiler middleware. Wraps a WSGI application and profiles\n a request. This intentionally buffers the response so that timings are\n more exact.\n\n By giving the `profile_dir` argument, pstat.Stats files are saved to that\n directory, one file per request. Without it, a summary is printed to\n `stream` instead.\n\n The file name format can be customized by passing\n ``filename_format``. If it is a string, it will be formatted using\n :meth:`str.format` with the following fields available:\n\n - ``{method}`` - the request method; GET, POST, etc\n - ``{path}`` - the request path or 'root' should one not exist\n - ``{elapsed}`` - the elapsed time of the request\n - ``{time}`` - the time of the request\n\n If it is a callable, it will be called with the WSGI ``environ``\n dict and should return a filename. Either way, the ``'.prof'``\n extension will be appended to the name. The default format is\n ``'{method}.{path}.{elapsed:06d}ms.{time:d}'``.\n\n For the exact meaning of `sort_by` and `restrictions` consult the\n :mod:`profile` documentation.\n\n .. versionadded:: 0.9\n Added support for `restrictions` and `profile_dir`.\n\n .. versionadded:: 0.15\n Added ``profile_file_name_format``.\n\n :param app: the WSGI application to profile.\n :param stream: the stream for the profiled stats. 
defaults to stderr.\n :param sort_by: a tuple of columns to sort the result by.\n :param restrictions: a tuple of profiling restrictions, not used if dumping\n to `profile_dir`.\n :param profile_dir: directory name to save pstat files\n :param filename_format: format of the filename excluding the extension.\n \"\"\"\n\n def __init__(\n self, app, stream=None,\n sort_by=('time', 'calls'), restrictions=(), profile_dir=None,\n filename_format='%(method)s.%(path)s.%(elapsed)06dms.%(time)d'\n ):\n if not available:\n raise RuntimeError('the profiler is not available because '\n 'profile or pstat is not installed.')\n self._app = app\n self._stream = stream or sys.stdout\n self._sort_by = sort_by\n self._restrictions = restrictions\n self._profile_dir = profile_dir\n self._filename_format = filename_format\n\n def __call__(self, environ, start_response):\n response_body = []\n\n def catching_start_response(status, headers, exc_info=None):\n start_response(status, headers, exc_info)\n return response_body.append\n\n def runapp():\n appiter = self._app(environ, catching_start_response)\n response_body.extend(appiter)\n if hasattr(appiter, 'close'):\n appiter.close()\n\n p = Profile()\n start = time.time()\n p.runcall(runapp)\n body = b''.join(response_body)\n elapsed = time.time() - start\n\n if self._profile_dir is not None:\n if callable(self._filename_format):\n filename = self._filename_format(environ)\n else:\n filename = self._filename_format.format(\n method=environ['REQUEST_METHOD'],\n path=(\n environ.get('PATH_INFO').strip('/').replace('/', '.')\n or 'root'),\n elapsed=elapsed * 1000.0,\n time=time.time(),\n )\n prof_filename = os.path.join(self._profile_dir, filename + '.prof')\n p.dump_stats(prof_filename)\n\n else:\n stats = Stats(p, stream=self._stream)\n stats.sort_stats(*self._sort_by)\n\n self._stream.write('-' * 80)\n self._stream.write('\\nPATH: %r\\n' % environ.get('PATH_INFO'))\n stats.print_stats(*self._restrictions)\n self._stream.write('-' * 80 + '\\n\\n')\n\n return [body]\n\n\ndef make_action(app_factory, hostname='localhost', port=5000,\n threaded=False, processes=1, stream=None,\n sort_by=('time', 'calls'), restrictions=()):\n \"\"\"Return a new callback for :mod:`werkzeug.script` that starts a local\n server with the profiler enabled.\n\n ::\n\n from werkzeug.contrib import profiler\n action_profile = profiler.make_action(make_app)\n \"\"\"\n def action(hostname=('h', hostname), port=('p', port),\n threaded=threaded, processes=processes):\n \"\"\"Start a new development server.\"\"\"\n from werkzeug.serving import run_simple\n app = ProfilerMiddleware(app_factory(), stream, sort_by, restrictions)\n run_simple(hostname, port, app, False, None, threaded, processes)\n return action\n", "path": "werkzeug/contrib/profiler.py"}]} | 1,864 | 935 |
gh_patches_debug_603 | rasdani/github-patches | git_diff | pex-tool__pex-1761 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.87
On the docket:
+ [ ] A relative --tmpdir foils pex3 lock create. #1758
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.86"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.86"
+__version__ = "2.1.87"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.86\"\n+__version__ = \"2.1.87\"\n", "issue": "Release 2.1.87\nOn the docket:\r\n+ [ ] A relative --tmpdir foils pex3 lock create. #1758\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.86\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.87\"\n", "path": "pex/version.py"}]} | 343 | 96 |
gh_patches_debug_6507 | rasdani/github-patches | git_diff | aws__aws-sam-cli-2007 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect inline help in "sam local generate-event" command
### Describe your idea/feature/enhancement
Using the CLI, I had some problems with the inline help of the command "sam local generate-event". I was trying to pipe the event generated by that command into "sam local invoke" and it failed. The part of the inline help that is incorrect is this:
`{...} After you generate a sample event, you can use it to test your Lambda function locally
$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke <function logical id> {...}`
In the web documentation (https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-generate-event.html), the help is correct:
`After you generate a sample event, you can use it to test your Lambda function locally
sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke -e - <function logical id>`
### Proposal
Replace the incorrect help text with the correct one.
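For reference, the end-to-end pipeline from the web documentation looks like this (bucket, key, and function name are illustrative):
```
sam local generate-event s3 put --bucket my-bucket --key photo.jpg \
  | sam local invoke -e - HelloWorldFunction
```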
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `samcli/commands/local/generate_event/cli.py`
Content:
```
1 """
2 Sets up the cli for generate-event
3 """
4
5 import click
6
7 from samcli.cli.main import pass_context
8 from samcli.commands.local.generate_event.event_generation import GenerateEventCommand
9
10 HELP_TEXT = """
11 You can use this command to generate sample payloads from different event sources
12 such as S3, API Gateway, and SNS. These payloads contain the information that the
13 event sources send to your Lambda functions.\n
14 \b
15 Generate the event that S3 sends to your Lambda function when a new object is uploaded
16 $ sam local generate-event s3 [put/delete]\n
17 \b
18 You can even customize the event by adding parameter flags. To find which flags apply to your command,
19 run:\n
20 $ sam local generate-event s3 [put/delete] --help\n
21 Then you can add in those flags that you wish to customize using\n
22 $ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key>\n
23 \b
24 After you generate a sample event, you can use it to test your Lambda function locally
25 $ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke <function logical id>
26 """
27
28
29 @click.command(name="generate-event", cls=GenerateEventCommand, help=HELP_TEXT)
30 @pass_context
31 def cli(self):
32 """
33 Generate an event for one of the services listed below:
34 """
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/samcli/commands/local/generate_event/cli.py b/samcli/commands/local/generate_event/cli.py
--- a/samcli/commands/local/generate_event/cli.py
+++ b/samcli/commands/local/generate_event/cli.py
@@ -22,7 +22,7 @@
$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key>\n
\b
After you generate a sample event, you can use it to test your Lambda function locally
-$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke <function logical id>
+$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke -e - <function logical id>
"""
| {"golden_diff": "diff --git a/samcli/commands/local/generate_event/cli.py b/samcli/commands/local/generate_event/cli.py\n--- a/samcli/commands/local/generate_event/cli.py\n+++ b/samcli/commands/local/generate_event/cli.py\n@@ -22,7 +22,7 @@\n $ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key>\\n\n \\b\n After you generate a sample event, you can use it to test your Lambda function locally\n-$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke <function logical id>\n+$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke -e - <function logical id>\n \"\"\"\n", "issue": "Incorrect inline help in \"sam local generate-event\" command\n<!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). -->\r\n\r\n### Describe your idea/feature/enhancement\r\n\r\nUsing the CLI I had some problems with the inline help of the command \"sam local generate-event\". I was trying to pipe the event generated by that command with \"sam local invoke\" and it failed. The part of the inline help that it is incorrect is this:\r\n\r\n`{...} After you generate a sample event, you can use it to test your Lambda function locally\r\n $ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke <function logical id> {...}`\r\n\r\nIn the web documentation here (https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-generate-event.html) the help is correct:\r\n\r\n`After you generate a sample event, you can use it to test your Lambda function locally\r\nsam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke -e - <function logical id>`\r\n### Proposal\r\n\r\nReplace the incorrect help by the correct one.\r\n\n", "before_files": [{"content": "\"\"\"\nSets up the cli for generate-event\n\"\"\"\n\nimport click\n\nfrom samcli.cli.main import pass_context\nfrom samcli.commands.local.generate_event.event_generation import GenerateEventCommand\n\nHELP_TEXT = \"\"\"\nYou can use this command to generate sample payloads from different event sources\nsuch as S3, API Gateway, and SNS. These payloads contain the information that the\nevent sources send to your Lambda functions.\\n\n\\b\nGenerate the event that S3 sends to your Lambda function when a new object is uploaded\n$ sam local generate-event s3 [put/delete]\\n\n\\b\nYou can even customize the event by adding parameter flags. 
To find which flags apply to your command,\nrun:\\n\n$ sam local generate-event s3 [put/delete] --help\\n\nThen you can add in those flags that you wish to customize using\\n\n$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key>\\n\n\\b\nAfter you generate a sample event, you can use it to test your Lambda function locally\n$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke <function logical id>\n\"\"\"\n\n\[email protected](name=\"generate-event\", cls=GenerateEventCommand, help=HELP_TEXT)\n@pass_context\ndef cli(self):\n \"\"\"\n Generate an event for one of the services listed below:\n \"\"\"\n", "path": "samcli/commands/local/generate_event/cli.py"}], "after_files": [{"content": "\"\"\"\nSets up the cli for generate-event\n\"\"\"\n\nimport click\n\nfrom samcli.cli.main import pass_context\nfrom samcli.commands.local.generate_event.event_generation import GenerateEventCommand\n\nHELP_TEXT = \"\"\"\nYou can use this command to generate sample payloads from different event sources\nsuch as S3, API Gateway, and SNS. These payloads contain the information that the\nevent sources send to your Lambda functions.\\n\n\\b\nGenerate the event that S3 sends to your Lambda function when a new object is uploaded\n$ sam local generate-event s3 [put/delete]\\n\n\\b\nYou can even customize the event by adding parameter flags. To find which flags apply to your command,\nrun:\\n\n$ sam local generate-event s3 [put/delete] --help\\n\nThen you can add in those flags that you wish to customize using\\n\n$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key>\\n\n\\b\nAfter you generate a sample event, you can use it to test your Lambda function locally\n$ sam local generate-event s3 [put/delete] --bucket <bucket> --key <key> | sam local invoke -e - <function logical id>\n\"\"\"\n\n\[email protected](name=\"generate-event\", cls=GenerateEventCommand, help=HELP_TEXT)\n@pass_context\ndef cli(self):\n \"\"\"\n Generate an event for one of the services listed below:\n \"\"\"\n", "path": "samcli/commands/local/generate_event/cli.py"}]} | 884 | 176 |
gh_patches_debug_6923 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-2831 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't package pyusb, 'Unable to find "libusb-1.0.so.0"'
To reproduce
1. `virtualenv env`
`source env/bin/activate`
`pip install https://github.com/pyinstaller/pyinstaller/archive/develop.zip`
`pip install pyusb`
`pip freeze > requirements.txt`
2. Create a simple Python file which imports pyusb
test.py
```
import usb.core
print 'Success!'
```
3. Run the python file normally
```
$ python test.py
Success!
```
4. Package it with pyinstaller
```
pyinstaller --log-level=DEBUG test.py &> log.txt
```
log.txt
```
14 INFO: PyInstaller: 3.3.dev0+5fd013bfe
14 INFO: Python: 2.7.12+
14 INFO: Platform: Linux-4.8.0-53-generic-x86_64-with-Ubuntu-16.10-yakkety
14 INFO: wrote /home/joel/Code/random/pyinstaller/test.spec
15 DEBUG: Testing for UPX ...
17 INFO: UPX is not available.
17 DEBUG: script: /home/joel/Code/random/pyinstaller/test.py
17 INFO: Extending PYTHONPATH with paths
['/home/joel/Code/random/pyinstaller', '/home/joel/Code/random/pyinstaller']
17 INFO: checking Analysis
17 INFO: Building Analysis because out00-Analysis.toc is non existent
17 INFO: Initializing module dependency graph...
18 INFO: Initializing module graph hooks...
19 DEBUG: Hidden import: codecs
65 DEBUG: Hidden import 'codecs' already found
65 INFO: running Analysis out00-Analysis.toc
65 DEBUG: Analyzing /home/joel/Code/random/pyinstaller/env/bin/python
71 DEBUG: Skipping libdl.so.2 dependency of python
71 DEBUG: Skipping libc.so.6 dependency of python
71 DEBUG: Skipping libutil.so.1 dependency of python
71 DEBUG: Skipping libpthread.so.0 dependency of python
71 DEBUG: Skipping libm.so.6 dependency of python
72 DEBUG: Adding libz.so.1 dependency of python from /lib/x86_64-linux-gnu/libz.so.1
72 DEBUG: Analyzing /lib/x86_64-linux-gnu/libz.so.1
78 DEBUG: Skipping libc.so.6 dependency of libz.so.1
78 INFO: Caching module hooks...
81 INFO: Analyzing /home/joel/Code/random/pyinstaller/test.py
1078 INFO: Loading module hooks...
1079 INFO: Loading module hook "hook-usb.py"...
Unable to find "libusb-1.0.so.0" when adding binary and data files.
```
This causes the packaging process to halt, leaving empty `dist/` and `build/test/` folders.
I dug around and found out that `hook-usb.py` reports the binaries to be `[('libusb-1.0.so.0', '')]`. I think this causes `src_root_path_or_glob` (`utils.py`, lines 450+) to contain just `'libusb-1.0.so.0'`, which produces just `[]` after it gets run through `glob.glob()`, rather than the real path, which I think is `/lib/x86_64-linux-gnu/libusb-1.0.so.0` on my computer.
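A minimal sketch of the suspected failure mode (the absolute path assumes the reporter's Ubuntu layout):
```
import glob

# A bare soname matches nothing unless the file happens to sit in the
# current directory, while the absolute path resolves as expected.
print(glob.glob('libusb-1.0.so.0'))                        # []
print(glob.glob('/lib/x86_64-linux-gnu/libusb-1.0.so.0'))  # the real path, if present
```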
Environment:
```
Python 2.7.12+
Ubuntu 16.10
Virtualenv 15.1.0
```
requirements.txt
```
appdirs==1.4.3
packaging==16.8
PyInstaller==3.3.dev0+5fd013bfe
pyparsing==2.2.0
pyusb==1.0.0
six==1.10.0
```
Also, this seems to be somewhat related to #1682 and commit https://github.com/pyinstaller/pyinstaller/commit/23901eb1ce7e60f1aac2a73e6cb3b02f1a34b9b1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/hook-usb.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2013-2017, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License with exception
5 # for distributing bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #-----------------------------------------------------------------------------
9
10 import ctypes.util
11 import os
12 import usb.core
13 import usb.backend
14
15 from PyInstaller.depend.utils import _resolveCtypesImports
16 from PyInstaller.compat import is_cygwin
17 from PyInstaller.utils.hooks import logger
18
19
20 # Include glob for library lookup in run-time hook.
21 hiddenimports = ['glob']
22
23 # https://github.com/walac/pyusb/blob/master/docs/faq.rst
24 # https://github.com/walac/pyusb/blob/master/docs/tutorial.rst
25
26 binaries=[]
27
28 # first try to use pyusb library locator
29 try:
30 # get the backend symbols before find
31 pyusb_backend_dir = set(dir(usb.backend))
32
33 # perform find, which will load a usb library if found
34 usb.core.find()
35
36 # get the backend symbols which have been added (loaded)
37 backends = set(dir(usb.backend)) - pyusb_backend_dir
38
39 # for each of the loaded backends, see if they have a library
40 binaries = []
41 for usblib in [getattr(usb.backend, be)._lib for be in backends]:
42 if usblib is not None:
43 binaries = [(usblib._name, '')]
44
45 except (ValueError, usb.core.USBError) as exc:
46 logger.warning("%s", exc)
47
48
49 # if nothing found, try to use our custom mechanism
50 if not binaries:
51 # Try to resolve your libusb libraries in the following order:
52 #
53 # libusb-1.0, libusb-0.1, openusb
54 #
55 # NOTE: Mind updating run-time hook when adding further libs.
56 libusb_candidates = (
57 # libusb10
58 'usb-1.0', 'usb', 'libusb-1.0',
59 # libusb01
60 'usb-0.1', 'libusb0',
61 # openusb
62 'openusb',
63 )
64
65 for candidate in libusb_candidates:
66 libname = ctypes.util.find_library(candidate)
67 if libname is not None:
68 break
69
70 if libname is not None:
71 # Use basename here because Python returns full library path
72 # on Mac OSX when using ctypes.util.find_library.
73 bins = [os.path.basename(libname)]
74 binaries = _resolveCtypesImports(bins)
75 elif is_cygwin:
76 bins = ['cygusb-1.0-0.dll', 'cygusb0.dll']
77 binaries = _resolveCtypesImports(bins)[:1] # use only the first one
78 else:
79 binaries = []
80
81 if binaries:
82 # `_resolveCtypesImports` returns a 3-tuple, but `binaries` are only
83 # 2-tuples, so remove the last element:
84 assert len(binaries[0]) == 3
85 binaries = [(binaries[0][1], '')]
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PyInstaller/hooks/hook-usb.py b/PyInstaller/hooks/hook-usb.py
--- a/PyInstaller/hooks/hook-usb.py
+++ b/PyInstaller/hooks/hook-usb.py
@@ -40,7 +40,9 @@
binaries = []
for usblib in [getattr(usb.backend, be)._lib for be in backends]:
if usblib is not None:
- binaries = [(usblib._name, '')]
+ binaries = _resolveCtypesImports([os.path.basename(usblib._name)])
+ assert len(binaries[0]) == 3
+ binaries = [(binaries[0][1], '')]
except (ValueError, usb.core.USBError) as exc:
logger.warning("%s", exc)
| {"golden_diff": "diff --git a/PyInstaller/hooks/hook-usb.py b/PyInstaller/hooks/hook-usb.py\n--- a/PyInstaller/hooks/hook-usb.py\n+++ b/PyInstaller/hooks/hook-usb.py\n@@ -40,7 +40,9 @@\n binaries = []\n for usblib in [getattr(usb.backend, be)._lib for be in backends]:\n if usblib is not None:\n- binaries = [(usblib._name, '')]\n+ binaries = _resolveCtypesImports([os.path.basename(usblib._name)])\n+ assert len(binaries[0]) == 3\n+ binaries = [(binaries[0][1], '')]\n \n except (ValueError, usb.core.USBError) as exc:\n logger.warning(\"%s\", exc)\n", "issue": "Can't package pyusb, 'Unable to find \"libusb-1.0.so.0\"'\nTo reproduce\r\n\r\n1. `virtualenv env`\r\n`source env/bin/activate`\r\n`pip install https://github.com/pyinstaller/pyinstaller/archive/develop.zip`\r\n`pip install pyusb`\r\n`pip freeze > requirements.txt`\r\n2. Create simple python file which imports pyusb\r\ntest.py\r\n```\r\nimport usb.core\r\nprint 'Success!\r\n```\r\n3. Run the python file normally\r\n```\r\n$ python test.py\r\nSuccess!\r\n```\r\n4. Package it with pyinstaller\r\n```\r\npyinstaller --log-level=DEBUG test.py &> log.txt\r\n```\r\nlog.txt\r\n```\r\n14 INFO: PyInstaller: 3.3.dev0+5fd013bfe\r\n14 INFO: Python: 2.7.12+\r\n14 INFO: Platform: Linux-4.8.0-53-generic-x86_64-with-Ubuntu-16.10-yakkety\r\n14 INFO: wrote /home/joel/Code/random/pyinstaller/test.spec\r\n15 DEBUG: Testing for UPX ...\r\n17 INFO: UPX is not available.\r\n17 DEBUG: script: /home/joel/Code/random/pyinstaller/test.py\r\n17 INFO: Extending PYTHONPATH with paths\r\n['/home/joel/Code/random/pyinstaller', '/home/joel/Code/random/pyinstaller']\r\n17 INFO: checking Analysis\r\n17 INFO: Building Analysis because out00-Analysis.toc is non existent\r\n17 INFO: Initializing module dependency graph...\r\n18 INFO: Initializing module graph hooks...\r\n19 DEBUG: Hidden import: codecs\r\n65 DEBUG: Hidden import 'codecs' already found\r\n65 INFO: running Analysis out00-Analysis.toc\r\n65 DEBUG: Analyzing /home/joel/Code/random/pyinstaller/env/bin/python\r\n71 DEBUG: Skipping libdl.so.2 dependency of python\r\n71 DEBUG: Skipping libc.so.6 dependency of python\r\n71 DEBUG: Skipping libutil.so.1 dependency of python\r\n71 DEBUG: Skipping libpthread.so.0 dependency of python\r\n71 DEBUG: Skipping libm.so.6 dependency of python\r\n72 DEBUG: Adding libz.so.1 dependency of python from /lib/x86_64-linux-gnu/libz.so.1\r\n72 DEBUG: Analyzing /lib/x86_64-linux-gnu/libz.so.1\r\n78 DEBUG: Skipping libc.so.6 dependency of libz.so.1\r\n78 INFO: Caching module hooks...\r\n81 INFO: Analyzing /home/joel/Code/random/pyinstaller/test.py\r\n1078 INFO: Loading module hooks...\r\n1079 INFO: Loading module hook \"hook-usb.py\"...\r\nUnable to find \"libusb-1.0.so.0\" when adding binary and data files.\r\n```\r\nThis causes the packaging process to halt with an empty `dist/` and `build/test/` folders.\r\n\r\nI dug around and found out, that `hook-usb.py` reports the binaries to be `[('libusb-1.0.so.0', '')]` and I think this causes that `src_root_path_or_glob`(`utils.py`, lines 450+) to just contain 'libusb1.0.so.0', which produces just `[]`after it get's run through the `glob.glob()` and not the real path, which I think is`/lib/x86_64-linux-gnu/libusb-1.0.so.0` on my computer.\r\n\r\nEnvironment:\r\n```\r\nPython 2.7.12+\r\nUbuntu 16.10\r\nVirtualenv 15.1.0\r\n```\r\nrequirements.txt\r\n```\r\nappdirs==1.4.3\r\npackaging==16.8\r\nPyInstaller==3.3.dev0+5fd013bfe\r\npyparsing==2.2.0\r\npyusb==1.0.0\r\nsix==1.10.0\r\n```\r\n\r\nAlso, this seems to be somewhat related to #1682 and 
commit https://github.com/pyinstaller/pyinstaller/commit/23901eb1ce7e60f1aac2a73e6cb3b02f1a34b9b1\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2013-2017, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\nimport ctypes.util\nimport os\nimport usb.core\nimport usb.backend\n\nfrom PyInstaller.depend.utils import _resolveCtypesImports\nfrom PyInstaller.compat import is_cygwin\nfrom PyInstaller.utils.hooks import logger\n\n\n# Include glob for library lookup in run-time hook.\nhiddenimports = ['glob']\n\n# https://github.com/walac/pyusb/blob/master/docs/faq.rst\n# https://github.com/walac/pyusb/blob/master/docs/tutorial.rst\n\nbinaries=[]\n\n# first try to use pyusb library locator\ntry:\n # get the backend symbols before find\n pyusb_backend_dir = set(dir(usb.backend))\n\n # perform find, which will load a usb library if found\n usb.core.find()\n\n # get the backend symbols which have been added (loaded)\n backends = set(dir(usb.backend)) - pyusb_backend_dir\n\n # for each of the loaded backends, see if they have a library\n binaries = []\n for usblib in [getattr(usb.backend, be)._lib for be in backends]:\n if usblib is not None:\n binaries = [(usblib._name, '')]\n\nexcept (ValueError, usb.core.USBError) as exc:\n logger.warning(\"%s\", exc)\n\n\n# if nothing found, try to use our custom mechanism\nif not binaries:\n # Try to resolve your libusb libraries in the following order:\n #\n # libusb-1.0, libusb-0.1, openusb\n #\n # NOTE: Mind updating run-time hook when adding further libs.\n libusb_candidates = (\n # libusb10\n 'usb-1.0', 'usb', 'libusb-1.0',\n # libusb01\n 'usb-0.1', 'libusb0',\n # openusb\n 'openusb',\n )\n\n for candidate in libusb_candidates:\n libname = ctypes.util.find_library(candidate)\n if libname is not None:\n break\n\n if libname is not None:\n # Use basename here because Python returns full library path\n # on Mac OSX when using ctypes.util.find_library.\n bins = [os.path.basename(libname)]\n binaries = _resolveCtypesImports(bins)\n elif is_cygwin:\n bins = ['cygusb-1.0-0.dll', 'cygusb0.dll']\n binaries = _resolveCtypesImports(bins)[:1] # use only the first one\n else:\n binaries = []\n\n if binaries:\n # `_resolveCtypesImports` returns a 3-tuple, but `binaries` are only\n # 2-tuples, so remove the last element:\n assert len(binaries[0]) == 3\n binaries = [(binaries[0][1], '')]\n", "path": "PyInstaller/hooks/hook-usb.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2013-2017, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\nimport ctypes.util\nimport os\nimport usb.core\nimport usb.backend\n\nfrom PyInstaller.depend.utils import _resolveCtypesImports\nfrom PyInstaller.compat import is_cygwin\nfrom PyInstaller.utils.hooks import logger\n\n\n# Include glob for library lookup in run-time hook.\nhiddenimports = ['glob']\n\n# https://github.com/walac/pyusb/blob/master/docs/faq.rst\n# 
https://github.com/walac/pyusb/blob/master/docs/tutorial.rst\n\nbinaries=[]\n\n# first try to use pyusb library locator\ntry:\n # get the backend symbols before find\n pyusb_backend_dir = set(dir(usb.backend))\n\n # perform find, which will load a usb library if found\n usb.core.find()\n\n # get the backend symbols which have been added (loaded)\n backends = set(dir(usb.backend)) - pyusb_backend_dir\n\n # for each of the loaded backends, see if they have a library\n binaries = []\n for usblib in [getattr(usb.backend, be)._lib for be in backends]:\n if usblib is not None:\n binaries = _resolveCtypesImports([os.path.basename(usblib._name)])\n assert len(binaries[0]) == 3\n binaries = [(binaries[0][1], '')]\n\nexcept (ValueError, usb.core.USBError) as exc:\n logger.warning(\"%s\", exc)\n\n\n# if nothing found, try to use our custom mechanism\nif not binaries:\n # Try to resolve your libusb libraries in the following order:\n #\n # libusb-1.0, libusb-0.1, openusb\n #\n # NOTE: Mind updating run-time hook when adding further libs.\n libusb_candidates = (\n # libusb10\n 'usb-1.0', 'usb', 'libusb-1.0',\n # libusb01\n 'usb-0.1', 'libusb0',\n # openusb\n 'openusb',\n )\n\n for candidate in libusb_candidates:\n libname = ctypes.util.find_library(candidate)\n if libname is not None:\n break\n\n if libname is not None:\n # Use basename here because Python returns full library path\n # on Mac OSX when using ctypes.util.find_library.\n bins = [os.path.basename(libname)]\n binaries = _resolveCtypesImports(bins)\n elif is_cygwin:\n bins = ['cygusb-1.0-0.dll', 'cygusb0.dll']\n binaries = _resolveCtypesImports(bins)[:1] # use only the first one\n else:\n binaries = []\n\n if binaries:\n # `_resolveCtypesImports` returns a 3-tuple, but `binaries` are only\n # 2-tuples, so remove the last element:\n assert len(binaries[0]) == 3\n binaries = [(binaries[0][1], '')]\n", "path": "PyInstaller/hooks/hook-usb.py"}]} | 2,039 | 176 |
gh_patches_debug_29651 | rasdani/github-patches | git_diff | facebookresearch__hydra-2520 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Document hydra.utils.{get_class,get_method}
Users would benefit from documentation for the following:
[`hydra.utils.get_class`](https://github.com/facebookresearch/hydra/blob/1cbe86ebecbeb134a3f2041120d57447a7394314/hydra/utils.py#L21)
[`hydra.utils.get_method`](https://github.com/facebookresearch/hydra/blob/1cbe86ebecbeb134a3f2041120d57447a7394314/hydra/utils.py#L32)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hydra/utils.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
3 import logging.config
4 import os
5 from pathlib import Path
6 from typing import Any, Callable
7
8 import hydra._internal.instantiate._instantiate2
9 import hydra.types
10 from hydra._internal.utils import _locate
11 from hydra.core.hydra_config import HydraConfig
12
13 log = logging.getLogger(__name__)
14
15 # Instantiation related symbols
16 instantiate = hydra._internal.instantiate._instantiate2.instantiate
17 call = instantiate
18 ConvertMode = hydra.types.ConvertMode
19
20
21 def get_class(path: str) -> type:
22 try:
23 cls = _locate(path)
24 if not isinstance(cls, type):
25 raise ValueError(
26 f"Located non-class of type '{type(cls).__name__}'"
27 + f" while loading '{path}'"
28 )
29 return cls
30 except Exception as e:
31 log.error(f"Error initializing class at {path}: {e}")
32 raise e
33
34
35 def get_method(path: str) -> Callable[..., Any]:
36 try:
37 obj = _locate(path)
38 if not callable(obj):
39 raise ValueError(
40 f"Located non-callable of type '{type(obj).__name__}'"
41 + f" while loading '{path}'"
42 )
43 cl: Callable[..., Any] = obj
44 return cl
45 except Exception as e:
46 log.error(f"Error getting callable at {path} : {e}")
47 raise e
48
49
50 # Alias for get_method
51 get_static_method = get_method
52
53
54 def get_original_cwd() -> str:
55 """
56 :return: the original working directory the Hydra application was launched from
57 """
58 if not HydraConfig.initialized():
59 raise ValueError(
60 "get_original_cwd() must only be used after HydraConfig is initialized"
61 )
62 ret = HydraConfig.get().runtime.cwd
63 assert ret is not None and isinstance(ret, str)
64 return ret
65
66
67 def to_absolute_path(path: str) -> str:
68 """
69 converts the specified path to be absolute path.
70 if the input path is relative, it's interpreted as relative to the original working directory
71 if it's absolute, it's returned as is
72 :param path: path to convert
73 :return:
74 """
75 p = Path(path)
76 if not HydraConfig.initialized():
77 base = Path(os.getcwd())
78 else:
79 base = Path(get_original_cwd())
80 if p.is_absolute():
81 ret = p
82 else:
83 ret = base / p
84 return str(ret)
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hydra/utils.py b/hydra/utils.py
--- a/hydra/utils.py
+++ b/hydra/utils.py
@@ -19,6 +19,14 @@
def get_class(path: str) -> type:
+ """
+ Look up a class based on a dotpath.
+ Fails if the path does not point to a class.
+
+ >>> import my_module
+ >>> from hydra.utils import get_class
+ >>> assert get_class("my_module.MyClass") is my_module.MyClass
+ """
try:
cls = _locate(path)
if not isinstance(cls, type):
@@ -28,11 +36,19 @@
)
return cls
except Exception as e:
- log.error(f"Error initializing class at {path}: {e}")
+ log.error(f"Error getting class at {path}: {e}")
raise e
def get_method(path: str) -> Callable[..., Any]:
+ """
+ Look up a callable based on a dotpath.
+ Fails if the path does not point to a callable object.
+
+ >>> import my_module
+ >>> from hydra.utils import get_method
+ >>> assert get_method("my_module.my_function") is my_module.my_function
+ """
try:
obj = _locate(path)
if not callable(obj):
@@ -51,6 +67,22 @@
get_static_method = get_method
+def get_object(path: str) -> Any:
+ """
+ Look up a callable based on a dotpath.
+
+ >>> import my_module
+ >>> from hydra.utils import get_object
+ >>> assert get_object("my_module.my_object") is my_module.my_object
+ """
+ try:
+ obj = _locate(path)
+ return obj
+ except Exception as e:
+ log.error(f"Error getting object at {path} : {e}")
+ raise e
+
+
def get_original_cwd() -> str:
"""
:return: the original working directory the Hydra application was launched from
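For orientation, a minimal sketch of how the three documented helpers behave once this patch lands; `my_module` and its members are hypothetical, mirroring the doctests added in the diff:

```python
import my_module  # hypothetical demo module, as in the doctests above
from hydra.utils import get_class, get_method, get_object

cls = get_class("my_module.MyClass")      # raises unless the dotpath names a class
fn = get_method("my_module.my_function")  # raises unless the target is callable
obj = get_object("my_module.my_object")   # returns whatever _locate finds

assert cls is my_module.MyClass and callable(fn)
```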
| {"golden_diff": "diff --git a/hydra/utils.py b/hydra/utils.py\n--- a/hydra/utils.py\n+++ b/hydra/utils.py\n@@ -19,6 +19,14 @@\n \n \n def get_class(path: str) -> type:\n+ \"\"\"\n+ Look up a class based on a dotpath.\n+ Fails if the path does not point to a class.\n+\n+ >>> import my_module\n+ >>> from hydra.utils import get_class\n+ >>> assert get_class(\"my_module.MyClass\") is my_module.MyClass\n+ \"\"\"\n try:\n cls = _locate(path)\n if not isinstance(cls, type):\n@@ -28,11 +36,19 @@\n )\n return cls\n except Exception as e:\n- log.error(f\"Error initializing class at {path}: {e}\")\n+ log.error(f\"Error getting class at {path}: {e}\")\n raise e\n \n \n def get_method(path: str) -> Callable[..., Any]:\n+ \"\"\"\n+ Look up a callable based on a dotpath.\n+ Fails if the path does not point to a callable object.\n+\n+ >>> import my_module\n+ >>> from hydra.utils import get_method\n+ >>> assert get_method(\"my_module.my_function\") is my_module.my_function\n+ \"\"\"\n try:\n obj = _locate(path)\n if not callable(obj):\n@@ -51,6 +67,22 @@\n get_static_method = get_method\n \n \n+def get_object(path: str) -> Any:\n+ \"\"\"\n+ Look up a callable based on a dotpath.\n+\n+ >>> import my_module\n+ >>> from hydra.utils import get_object\n+ >>> assert get_object(\"my_module.my_object\") is my_module.my_object\n+ \"\"\"\n+ try:\n+ obj = _locate(path)\n+ return obj\n+ except Exception as e:\n+ log.error(f\"Error getting object at {path} : {e}\")\n+ raise e\n+\n+\n def get_original_cwd() -> str:\n \"\"\"\n :return: the original working directory the Hydra application was launched from\n", "issue": "Document hydra.utils.{get_class,get_method}\nUsers would benefit from documentation for the following:\r\n[`hydra.utils.get_class`](https://github.com/facebookresearch/hydra/blob/1cbe86ebecbeb134a3f2041120d57447a7394314/hydra/utils.py#L21)\r\n[`hydra.utils.get_method`](https://github.com/facebookresearch/hydra/blob/1cbe86ebecbeb134a3f2041120d57447a7394314/hydra/utils.py#L32)\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\n\nimport logging.config\nimport os\nfrom pathlib import Path\nfrom typing import Any, Callable\n\nimport hydra._internal.instantiate._instantiate2\nimport hydra.types\nfrom hydra._internal.utils import _locate\nfrom hydra.core.hydra_config import HydraConfig\n\nlog = logging.getLogger(__name__)\n\n# Instantiation related symbols\ninstantiate = hydra._internal.instantiate._instantiate2.instantiate\ncall = instantiate\nConvertMode = hydra.types.ConvertMode\n\n\ndef get_class(path: str) -> type:\n try:\n cls = _locate(path)\n if not isinstance(cls, type):\n raise ValueError(\n f\"Located non-class of type '{type(cls).__name__}'\"\n + f\" while loading '{path}'\"\n )\n return cls\n except Exception as e:\n log.error(f\"Error initializing class at {path}: {e}\")\n raise e\n\n\ndef get_method(path: str) -> Callable[..., Any]:\n try:\n obj = _locate(path)\n if not callable(obj):\n raise ValueError(\n f\"Located non-callable of type '{type(obj).__name__}'\"\n + f\" while loading '{path}'\"\n )\n cl: Callable[..., Any] = obj\n return cl\n except Exception as e:\n log.error(f\"Error getting callable at {path} : {e}\")\n raise e\n\n\n# Alias for get_method\nget_static_method = get_method\n\n\ndef get_original_cwd() -> str:\n \"\"\"\n :return: the original working directory the Hydra application was launched from\n \"\"\"\n if not HydraConfig.initialized():\n raise ValueError(\n \"get_original_cwd() must only be used after HydraConfig is initialized\"\n )\n ret = HydraConfig.get().runtime.cwd\n assert ret is not None and isinstance(ret, str)\n return ret\n\n\ndef to_absolute_path(path: str) -> str:\n \"\"\"\n converts the specified path to be absolute path.\n if the input path is relative, it's interpreted as relative to the original working directory\n if it's absolute, it's returned as is\n :param path: path to convert\n :return:\n \"\"\"\n p = Path(path)\n if not HydraConfig.initialized():\n base = Path(os.getcwd())\n else:\n base = Path(get_original_cwd())\n if p.is_absolute():\n ret = p\n else:\n ret = base / p\n return str(ret)\n", "path": "hydra/utils.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\n\nimport logging.config\nimport os\nfrom pathlib import Path\nfrom typing import Any, Callable\n\nimport hydra._internal.instantiate._instantiate2\nimport hydra.types\nfrom hydra._internal.utils import _locate\nfrom hydra.core.hydra_config import HydraConfig\n\nlog = logging.getLogger(__name__)\n\n# Instantiation related symbols\ninstantiate = hydra._internal.instantiate._instantiate2.instantiate\ncall = instantiate\nConvertMode = hydra.types.ConvertMode\n\n\ndef get_class(path: str) -> type:\n \"\"\"\n Look up a class based on a dotpath.\n Fails if the path does not point to a class.\n\n >>> import my_module\n >>> from hydra.utils import get_class\n >>> assert get_class(\"my_module.MyClass\") is my_module.MyClass\n \"\"\"\n try:\n cls = _locate(path)\n if not isinstance(cls, type):\n raise ValueError(\n f\"Located non-class of type '{type(cls).__name__}'\"\n + f\" while loading '{path}'\"\n )\n return cls\n except Exception as e:\n log.error(f\"Error getting class at {path}: {e}\")\n raise e\n\n\ndef get_method(path: str) -> Callable[..., Any]:\n \"\"\"\n Look up a callable based on a dotpath.\n Fails if the path does not point to a callable object.\n\n >>> import my_module\n >>> from hydra.utils import get_method\n >>> assert get_method(\"my_module.my_function\") is my_module.my_function\n \"\"\"\n try:\n obj = _locate(path)\n if not callable(obj):\n raise ValueError(\n f\"Located non-callable of type '{type(obj).__name__}'\"\n + f\" while loading '{path}'\"\n )\n cl: Callable[..., Any] = obj\n return cl\n except Exception as e:\n log.error(f\"Error getting callable at {path} : {e}\")\n raise e\n\n\n# Alias for get_method\nget_static_method = get_method\n\n\ndef get_object(path: str) -> Any:\n \"\"\"\n Look up a callable based on a dotpath.\n\n >>> import my_module\n >>> from hydra.utils import get_object\n >>> assert get_object(\"my_module.my_object\") is my_module.my_object\n \"\"\"\n try:\n obj = _locate(path)\n return obj\n except Exception as e:\n log.error(f\"Error getting object at {path} : {e}\")\n raise e\n\n\ndef get_original_cwd() -> str:\n \"\"\"\n :return: the original working directory the Hydra application was launched from\n \"\"\"\n if not HydraConfig.initialized():\n raise ValueError(\n \"get_original_cwd() must only be used after HydraConfig is initialized\"\n )\n ret = HydraConfig.get().runtime.cwd\n assert ret is not None and isinstance(ret, str)\n return ret\n\n\ndef to_absolute_path(path: str) -> str:\n \"\"\"\n converts the specified path to be absolute path.\n if the input path is relative, it's interpreted as relative to the original working directory\n if it's absolute, it's returned as is\n :param path: path to convert\n :return:\n \"\"\"\n p = Path(path)\n if not HydraConfig.initialized():\n base = Path(os.getcwd())\n else:\n base = Path(get_original_cwd())\n if p.is_absolute():\n ret = p\n else:\n ret = base / p\n return str(ret)\n", "path": "hydra/utils.py"}]} | 1,109 | 465 |
gh_patches_debug_17981 | rasdani/github-patches | git_diff | mdn__kuma-6760 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
T - Update robots.txt (Warnings have spiked on Search Console)
**Summary**
We are seeing a large spike in warnings on Google Search Console. The spike started on December 22, 2019.
<img width="952" alt="Coverage" src="https://user-images.githubusercontent.com/557852/72343521-5fb44000-36cf-11ea-8f1f-12955b98e34d.png">
**Steps To Reproduce (STR)**
1. Go to Search Console
2. Open Coverage Report
3. open "Indexed, though blocked by robots.txt"
**Actual behavior**
It looks like a bunch of URLs are blocked from indexing by Google that should not be blocked. Examples are:
* https://developer.mozilla.org/it/docs/Web/API/GlobalEventHandlers/onresize$translate
* https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/end%24edit
* https://developer.mozilla.org/en-US/docs/Web/API/RTCIceCandidateInit/sdpMid$translate?tolocale=de
When I test these URLs in Search Console's robots.txt tester, it tells me that those URLs are allowed, and they should be: $translate and $edit are not in robots.txt.
**Expected behavior**
Those URLs should not be blocked, so that Google can crawl the pages, discover the no-index meta tag and remove them from its index.
**Additional context**
I assume that something is broken when it comes to handling the $ character in the URL. Maybe we should just remove every $ rule from our robots.txt. As far as I can tell, almost all of those URLs now live under wiki.developer.mozilla.org anyway.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kuma/landing/views.py`
Content:
```
1 from django.conf import settings
2 from django.http import HttpResponse
3 from django.shortcuts import redirect, render
4 from django.views import static
5 from django.views.decorators.cache import never_cache
6 from django.views.generic import RedirectView
7
8 from kuma.core.decorators import ensure_wiki_domain, shared_cache_control
9 from kuma.core.utils import is_wiki
10 from kuma.feeder.models import Bundle
11 from kuma.feeder.sections import SECTION_HACKS
12 from kuma.search.models import Filter
13
14 from .utils import favicon_url
15
16
17 @shared_cache_control
18 def contribute_json(request):
19 return static.serve(request, "contribute.json", document_root=settings.ROOT)
20
21
22 @shared_cache_control
23 def home(request):
24 """Home page."""
25 context = {}
26 # Need for both wiki and react homepage
27 context["updates"] = list(Bundle.objects.recent_entries(SECTION_HACKS.updates)[:5])
28
29 # The default template name
30 template_name = "landing/react_homepage.html"
31 if is_wiki(request):
32 template_name = "landing/homepage.html"
33 context["default_filters"] = Filter.objects.default_filters()
34 return render(request, template_name, context)
35
36
37 @ensure_wiki_domain
38 @never_cache
39 def maintenance_mode(request):
40 if settings.MAINTENANCE_MODE:
41 return render(request, "landing/maintenance-mode.html")
42 else:
43 return redirect("home")
44
45
46 @ensure_wiki_domain
47 @shared_cache_control
48 def promote_buttons(request):
49 """Bug 646192: MDN affiliate buttons"""
50 return render(request, "landing/promote_buttons.html")
51
52
53 ROBOTS_ALLOWED_TXT = """\
54 User-agent: *
55 Sitemap: https://developer.mozilla.org/sitemap.xml
56
57 Disallow: /admin/
58 Disallow: /api/
59 Disallow: /*/dashboards/*
60 Disallow: /*docs/feeds
61 Disallow: /*docs/templates
62 Disallow: /*docs*Template:
63 Disallow: /*docs/all
64 Disallow: /*docs/tag*
65 Disallow: /*docs/needs-review*
66 Disallow: /*docs/localization-tag*
67 Disallow: /*docs/with-errors
68 Disallow: /*docs/without-parent
69 Disallow: /*docs/top-level
70 Disallow: /*docs/new
71 Disallow: /*docs/get-documents
72 Disallow: /*docs/submit_akismet_spam
73 Disallow: /*docs/load*
74 Disallow: /*docs/Experiment:*
75 Disallow: /*$api
76 Disallow: /*$compare
77 Disallow: /*$revision
78 Disallow: /*$history
79 Disallow: /*$children
80 Disallow: /*$flag
81 Disallow: /*$locales
82 Disallow: /*$toc
83 Disallow: /*$move
84 Disallow: /*$quick-review
85 Disallow: /*$samples
86 Disallow: /*$revert
87 Disallow: /*$repair_breadcrumbs
88 Disallow: /*$delete
89 Disallow: /*$restore
90 Disallow: /*$purge
91 Disallow: /*$subscribe
92 Disallow: /*$subscribe_to_tree
93 Disallow: /*$vote
94 Disallow: /*docs.json
95 Disallow: /*docs/ckeditor_config.js
96 Disallow: /*/files/
97 Disallow: /media
98 Disallow: /*move-requested
99 Disallow: /*preview-wiki-content
100 Disallow: /*profiles*/edit
101 Disallow: /skins
102 Disallow: /*type=feed
103 Disallow: /*users/
104 """ + "\n".join(
105 "Disallow: /{locale}/search".format(locale=locale)
106 for locale in settings.ENABLED_LOCALES
107 )
108
109 ROBOTS_GO_AWAY_TXT = """\
110 User-Agent: *
111 Disallow: /
112 """
113
114
115 @shared_cache_control
116 def robots_txt(request):
117 """Serve robots.txt that allows or forbids robots."""
118 host = request.get_host()
119 if host in settings.ALLOW_ROBOTS_DOMAINS:
120 robots = ""
121 elif host in settings.ALLOW_ROBOTS_WEB_DOMAINS:
122 robots = ROBOTS_ALLOWED_TXT
123 else:
124 robots = ROBOTS_GO_AWAY_TXT
125 return HttpResponse(robots, content_type="text/plain")
126
127
128 class FaviconRedirect(RedirectView):
129 """Redirect to the favicon in the static img folder (bug 1402497)"""
130
131 def get_redirect_url(self, *args, **kwargs):
132 return favicon_url()
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kuma/landing/views.py b/kuma/landing/views.py
--- a/kuma/landing/views.py
+++ b/kuma/landing/views.py
@@ -50,6 +50,13 @@
return render(request, "landing/promote_buttons.html")
+ROBOTS_ALL_ALLOWED_TXT = """\
+User-agent: *
+Sitemap: https://wiki.developer.mozilla.org/sitemap.xml
+
+Disallow:
+"""
+
ROBOTS_ALLOWED_TXT = """\
User-agent: *
Sitemap: https://developer.mozilla.org/sitemap.xml
@@ -119,7 +126,10 @@
if host in settings.ALLOW_ROBOTS_DOMAINS:
robots = ""
elif host in settings.ALLOW_ROBOTS_WEB_DOMAINS:
- robots = ROBOTS_ALLOWED_TXT
+ if host == settings.WIKI_HOST:
+ robots = ROBOTS_ALL_ALLOWED_TXT
+ else:
+ robots = ROBOTS_ALLOWED_TXT
else:
robots = ROBOTS_GO_AWAY_TXT
return HttpResponse(robots, content_type="text/plain")
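To make the new dispatch easy to verify, here is a hedged restatement of the host-to-policy branch the patched view implements (constants as defined in `kuma/landing/views.py` above):

```python
def pick_robots(host, settings):
    # Mirrors the patched robots_txt() view:
    if host in settings.ALLOW_ROBOTS_DOMAINS:
        return ""                           # empty file: nothing disallowed
    if host in settings.ALLOW_ROBOTS_WEB_DOMAINS:
        if host == settings.WIKI_HOST:
            return ROBOTS_ALL_ALLOWED_TXT   # wiki: allow everything, own sitemap
        return ROBOTS_ALLOWED_TXT           # main site: keep the $-rules
    return ROBOTS_GO_AWAY_TXT               # anything else: Disallow: /
```

The point of the fix is that the `$`-pattern rules stay only on the read-only domain, while the wiki domain (where the `$edit`/`$translate` URLs now live) serves a fully permissive robots.txt.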
| {"golden_diff": "diff --git a/kuma/landing/views.py b/kuma/landing/views.py\n--- a/kuma/landing/views.py\n+++ b/kuma/landing/views.py\n@@ -50,6 +50,13 @@\n return render(request, \"landing/promote_buttons.html\")\n \n \n+ROBOTS_ALL_ALLOWED_TXT = \"\"\"\\\n+User-agent: *\n+Sitemap: https://wiki.developer.mozilla.org/sitemap.xml\n+\n+Disallow:\n+\"\"\"\n+\n ROBOTS_ALLOWED_TXT = \"\"\"\\\n User-agent: *\n Sitemap: https://developer.mozilla.org/sitemap.xml\n@@ -119,7 +126,10 @@\n if host in settings.ALLOW_ROBOTS_DOMAINS:\n robots = \"\"\n elif host in settings.ALLOW_ROBOTS_WEB_DOMAINS:\n- robots = ROBOTS_ALLOWED_TXT\n+ if host == settings.WIKI_HOST:\n+ robots = ROBOTS_ALL_ALLOWED_TXT\n+ else:\n+ robots = ROBOTS_ALLOWED_TXT\n else:\n robots = ROBOTS_GO_AWAY_TXT\n return HttpResponse(robots, content_type=\"text/plain\")\n", "issue": "T - Update robots.tx (Warnings have spiked on Search Console)\n**Summary**\r\nWe are seeing a large spike in warnings on Google Search Console. The spike started on December 22, 2019.\r\n\r\n<img width=\"952\" alt=\"Coverage\" src=\"https://user-images.githubusercontent.com/557852/72343521-5fb44000-36cf-11ea-8f1f-12955b98e34d.png\">\r\n\r\n**Steps To Reproduce (STR)**\r\n1. Go to Search Console\r\n2. Open Coverage Report\r\n3. open \"Indexed, though blocked by robots.txt\"\r\n\r\n\r\n**Actual behavior**\r\nIt looks like a bunch of URLs are blocked from indexing by Google that should not be blocked. Examples are:\r\n* https://developer.mozilla.org/it/docs/Web/API/GlobalEventHandlers/onresize$translate\r\n* https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/end%24edit\r\n* https://developer.mozilla.org/en-US/docs/Web/API/RTCIceCandidateInit/sdpMid$translate?tolocale=de\r\n\r\nWhen I test these URLs in Search Console's robots.txt tester it tells me that those URLs are allowed, and they should be. $translate, or $edit are not in robots.txt\r\n\r\n\r\n**Expected behavior**\r\nThose URLs should not be blocked, so that Google can crawl the pages, discover the no-index meta tag and remove them from its index.\r\n\r\n\r\n**Additional context**\r\nI assume that something is broken when it comes to handling the $ chracter in the URL. Maybe we just remove everything with the $url from our robots.txt. 
As far as I can tell, almost all of those URLs now live under wiki.developer.mozilla.org anyway.\n", "before_files": [{"content": "from django.conf import settings\nfrom django.http import HttpResponse\nfrom django.shortcuts import redirect, render\nfrom django.views import static\nfrom django.views.decorators.cache import never_cache\nfrom django.views.generic import RedirectView\n\nfrom kuma.core.decorators import ensure_wiki_domain, shared_cache_control\nfrom kuma.core.utils import is_wiki\nfrom kuma.feeder.models import Bundle\nfrom kuma.feeder.sections import SECTION_HACKS\nfrom kuma.search.models import Filter\n\nfrom .utils import favicon_url\n\n\n@shared_cache_control\ndef contribute_json(request):\n return static.serve(request, \"contribute.json\", document_root=settings.ROOT)\n\n\n@shared_cache_control\ndef home(request):\n \"\"\"Home page.\"\"\"\n context = {}\n # Need for both wiki and react homepage\n context[\"updates\"] = list(Bundle.objects.recent_entries(SECTION_HACKS.updates)[:5])\n\n # The default template name\n template_name = \"landing/react_homepage.html\"\n if is_wiki(request):\n template_name = \"landing/homepage.html\"\n context[\"default_filters\"] = Filter.objects.default_filters()\n return render(request, template_name, context)\n\n\n@ensure_wiki_domain\n@never_cache\ndef maintenance_mode(request):\n if settings.MAINTENANCE_MODE:\n return render(request, \"landing/maintenance-mode.html\")\n else:\n return redirect(\"home\")\n\n\n@ensure_wiki_domain\n@shared_cache_control\ndef promote_buttons(request):\n \"\"\"Bug 646192: MDN affiliate buttons\"\"\"\n return render(request, \"landing/promote_buttons.html\")\n\n\nROBOTS_ALLOWED_TXT = \"\"\"\\\nUser-agent: *\nSitemap: https://developer.mozilla.org/sitemap.xml\n\nDisallow: /admin/\nDisallow: /api/\nDisallow: /*/dashboards/*\nDisallow: /*docs/feeds\nDisallow: /*docs/templates\nDisallow: /*docs*Template:\nDisallow: /*docs/all\nDisallow: /*docs/tag*\nDisallow: /*docs/needs-review*\nDisallow: /*docs/localization-tag*\nDisallow: /*docs/with-errors\nDisallow: /*docs/without-parent\nDisallow: /*docs/top-level\nDisallow: /*docs/new\nDisallow: /*docs/get-documents\nDisallow: /*docs/submit_akismet_spam\nDisallow: /*docs/load*\nDisallow: /*docs/Experiment:*\nDisallow: /*$api\nDisallow: /*$compare\nDisallow: /*$revision\nDisallow: /*$history\nDisallow: /*$children\nDisallow: /*$flag\nDisallow: /*$locales\nDisallow: /*$toc\nDisallow: /*$move\nDisallow: /*$quick-review\nDisallow: /*$samples\nDisallow: /*$revert\nDisallow: /*$repair_breadcrumbs\nDisallow: /*$delete\nDisallow: /*$restore\nDisallow: /*$purge\nDisallow: /*$subscribe\nDisallow: /*$subscribe_to_tree\nDisallow: /*$vote\nDisallow: /*docs.json\nDisallow: /*docs/ckeditor_config.js\nDisallow: /*/files/\nDisallow: /media\nDisallow: /*move-requested\nDisallow: /*preview-wiki-content\nDisallow: /*profiles*/edit\nDisallow: /skins\nDisallow: /*type=feed\nDisallow: /*users/\n\"\"\" + \"\\n\".join(\n \"Disallow: /{locale}/search\".format(locale=locale)\n for locale in settings.ENABLED_LOCALES\n)\n\nROBOTS_GO_AWAY_TXT = \"\"\"\\\nUser-Agent: *\nDisallow: /\n\"\"\"\n\n\n@shared_cache_control\ndef robots_txt(request):\n \"\"\"Serve robots.txt that allows or forbids robots.\"\"\"\n host = request.get_host()\n if host in settings.ALLOW_ROBOTS_DOMAINS:\n robots = \"\"\n elif host in settings.ALLOW_ROBOTS_WEB_DOMAINS:\n robots = ROBOTS_ALLOWED_TXT\n else:\n robots = ROBOTS_GO_AWAY_TXT\n return HttpResponse(robots, content_type=\"text/plain\")\n\n\nclass FaviconRedirect(RedirectView):\n 
\"\"\"Redirect to the favicon in the static img folder (bug 1402497)\"\"\"\n\n def get_redirect_url(self, *args, **kwargs):\n return favicon_url()\n", "path": "kuma/landing/views.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.http import HttpResponse\nfrom django.shortcuts import redirect, render\nfrom django.views import static\nfrom django.views.decorators.cache import never_cache\nfrom django.views.generic import RedirectView\n\nfrom kuma.core.decorators import ensure_wiki_domain, shared_cache_control\nfrom kuma.core.utils import is_wiki\nfrom kuma.feeder.models import Bundle\nfrom kuma.feeder.sections import SECTION_HACKS\nfrom kuma.search.models import Filter\n\nfrom .utils import favicon_url\n\n\n@shared_cache_control\ndef contribute_json(request):\n return static.serve(request, \"contribute.json\", document_root=settings.ROOT)\n\n\n@shared_cache_control\ndef home(request):\n \"\"\"Home page.\"\"\"\n context = {}\n # Need for both wiki and react homepage\n context[\"updates\"] = list(Bundle.objects.recent_entries(SECTION_HACKS.updates)[:5])\n\n # The default template name\n template_name = \"landing/react_homepage.html\"\n if is_wiki(request):\n template_name = \"landing/homepage.html\"\n context[\"default_filters\"] = Filter.objects.default_filters()\n return render(request, template_name, context)\n\n\n@ensure_wiki_domain\n@never_cache\ndef maintenance_mode(request):\n if settings.MAINTENANCE_MODE:\n return render(request, \"landing/maintenance-mode.html\")\n else:\n return redirect(\"home\")\n\n\n@ensure_wiki_domain\n@shared_cache_control\ndef promote_buttons(request):\n \"\"\"Bug 646192: MDN affiliate buttons\"\"\"\n return render(request, \"landing/promote_buttons.html\")\n\n\nROBOTS_ALL_ALLOWED_TXT = \"\"\"\\\nUser-agent: *\nSitemap: https://wiki.developer.mozilla.org/sitemap.xml\n\nDisallow:\n\"\"\"\n\nROBOTS_ALLOWED_TXT = \"\"\"\\\nUser-agent: *\nSitemap: https://developer.mozilla.org/sitemap.xml\n\nDisallow: /admin/\nDisallow: /api/\nDisallow: /*/dashboards/*\nDisallow: /*docs/feeds\nDisallow: /*docs/templates\nDisallow: /*docs*Template:\nDisallow: /*docs/all\nDisallow: /*docs/tag*\nDisallow: /*docs/needs-review*\nDisallow: /*docs/localization-tag*\nDisallow: /*docs/with-errors\nDisallow: /*docs/without-parent\nDisallow: /*docs/top-level\nDisallow: /*docs/new\nDisallow: /*docs/get-documents\nDisallow: /*docs/submit_akismet_spam\nDisallow: /*docs/load*\nDisallow: /*docs/Experiment:*\nDisallow: /*$api\nDisallow: /*$compare\nDisallow: /*$revision\nDisallow: /*$history\nDisallow: /*$children\nDisallow: /*$flag\nDisallow: /*$locales\nDisallow: /*$toc\nDisallow: /*$move\nDisallow: /*$quick-review\nDisallow: /*$samples\nDisallow: /*$revert\nDisallow: /*$repair_breadcrumbs\nDisallow: /*$delete\nDisallow: /*$restore\nDisallow: /*$purge\nDisallow: /*$subscribe\nDisallow: /*$subscribe_to_tree\nDisallow: /*$vote\nDisallow: /*docs.json\nDisallow: /*docs/ckeditor_config.js\nDisallow: /*/files/\nDisallow: /media\nDisallow: /*move-requested\nDisallow: /*preview-wiki-content\nDisallow: /*profiles*/edit\nDisallow: /skins\nDisallow: /*type=feed\nDisallow: /*users/\n\"\"\" + \"\\n\".join(\n \"Disallow: /{locale}/search\".format(locale=locale)\n for locale in settings.ENABLED_LOCALES\n)\n\nROBOTS_GO_AWAY_TXT = \"\"\"\\\nUser-Agent: *\nDisallow: /\n\"\"\"\n\n\n@shared_cache_control\ndef robots_txt(request):\n \"\"\"Serve robots.txt that allows or forbids robots.\"\"\"\n host = request.get_host()\n if host in settings.ALLOW_ROBOTS_DOMAINS:\n robots = \"\"\n elif 
host in settings.ALLOW_ROBOTS_WEB_DOMAINS:\n if host == settings.WIKI_HOST:\n robots = ROBOTS_ALL_ALLOWED_TXT\n else:\n robots = ROBOTS_ALLOWED_TXT\n else:\n robots = ROBOTS_GO_AWAY_TXT\n return HttpResponse(robots, content_type=\"text/plain\")\n\n\nclass FaviconRedirect(RedirectView):\n \"\"\"Redirect to the favicon in the static img folder (bug 1402497)\"\"\"\n\n def get_redirect_url(self, *args, **kwargs):\n return favicon_url()\n", "path": "kuma/landing/views.py"}]} | 1,856 | 240 |
gh_patches_debug_28659 | rasdani/github-patches | git_diff | searxng__searxng-2303 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: seznam(CZ) ignored in search results
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->
**Versions of SearX(NG):**
2022.11.25-1314c1c5, vanilla
2022.11.27-90b429bb, vanilla
2022.11.28, vanilla
2.3.7+20221122, fork
1.1.0-37-4d9586e2, vanilla
**How did you install SearX(NG)?**
I didn't. I tried several public instances and all suffer from the same issue.
**What happened?**
When searching for some Czech terms, like "seznam.cz", there is never the "seznam(CZ)" tag under individual search results. When trying to limit the search to the seznam(CZ) engine:
`!szn seznam.cz`
I'm getting an error:
`Sorry!we didn't find any results. Please use another query or search in more categories.`
When trying to limit the search to bing using "!bi" keyword, it works as expected (all search results have the bing tag under them).
**How to reproduce:**
`!szn <search term>`
**Expected behavior:**
When limiting searching to the seznam(CZ) engine, all search results should have the "seznam(CZ)" tag under them.
When searching w/o limiting, "seznam(CZ)" tag should be mixed in with other tags among search results.
**Screenshots:**

**Additional context:**
Besides public instances with the latest docker image (2022.11.25-1314c1c5), the bug also reproduces on forks, on upstream searX instances, and on NG versions newer than the dockerized images from Docker Hub.
https://searx.be/
https://searx.tiekoetter.com/
https://searx.mistli.net/
https://search.privacyguides.net/
https://searx.webheberg.info/
https://spot.murena.io/
https://darmarit.org/searx/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/seznam.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 """
3 Seznam
4 """
5
6 from urllib.parse import urlencode
7 from lxml import html
8 from searx.network import get
9 from searx.exceptions import SearxEngineAccessDeniedException
10 from searx.utils import (
11 extract_text,
12 eval_xpath_list,
13 eval_xpath_getindex,
14 eval_xpath,
15 )
16
17 # about
18 about = {
19 "website": "https://www.seznam.cz/",
20 "wikidata_id": "Q3490485",
21 "official_api_documentation": "https://api.sklik.cz/",
22 "use_official_api": False,
23 "require_api_key": False,
24 "results": "HTML",
25 "language": "cz",
26 }
27
28 categories = ['general', 'web']
29 base_url = 'https://search.seznam.cz/'
30
31
32 def request(query, params):
33 response_index = get(base_url, headers=params['headers'], raise_for_httperror=True)
34 dom = html.fromstring(response_index.text)
35
36 url_params = {
37 'q': query,
38 'oq': query,
39 }
40 for e in eval_xpath_list(dom, '//input[@type="hidden"]'):
41 name = e.get('name')
42 value = e.get('value')
43 url_params[name] = value
44
45 params['url'] = base_url + '?' + urlencode(url_params)
46 params['cookies'] = response_index.cookies
47 return params
48
49
50 def response(resp):
51 if resp.url.path.startswith('/verify'):
52 raise SearxEngineAccessDeniedException()
53
54 results = []
55
56 dom = html.fromstring(resp.content.decode())
57 for result_element in eval_xpath_list(dom, '//div[@data-dot="results"]/div'):
58 result_data = eval_xpath_getindex(result_element, './/div[contains(@class, "bec586")]', 0, default=None)
59 if result_data is None:
60 continue
61 title_element = eval_xpath_getindex(result_element, './/h3/a', 0)
62 results.append(
63 {
64 'url': title_element.get('href'),
65 'title': extract_text(title_element),
66 'content': extract_text(eval_xpath(result_data, './/div[@class="_3eded7"]')),
67 }
68 )
69
70 return results
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/engines/seznam.py b/searx/engines/seznam.py
--- a/searx/engines/seznam.py
+++ b/searx/engines/seznam.py
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
-"""
- Seznam
+# lint: pylint
+"""Seznam
+
"""
from urllib.parse import urlencode
@@ -11,7 +12,6 @@
extract_text,
eval_xpath_list,
eval_xpath_getindex,
- eval_xpath,
)
# about
@@ -54,8 +54,12 @@
results = []
dom = html.fromstring(resp.content.decode())
- for result_element in eval_xpath_list(dom, '//div[@data-dot="results"]/div'):
- result_data = eval_xpath_getindex(result_element, './/div[contains(@class, "bec586")]', 0, default=None)
+ for result_element in eval_xpath_list(
+ dom, '//div[@id="searchpage-root"]//div[@class="Layout--left"]/div[@class="f2c528"]'
+ ):
+ result_data = eval_xpath_getindex(
+ result_element, './/div[@class="c8774a" or @class="e69e8d a11657"]', 0, default=None
+ )
if result_data is None:
continue
title_element = eval_xpath_getindex(result_element, './/h3/a', 0)
@@ -63,7 +67,7 @@
{
'url': title_element.get('href'),
'title': extract_text(title_element),
- 'content': extract_text(eval_xpath(result_data, './/div[@class="_3eded7"]')),
+ 'content': extract_text(result_data),
}
)
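A quick way to sanity-check the updated XPath selectors against a saved Seznam results page; the HTML file name here is hypothetical:

```python
from lxml import html
from searx.utils import eval_xpath_list, eval_xpath_getindex, extract_text

dom = html.fromstring(open("seznam_results.html", encoding="utf-8").read())
rows = eval_xpath_list(
    dom, '//div[@id="searchpage-root"]//div[@class="Layout--left"]/div[@class="f2c528"]'
)
for row in rows:
    data = eval_xpath_getindex(
        row, './/div[@class="c8774a" or @class="e69e8d a11657"]', 0, default=None
    )
    if data is not None:
        print(extract_text(data)[:80])  # any output means the markup still matches
```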
| {"golden_diff": "diff --git a/searx/engines/seznam.py b/searx/engines/seznam.py\n--- a/searx/engines/seznam.py\n+++ b/searx/engines/seznam.py\n@@ -1,6 +1,7 @@\n # SPDX-License-Identifier: AGPL-3.0-or-later\n-\"\"\"\n- Seznam\n+# lint: pylint\n+\"\"\"Seznam\n+\n \"\"\"\n \n from urllib.parse import urlencode\n@@ -11,7 +12,6 @@\n extract_text,\n eval_xpath_list,\n eval_xpath_getindex,\n- eval_xpath,\n )\n \n # about\n@@ -54,8 +54,12 @@\n results = []\n \n dom = html.fromstring(resp.content.decode())\n- for result_element in eval_xpath_list(dom, '//div[@data-dot=\"results\"]/div'):\n- result_data = eval_xpath_getindex(result_element, './/div[contains(@class, \"bec586\")]', 0, default=None)\n+ for result_element in eval_xpath_list(\n+ dom, '//div[@id=\"searchpage-root\"]//div[@class=\"Layout--left\"]/div[@class=\"f2c528\"]'\n+ ):\n+ result_data = eval_xpath_getindex(\n+ result_element, './/div[@class=\"c8774a\" or @class=\"e69e8d a11657\"]', 0, default=None\n+ )\n if result_data is None:\n continue\n title_element = eval_xpath_getindex(result_element, './/h3/a', 0)\n@@ -63,7 +67,7 @@\n {\n 'url': title_element.get('href'),\n 'title': extract_text(title_element),\n- 'content': extract_text(eval_xpath(result_data, './/div[@class=\"_3eded7\"]')),\n+ 'content': extract_text(result_data),\n }\n )\n", "issue": "Bug: seznam(CZ) ignored in search results\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->\r\n\r\n**Versions of SearX(NG):**\r\n2022.11.25-1314c1c5, vanilla\r\n2022.11.27-90b429bb, vanilla\r\n2022.11.28, vanilla\r\n2.3.7+20221122, fork\r\n1.1.0-37-4d9586e2, vanilla\r\n\r\n**How did you install SearX(NG)?**\r\nI didn't. I tried several public instances and all suffer from the same issue.\r\n\r\n**What happened?**\r\nWhen searching for some Czech terms, like \"seznam.cz\", there is never the \"seznam(CZ)\" tag under individual search results. When trying to limit the search to the seznam(CZ) engine:\r\n`!szn seznam.cz`\r\nI'm getting an error:\r\n`Sorry!we didn't find any results. Please use another query or search in more categories.`\r\nWhen trying to limit the search to bing using \"!bi\" keyword, it works as expected (all search results have the bing tag under them).\r\n\r\n**How to reproduce:**\r\n`!szn <search term>`\r\n\r\n**Expected behavior:**\r\nWhen limiting searching to the seznam(CZ) engine, all search results should have the \"seznam(CZ)\" tag under them.\r\nWhen searching w/o limiting, \"seznam(CZ)\" tag should be mixed in with other tags among search results.\r\n\r\n**Screenshots:**\r\n\r\n\r\n**Additional context:**\r\nBesides public instances w/ the latest docker image (2022.11.25-1314c1c5). 
It neither works w/ forks, upstream searX instances nor higher NG versions than those utilizing dockerized images from docker hub.\r\nhttps://searx.be/\r\nhttps://searx.tiekoetter.com/\r\nhttps://searx.mistli.net/\r\nhttps://search.privacyguides.net/\r\nhttps://searx.webheberg.info/\r\nhttps://spot.murena.io/\r\nhttps://darmarit.org/searx/\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n\"\"\"\n Seznam\n\"\"\"\n\nfrom urllib.parse import urlencode\nfrom lxml import html\nfrom searx.network import get\nfrom searx.exceptions import SearxEngineAccessDeniedException\nfrom searx.utils import (\n extract_text,\n eval_xpath_list,\n eval_xpath_getindex,\n eval_xpath,\n)\n\n# about\nabout = {\n \"website\": \"https://www.seznam.cz/\",\n \"wikidata_id\": \"Q3490485\",\n \"official_api_documentation\": \"https://api.sklik.cz/\",\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": \"HTML\",\n \"language\": \"cz\",\n}\n\ncategories = ['general', 'web']\nbase_url = 'https://search.seznam.cz/'\n\n\ndef request(query, params):\n response_index = get(base_url, headers=params['headers'], raise_for_httperror=True)\n dom = html.fromstring(response_index.text)\n\n url_params = {\n 'q': query,\n 'oq': query,\n }\n for e in eval_xpath_list(dom, '//input[@type=\"hidden\"]'):\n name = e.get('name')\n value = e.get('value')\n url_params[name] = value\n\n params['url'] = base_url + '?' + urlencode(url_params)\n params['cookies'] = response_index.cookies\n return params\n\n\ndef response(resp):\n if resp.url.path.startswith('/verify'):\n raise SearxEngineAccessDeniedException()\n\n results = []\n\n dom = html.fromstring(resp.content.decode())\n for result_element in eval_xpath_list(dom, '//div[@data-dot=\"results\"]/div'):\n result_data = eval_xpath_getindex(result_element, './/div[contains(@class, \"bec586\")]', 0, default=None)\n if result_data is None:\n continue\n title_element = eval_xpath_getindex(result_element, './/h3/a', 0)\n results.append(\n {\n 'url': title_element.get('href'),\n 'title': extract_text(title_element),\n 'content': extract_text(eval_xpath(result_data, './/div[@class=\"_3eded7\"]')),\n }\n )\n\n return results\n", "path": "searx/engines/seznam.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Seznam\n\n\"\"\"\n\nfrom urllib.parse import urlencode\nfrom lxml import html\nfrom searx.network import get\nfrom searx.exceptions import SearxEngineAccessDeniedException\nfrom searx.utils import (\n extract_text,\n eval_xpath_list,\n eval_xpath_getindex,\n)\n\n# about\nabout = {\n \"website\": \"https://www.seznam.cz/\",\n \"wikidata_id\": \"Q3490485\",\n \"official_api_documentation\": \"https://api.sklik.cz/\",\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": \"HTML\",\n \"language\": \"cz\",\n}\n\ncategories = ['general', 'web']\nbase_url = 'https://search.seznam.cz/'\n\n\ndef request(query, params):\n response_index = get(base_url, headers=params['headers'], raise_for_httperror=True)\n dom = html.fromstring(response_index.text)\n\n url_params = {\n 'q': query,\n 'oq': query,\n }\n for e in eval_xpath_list(dom, '//input[@type=\"hidden\"]'):\n name = e.get('name')\n value = e.get('value')\n url_params[name] = value\n\n params['url'] = base_url + '?' 
+ urlencode(url_params)\n params['cookies'] = response_index.cookies\n return params\n\n\ndef response(resp):\n if resp.url.path.startswith('/verify'):\n raise SearxEngineAccessDeniedException()\n\n results = []\n\n dom = html.fromstring(resp.content.decode())\n for result_element in eval_xpath_list(\n dom, '//div[@id=\"searchpage-root\"]//div[@class=\"Layout--left\"]/div[@class=\"f2c528\"]'\n ):\n result_data = eval_xpath_getindex(\n result_element, './/div[@class=\"c8774a\" or @class=\"e69e8d a11657\"]', 0, default=None\n )\n if result_data is None:\n continue\n title_element = eval_xpath_getindex(result_element, './/h3/a', 0)\n results.append(\n {\n 'url': title_element.get('href'),\n 'title': extract_text(title_element),\n 'content': extract_text(result_data),\n }\n )\n\n return results\n", "path": "searx/engines/seznam.py"}]} | 1,441 | 424 |
gh_patches_debug_17634 | rasdani/github-patches | git_diff | liqd__a4-opin-726 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
use small images for user avatars in the moderators list
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/users/serializers.py`
Content:
```
1 from rest_framework import serializers
2
3 from .models import User
4
5
6 class UserSerializer(serializers.ModelSerializer):
7 avatar = serializers.ImageField()
8
9 class Meta:
10 model = User
11 fields = ('id', 'username', 'avatar', 'default_avatar')
12 read_only_fields = ('id', 'username', 'avatar', 'default_avatar')
13
14
15 # mails should not be exposed in API, so there is a separate one for this
16 class UserWithMailSerializer(UserSerializer):
17 class Meta(UserSerializer.Meta):
18 fields = ('id', 'username', 'avatar', 'default_avatar', 'email')
19 read_only_fields = ('id', 'username', 'avatar', 'default_avatar',
20 'email')
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/euth/users/serializers.py b/euth/users/serializers.py
--- a/euth/users/serializers.py
+++ b/euth/users/serializers.py
@@ -1,16 +1,22 @@
+from easy_thumbnails.files import get_thumbnailer
from rest_framework import serializers
from .models import User
class UserSerializer(serializers.ModelSerializer):
- avatar = serializers.ImageField()
+ avatar = serializers.SerializerMethodField()
class Meta:
model = User
fields = ('id', 'username', 'avatar', 'default_avatar')
read_only_fields = ('id', 'username', 'avatar', 'default_avatar')
+ def get_avatar(self, obj):
+ if obj.avatar:
+ image = get_thumbnailer(obj.avatar)['avatar_small']
+ return image.url
+
# mails should not be exposed in API, so there is a separate one for this
class UserWithMailSerializer(UserSerializer):
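Note that the patch presumes an `avatar_small` thumbnail alias is registered with easy-thumbnails; a sketch of the settings entry such a project typically carries (sizes illustrative, not taken from this repo):

```python
# settings.py -- easy-thumbnails alias assumed by get_thumbnailer(...)['avatar_small']
THUMBNAIL_ALIASES = {
    '': {
        'avatar_small': {'size': (60, 60), 'crop': True},
    },
}
```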
| {"golden_diff": "diff --git a/euth/users/serializers.py b/euth/users/serializers.py\n--- a/euth/users/serializers.py\n+++ b/euth/users/serializers.py\n@@ -1,16 +1,22 @@\n+from easy_thumbnails.files import get_thumbnailer\n from rest_framework import serializers\n \n from .models import User\n \n \n class UserSerializer(serializers.ModelSerializer):\n- avatar = serializers.ImageField()\n+ avatar = serializers.SerializerMethodField()\n \n class Meta:\n model = User\n fields = ('id', 'username', 'avatar', 'default_avatar')\n read_only_fields = ('id', 'username', 'avatar', 'default_avatar')\n \n+ def get_avatar(self, obj):\n+ if obj.avatar:\n+ image = get_thumbnailer(obj.avatar)['avatar_small']\n+ return image.url\n+\n \n # mails should not be exposed in API, so there is a separate one for this\n class UserWithMailSerializer(UserSerializer):\n", "issue": "use small images in user avatar in moderators list\n\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom .models import User\n\n\nclass UserSerializer(serializers.ModelSerializer):\n avatar = serializers.ImageField()\n\n class Meta:\n model = User\n fields = ('id', 'username', 'avatar', 'default_avatar')\n read_only_fields = ('id', 'username', 'avatar', 'default_avatar')\n\n\n# mails should not be exposed in API, so there is a separate one for this\nclass UserWithMailSerializer(UserSerializer):\n class Meta(UserSerializer.Meta):\n fields = ('id', 'username', 'avatar', 'default_avatar', 'email')\n read_only_fields = ('id', 'username', 'avatar', 'default_avatar',\n 'email')\n", "path": "euth/users/serializers.py"}], "after_files": [{"content": "from easy_thumbnails.files import get_thumbnailer\nfrom rest_framework import serializers\n\nfrom .models import User\n\n\nclass UserSerializer(serializers.ModelSerializer):\n avatar = serializers.SerializerMethodField()\n\n class Meta:\n model = User\n fields = ('id', 'username', 'avatar', 'default_avatar')\n read_only_fields = ('id', 'username', 'avatar', 'default_avatar')\n\n def get_avatar(self, obj):\n if obj.avatar:\n image = get_thumbnailer(obj.avatar)['avatar_small']\n return image.url\n\n\n# mails should not be exposed in API, so there is a separate one for this\nclass UserWithMailSerializer(UserSerializer):\n class Meta(UserSerializer.Meta):\n fields = ('id', 'username', 'avatar', 'default_avatar', 'email')\n read_only_fields = ('id', 'username', 'avatar', 'default_avatar',\n 'email')\n", "path": "euth/users/serializers.py"}]} | 451 | 205 |
gh_patches_debug_647 | rasdani/github-patches | git_diff | pex-tool__pex-2095 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.129
On the docket:
+ [x] Pex resolves VCS and local project requirements from locks incorrectly. #2092
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.128"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.128"
+__version__ = "2.1.129"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.128\"\n+__version__ = \"2.1.129\"\n", "issue": "Release 2.1.129\nOn the docket:\r\n+ [x] Pex resolves VCS and local project requirements from locks incorrectly. #2092\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.128\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.129\"\n", "path": "pex/version.py"}]} | 346 | 98 |
gh_patches_debug_13708 | rasdani/github-patches | git_diff | bokeh__bokeh-8492 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typo in range_tool example
There is a small typo "range_rool" in the range_tool.py example. I would like to use this issue to create my first pull request and see how the process works.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/plotting/file/range_tool.py`
Content:
```
1 import numpy as np
2
3 from bokeh.io import show
4 from bokeh.layouts import column
5 from bokeh.models import ColumnDataSource, RangeTool
6 from bokeh.plotting import figure
7 from bokeh.sampledata.stocks import AAPL
8
9 dates = np.array(AAPL['date'], dtype=np.datetime64)
10 source = ColumnDataSource(data=dict(date=dates, close=AAPL['adj_close']))
11
12 p = figure(plot_height=300, plot_width=800, tools="", toolbar_location=None,
13 x_axis_type="datetime", x_axis_location="above",
14 background_fill_color="#efefef", x_range=(dates[1500], dates[2500]))
15
16 p.line('date', 'close', source=source)
17 p.yaxis.axis_label = 'Price'
18
19 select = figure(title="Drag the middle and edges of the selection box to change the range above",
20 plot_height=130, plot_width=800, y_range=p.y_range,
21 x_axis_type="datetime", y_axis_type=None,
22 tools="", toolbar_location=None, background_fill_color="#efefef")
23
24 range_rool = RangeTool(x_range=p.x_range)
25 range_rool.overlay.fill_color = "navy"
26 range_rool.overlay.fill_alpha = 0.2
27
28 select.line('date', 'close', source=source)
29 select.ygrid.grid_line_color = None
30 select.add_tools(range_rool)
31 select.toolbar.active_multi = range_rool
32
33 show(column(p, select))
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/plotting/file/range_tool.py b/examples/plotting/file/range_tool.py
--- a/examples/plotting/file/range_tool.py
+++ b/examples/plotting/file/range_tool.py
@@ -21,13 +21,13 @@
x_axis_type="datetime", y_axis_type=None,
tools="", toolbar_location=None, background_fill_color="#efefef")
-range_rool = RangeTool(x_range=p.x_range)
-range_rool.overlay.fill_color = "navy"
-range_rool.overlay.fill_alpha = 0.2
+range_tool = RangeTool(x_range=p.x_range)
+range_tool.overlay.fill_color = "navy"
+range_tool.overlay.fill_alpha = 0.2
select.line('date', 'close', source=source)
select.ygrid.grid_line_color = None
-select.add_tools(range_rool)
-select.toolbar.active_multi = range_rool
+select.add_tools(range_tool)
+select.toolbar.active_multi = range_tool
show(column(p, select))
| {"golden_diff": "diff --git a/examples/plotting/file/range_tool.py b/examples/plotting/file/range_tool.py\n--- a/examples/plotting/file/range_tool.py\n+++ b/examples/plotting/file/range_tool.py\n@@ -21,13 +21,13 @@\n x_axis_type=\"datetime\", y_axis_type=None,\n tools=\"\", toolbar_location=None, background_fill_color=\"#efefef\")\n \n-range_rool = RangeTool(x_range=p.x_range)\n-range_rool.overlay.fill_color = \"navy\"\n-range_rool.overlay.fill_alpha = 0.2\n+range_tool = RangeTool(x_range=p.x_range)\n+range_tool.overlay.fill_color = \"navy\"\n+range_tool.overlay.fill_alpha = 0.2\n \n select.line('date', 'close', source=source)\n select.ygrid.grid_line_color = None\n-select.add_tools(range_rool)\n-select.toolbar.active_multi = range_rool\n+select.add_tools(range_tool)\n+select.toolbar.active_multi = range_tool\n \n show(column(p, select))\n", "issue": "Typo in range_tool example\nThere is a small typo \"range_rool\" in the range_tool.py example. I would like to use this issue to create my first pull request and see how the process works.\n", "before_files": [{"content": "import numpy as np\n\nfrom bokeh.io import show\nfrom bokeh.layouts import column\nfrom bokeh.models import ColumnDataSource, RangeTool\nfrom bokeh.plotting import figure\nfrom bokeh.sampledata.stocks import AAPL\n\ndates = np.array(AAPL['date'], dtype=np.datetime64)\nsource = ColumnDataSource(data=dict(date=dates, close=AAPL['adj_close']))\n\np = figure(plot_height=300, plot_width=800, tools=\"\", toolbar_location=None,\n x_axis_type=\"datetime\", x_axis_location=\"above\",\n background_fill_color=\"#efefef\", x_range=(dates[1500], dates[2500]))\n\np.line('date', 'close', source=source)\np.yaxis.axis_label = 'Price'\n\nselect = figure(title=\"Drag the middle and edges of the selection box to change the range above\",\n plot_height=130, plot_width=800, y_range=p.y_range,\n x_axis_type=\"datetime\", y_axis_type=None,\n tools=\"\", toolbar_location=None, background_fill_color=\"#efefef\")\n\nrange_rool = RangeTool(x_range=p.x_range)\nrange_rool.overlay.fill_color = \"navy\"\nrange_rool.overlay.fill_alpha = 0.2\n\nselect.line('date', 'close', source=source)\nselect.ygrid.grid_line_color = None\nselect.add_tools(range_rool)\nselect.toolbar.active_multi = range_rool\n\nshow(column(p, select))\n", "path": "examples/plotting/file/range_tool.py"}], "after_files": [{"content": "import numpy as np\n\nfrom bokeh.io import show\nfrom bokeh.layouts import column\nfrom bokeh.models import ColumnDataSource, RangeTool\nfrom bokeh.plotting import figure\nfrom bokeh.sampledata.stocks import AAPL\n\ndates = np.array(AAPL['date'], dtype=np.datetime64)\nsource = ColumnDataSource(data=dict(date=dates, close=AAPL['adj_close']))\n\np = figure(plot_height=300, plot_width=800, tools=\"\", toolbar_location=None,\n x_axis_type=\"datetime\", x_axis_location=\"above\",\n background_fill_color=\"#efefef\", x_range=(dates[1500], dates[2500]))\n\np.line('date', 'close', source=source)\np.yaxis.axis_label = 'Price'\n\nselect = figure(title=\"Drag the middle and edges of the selection box to change the range above\",\n plot_height=130, plot_width=800, y_range=p.y_range,\n x_axis_type=\"datetime\", y_axis_type=None,\n tools=\"\", toolbar_location=None, background_fill_color=\"#efefef\")\n\nrange_tool = RangeTool(x_range=p.x_range)\nrange_tool.overlay.fill_color = \"navy\"\nrange_tool.overlay.fill_alpha = 0.2\n\nselect.line('date', 'close', source=source)\nselect.ygrid.grid_line_color = None\nselect.add_tools(range_tool)\nselect.toolbar.active_multi = 
range_tool\n\nshow(column(p, select))\n", "path": "examples/plotting/file/range_tool.py"}]} | 686 | 219 |
gh_patches_debug_62436 | rasdani/github-patches | git_diff | comic__grand-challenge.org-3379 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Server error page won't render because of missing context
If a view throws a 500 error, the 500.html should get rendered. We recently updated the template to inherit from base.html, and now it will not render anymore because it is missing context variables (the 500 view is by default passed an empty context).
I'm unsure if we should update the 500 view and add the missing context or if we should go back to not inheriting from base.html for the error views?
--- END ISSUE ---
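One way to supply the missing context, sketched under the assumption that a dedicated view is acceptable (the project's actual `grandchallenge.core.views.handler500` is not reproduced here), is a custom handler that renders the template with the request attached so context processors can populate what base.html expects:

```python
# Hypothetical sketch of a server-error view; Django invokes handler500 with
# only the request, so the view itself must supply a usable context.
from django.http import HttpResponseServerError
from django.template import loader


def handler500(request):
    template = loader.get_template("500.html")
    # Passing the request lets context processors run for base.html.
    return HttpResponseServerError(template.render(context=None, request=request))
```

The URLconf then points the module-level `handler500` string at such a view, which is exactly the shape of the patch below.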
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/config/urls/challenge_subdomain.py`
Content:
```
1 from django.conf import settings
2 from django.urls import include, path
3 from django.views.generic import TemplateView
4
5 from grandchallenge.challenges.views import ChallengeUpdate
6
7 urlpatterns = [
8 path(
9 "robots.txt",
10 TemplateView.as_view(
11 template_name="robots.txt", content_type="text/plain"
12 ),
13 name="subdomain_robots_txt",
14 ),
15 path(
16 "evaluation/",
17 include("grandchallenge.evaluation.urls", namespace="evaluation"),
18 ),
19 path("teams/", include("grandchallenge.teams.urls", namespace="teams")),
20 path(
21 "participants/",
22 include("grandchallenge.participants.urls", namespace="participants"),
23 ),
24 path("admins/", include("grandchallenge.admins.urls", namespace="admins")),
25 path("update/", ChallengeUpdate.as_view(), name="challenge-update"),
26 path("summernote/", include("django_summernote.urls")),
27 path("", include("grandchallenge.pages.urls", namespace="pages")),
28 ]
29
30 if settings.DEBUG and settings.ENABLE_DEBUG_TOOLBAR:
31 import debug_toolbar
32
33 urlpatterns = [
34 path("__debug__/", include(debug_toolbar.urls))
35 ] + urlpatterns
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/config/urls/challenge_subdomain.py b/app/config/urls/challenge_subdomain.py
--- a/app/config/urls/challenge_subdomain.py
+++ b/app/config/urls/challenge_subdomain.py
@@ -4,6 +4,9 @@
from grandchallenge.challenges.views import ChallengeUpdate
+handler500 = "grandchallenge.core.views.handler500"
+
+
urlpatterns = [
path(
"robots.txt",
| {"golden_diff": "diff --git a/app/config/urls/challenge_subdomain.py b/app/config/urls/challenge_subdomain.py\n--- a/app/config/urls/challenge_subdomain.py\n+++ b/app/config/urls/challenge_subdomain.py\n@@ -4,6 +4,9 @@\n \n from grandchallenge.challenges.views import ChallengeUpdate\n \n+handler500 = \"grandchallenge.core.views.handler500\"\n+\n+\n urlpatterns = [\n path(\n \"robots.txt\",\n", "issue": "Server error page won't render because of missing context\nIf a view throws a 500 error, the 500.html should get rendered. We recently updated the template to inherit from base.html, and now it will not render anymore because it is missing context variables (the 500 view is by default passed an empty context). \r\n\r\nI'm unsure if we should update the 500 view and add the missing context or if we should go back to not inheriting from base.html for the error views? \r\n\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.urls import include, path\nfrom django.views.generic import TemplateView\n\nfrom grandchallenge.challenges.views import ChallengeUpdate\n\nurlpatterns = [\n path(\n \"robots.txt\",\n TemplateView.as_view(\n template_name=\"robots.txt\", content_type=\"text/plain\"\n ),\n name=\"subdomain_robots_txt\",\n ),\n path(\n \"evaluation/\",\n include(\"grandchallenge.evaluation.urls\", namespace=\"evaluation\"),\n ),\n path(\"teams/\", include(\"grandchallenge.teams.urls\", namespace=\"teams\")),\n path(\n \"participants/\",\n include(\"grandchallenge.participants.urls\", namespace=\"participants\"),\n ),\n path(\"admins/\", include(\"grandchallenge.admins.urls\", namespace=\"admins\")),\n path(\"update/\", ChallengeUpdate.as_view(), name=\"challenge-update\"),\n path(\"summernote/\", include(\"django_summernote.urls\")),\n path(\"\", include(\"grandchallenge.pages.urls\", namespace=\"pages\")),\n]\n\nif settings.DEBUG and settings.ENABLE_DEBUG_TOOLBAR:\n import debug_toolbar\n\n urlpatterns = [\n path(\"__debug__/\", include(debug_toolbar.urls))\n ] + urlpatterns\n", "path": "app/config/urls/challenge_subdomain.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.urls import include, path\nfrom django.views.generic import TemplateView\n\nfrom grandchallenge.challenges.views import ChallengeUpdate\n\nhandler500 = \"grandchallenge.core.views.handler500\"\n\n\nurlpatterns = [\n path(\n \"robots.txt\",\n TemplateView.as_view(\n template_name=\"robots.txt\", content_type=\"text/plain\"\n ),\n name=\"subdomain_robots_txt\",\n ),\n path(\n \"evaluation/\",\n include(\"grandchallenge.evaluation.urls\", namespace=\"evaluation\"),\n ),\n path(\"teams/\", include(\"grandchallenge.teams.urls\", namespace=\"teams\")),\n path(\n \"participants/\",\n include(\"grandchallenge.participants.urls\", namespace=\"participants\"),\n ),\n path(\"admins/\", include(\"grandchallenge.admins.urls\", namespace=\"admins\")),\n path(\"update/\", ChallengeUpdate.as_view(), name=\"challenge-update\"),\n path(\"summernote/\", include(\"django_summernote.urls\")),\n path(\"\", include(\"grandchallenge.pages.urls\", namespace=\"pages\")),\n]\n\nif settings.DEBUG and settings.ENABLE_DEBUG_TOOLBAR:\n import debug_toolbar\n\n urlpatterns = [\n path(\"__debug__/\", include(debug_toolbar.urls))\n ] + urlpatterns\n", "path": "app/config/urls/challenge_subdomain.py"}]} | 674 | 98 |
gh_patches_debug_22478 | rasdani/github-patches | git_diff | python-pillow__Pillow-3588 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No n_frames, or bad value for n_frames
When I feed the test file flower2.jpg into this code (from #1630)
```python
im = Image.open( fn )
imgcnt = im.n_frames
colors = im.getcolors( im.width * im.height )
if args.hist:
for cnt, col in colors:
allcolors[ col ] += cnt
for iz in range( 1, imgcnt ):
im = Image.open( fn ) # does getcolors implicitly close????
# without the open, get "seek of closed
# file" error on line below.
im.seek( iz )
colors = im.getcolors( im.width * im.height )
for cnt, col in colors:
allcolors[ col ] += cnt
```
I get "AttributeError: n_frames"
But other .jpg files do not get that error. This one, http://nevcal.com/temporary/20151110-105826gl.jpg, has no problem with the attribute error on that line, but it gets a value of 2 and apparently handles the seek OK, yet dies in the second call to getcolors with "OSError: image file is truncated (0 bytes not processed)".
--- END ISSUE ---
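As a workaround on the reading side (a sketch; `fn` is a placeholder path), the attribute error can be avoided by treating `n_frames` as optional, since plain JPEGs do not expose it:

```python
from PIL import Image

fn = "flower2.jpg"  # placeholder path
with Image.open(fn) as im:
    n_frames = getattr(im, "n_frames", 1)  # plain JPEGs have no n_frames
    for i in range(n_frames):
        im.seek(i)
        colors = im.getcolors(im.width * im.height)
```

The truncation error on the second file is a separate defect: seeking to a later MPO frame must also refresh that frame's metadata, which is what the patch below addresses.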
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/PIL/MpoImagePlugin.py`
Content:
```
1 #
2 # The Python Imaging Library.
3 # $Id$
4 #
5 # MPO file handling
6 #
7 # See "Multi-Picture Format" (CIPA DC-007-Translation 2009, Standard of the
8 # Camera & Imaging Products Association)
9 #
10 # The multi-picture object combines multiple JPEG images (with a modified EXIF
11 # data format) into a single file. While it can theoretically be used much like
12 # a GIF animation, it is commonly used to represent 3D photographs and is (as
13 # of this writing) the most commonly used format by 3D cameras.
14 #
15 # History:
16 # 2014-03-13 Feneric Created
17 #
18 # See the README file for information on usage and redistribution.
19 #
20
21 from . import Image, JpegImagePlugin
22
23 # __version__ is deprecated and will be removed in a future version. Use
24 # PIL.__version__ instead.
25 __version__ = "0.1"
26
27
28 def _accept(prefix):
29 return JpegImagePlugin._accept(prefix)
30
31
32 def _save(im, fp, filename):
33 # Note that we can only save the current frame at present
34 return JpegImagePlugin._save(im, fp, filename)
35
36
37 ##
38 # Image plugin for MPO images.
39
40 class MpoImageFile(JpegImagePlugin.JpegImageFile):
41
42 format = "MPO"
43 format_description = "MPO (CIPA DC-007)"
44 _close_exclusive_fp_after_loading = False
45
46 def _open(self):
47 self.fp.seek(0) # prep the fp in order to pass the JPEG test
48 JpegImagePlugin.JpegImageFile._open(self)
49 self.mpinfo = self._getmp()
50 self.__framecount = self.mpinfo[0xB001]
51 self.__mpoffsets = [mpent['DataOffset'] + self.info['mpoffset']
52 for mpent in self.mpinfo[0xB002]]
53 self.__mpoffsets[0] = 0
54 # Note that the following assertion will only be invalid if something
55 # gets broken within JpegImagePlugin.
56 assert self.__framecount == len(self.__mpoffsets)
57 del self.info['mpoffset'] # no longer needed
58 self.__fp = self.fp # FIXME: hack
59 self.__fp.seek(self.__mpoffsets[0]) # get ready to read first frame
60 self.__frame = 0
61 self.offset = 0
62 # for now we can only handle reading and individual frame extraction
63 self.readonly = 1
64
65 def load_seek(self, pos):
66 self.__fp.seek(pos)
67
68 @property
69 def n_frames(self):
70 return self.__framecount
71
72 @property
73 def is_animated(self):
74 return self.__framecount > 1
75
76 def seek(self, frame):
77 if not self._seek_check(frame):
78 return
79 self.fp = self.__fp
80 self.offset = self.__mpoffsets[frame]
81 self.tile = [
82 ("jpeg", (0, 0) + self.size, self.offset, (self.mode, ""))
83 ]
84 self.__frame = frame
85
86 def tell(self):
87 return self.__frame
88
89 def _close__fp(self):
90 try:
91 if self.__fp != self.fp:
92 self.__fp.close()
93 except AttributeError:
94 pass
95 finally:
96 self.__fp = None
97
98
99 # ---------------------------------------------------------------------
100 # Registry stuff
101
102 # Note that since MPO shares a factory with JPEG, we do not need to do a
103 # separate registration for it here.
104 # Image.register_open(MpoImageFile.format,
105 # JpegImagePlugin.jpeg_factory, _accept)
106 Image.register_save(MpoImageFile.format, _save)
107
108 Image.register_extension(MpoImageFile.format, ".mpo")
109
110 Image.register_mime(MpoImageFile.format, "image/mpo")
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/PIL/MpoImagePlugin.py b/src/PIL/MpoImagePlugin.py
--- a/src/PIL/MpoImagePlugin.py
+++ b/src/PIL/MpoImagePlugin.py
@@ -18,7 +18,8 @@
# See the README file for information on usage and redistribution.
#
-from . import Image, JpegImagePlugin
+from . import Image, ImageFile, JpegImagePlugin
+from ._binary import i16be as i16
# __version__ is deprecated and will be removed in a future version. Use
# PIL.__version__ instead.
@@ -78,6 +79,20 @@
return
self.fp = self.__fp
self.offset = self.__mpoffsets[frame]
+
+ self.fp.seek(self.offset + 2) # skip SOI marker
+ if "parsed_exif" in self.info:
+ del self.info["parsed_exif"]
+ if i16(self.fp.read(2)) == 0xFFE1: # APP1
+ n = i16(self.fp.read(2))-2
+ self.info["exif"] = ImageFile._safe_read(self.fp, n)
+
+ exif = self._getexif()
+ if 40962 in exif and 40963 in exif:
+ self._size = (exif[40962], exif[40963])
+ elif "exif" in self.info:
+ del self.info["exif"]
+
self.tile = [
("jpeg", (0, 0) + self.size, self.offset, (self.mode, ""))
]
| {"golden_diff": "diff --git a/src/PIL/MpoImagePlugin.py b/src/PIL/MpoImagePlugin.py\n--- a/src/PIL/MpoImagePlugin.py\n+++ b/src/PIL/MpoImagePlugin.py\n@@ -18,7 +18,8 @@\n # See the README file for information on usage and redistribution.\n #\n \n-from . import Image, JpegImagePlugin\n+from . import Image, ImageFile, JpegImagePlugin\n+from ._binary import i16be as i16\n \n # __version__ is deprecated and will be removed in a future version. Use\n # PIL.__version__ instead.\n@@ -78,6 +79,20 @@\n return\n self.fp = self.__fp\n self.offset = self.__mpoffsets[frame]\n+\n+ self.fp.seek(self.offset + 2) # skip SOI marker\n+ if \"parsed_exif\" in self.info:\n+ del self.info[\"parsed_exif\"]\n+ if i16(self.fp.read(2)) == 0xFFE1: # APP1\n+ n = i16(self.fp.read(2))-2\n+ self.info[\"exif\"] = ImageFile._safe_read(self.fp, n)\n+\n+ exif = self._getexif()\n+ if 40962 in exif and 40963 in exif:\n+ self._size = (exif[40962], exif[40963])\n+ elif \"exif\" in self.info:\n+ del self.info[\"exif\"]\n+\n self.tile = [\n (\"jpeg\", (0, 0) + self.size, self.offset, (self.mode, \"\"))\n ]\n", "issue": "No n_frames, or bad value for n_frames\nWhen I feed the test file flower2.jpg into this code (from #1630)\r\n\r\n```python\r\nim = Image.open( fn )\r\nimgcnt = im.n_frames\r\ncolors = im.getcolors( im.width * im.height )\r\nif args.hist:\r\n for cnt, col in colors:\r\n allcolors[ col ] += cnt\r\n for iz in range( 1, imgcnt ):\r\n im = Image.open( fn ) # does getcolors implicitly close????\r\n # without the open, get \"seek of closed\r\n # file\" error on line below.\r\n im.seek( iz )\r\n colors = im.getcolors( im.width * im.height )\r\n for cnt, col in colors:\r\n allcolors[ col ] += cnt\r\n```\r\n\r\nI get \"AttributeError: n_frames\"\r\n\r\nBut other .jpg files do not get that error... this one: http://nevcal.com/temporary/20151110-105826gl.jpg has no problem with the attribute error on that line, but it gets a value of 2, apparently handles the seek OK, but dies in the second call to getcolors, with \"OSError: image file is truncated (0 bytes not processed)\".\r\n\n", "before_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# MPO file handling\n#\n# See \"Multi-Picture Format\" (CIPA DC-007-Translation 2009, Standard of the\n# Camera & Imaging Products Association)\n#\n# The multi-picture object combines multiple JPEG images (with a modified EXIF\n# data format) into a single file. While it can theoretically be used much like\n# a GIF animation, it is commonly used to represent 3D photographs and is (as\n# of this writing) the most commonly used format by 3D cameras.\n#\n# History:\n# 2014-03-13 Feneric Created\n#\n# See the README file for information on usage and redistribution.\n#\n\nfrom . import Image, JpegImagePlugin\n\n# __version__ is deprecated and will be removed in a future version. 
Use\n# PIL.__version__ instead.\n__version__ = \"0.1\"\n\n\ndef _accept(prefix):\n return JpegImagePlugin._accept(prefix)\n\n\ndef _save(im, fp, filename):\n # Note that we can only save the current frame at present\n return JpegImagePlugin._save(im, fp, filename)\n\n\n##\n# Image plugin for MPO images.\n\nclass MpoImageFile(JpegImagePlugin.JpegImageFile):\n\n format = \"MPO\"\n format_description = \"MPO (CIPA DC-007)\"\n _close_exclusive_fp_after_loading = False\n\n def _open(self):\n self.fp.seek(0) # prep the fp in order to pass the JPEG test\n JpegImagePlugin.JpegImageFile._open(self)\n self.mpinfo = self._getmp()\n self.__framecount = self.mpinfo[0xB001]\n self.__mpoffsets = [mpent['DataOffset'] + self.info['mpoffset']\n for mpent in self.mpinfo[0xB002]]\n self.__mpoffsets[0] = 0\n # Note that the following assertion will only be invalid if something\n # gets broken within JpegImagePlugin.\n assert self.__framecount == len(self.__mpoffsets)\n del self.info['mpoffset'] # no longer needed\n self.__fp = self.fp # FIXME: hack\n self.__fp.seek(self.__mpoffsets[0]) # get ready to read first frame\n self.__frame = 0\n self.offset = 0\n # for now we can only handle reading and individual frame extraction\n self.readonly = 1\n\n def load_seek(self, pos):\n self.__fp.seek(pos)\n\n @property\n def n_frames(self):\n return self.__framecount\n\n @property\n def is_animated(self):\n return self.__framecount > 1\n\n def seek(self, frame):\n if not self._seek_check(frame):\n return\n self.fp = self.__fp\n self.offset = self.__mpoffsets[frame]\n self.tile = [\n (\"jpeg\", (0, 0) + self.size, self.offset, (self.mode, \"\"))\n ]\n self.__frame = frame\n\n def tell(self):\n return self.__frame\n\n def _close__fp(self):\n try:\n if self.__fp != self.fp:\n self.__fp.close()\n except AttributeError:\n pass\n finally:\n self.__fp = None\n\n\n# ---------------------------------------------------------------------\n# Registry stuff\n\n# Note that since MPO shares a factory with JPEG, we do not need to do a\n# separate registration for it here.\n# Image.register_open(MpoImageFile.format,\n# JpegImagePlugin.jpeg_factory, _accept)\nImage.register_save(MpoImageFile.format, _save)\n\nImage.register_extension(MpoImageFile.format, \".mpo\")\n\nImage.register_mime(MpoImageFile.format, \"image/mpo\")\n", "path": "src/PIL/MpoImagePlugin.py"}], "after_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# MPO file handling\n#\n# See \"Multi-Picture Format\" (CIPA DC-007-Translation 2009, Standard of the\n# Camera & Imaging Products Association)\n#\n# The multi-picture object combines multiple JPEG images (with a modified EXIF\n# data format) into a single file. While it can theoretically be used much like\n# a GIF animation, it is commonly used to represent 3D photographs and is (as\n# of this writing) the most commonly used format by 3D cameras.\n#\n# History:\n# 2014-03-13 Feneric Created\n#\n# See the README file for information on usage and redistribution.\n#\n\nfrom . import Image, ImageFile, JpegImagePlugin\nfrom ._binary import i16be as i16\n\n# __version__ is deprecated and will be removed in a future version. 
Use\n# PIL.__version__ instead.\n__version__ = \"0.1\"\n\n\ndef _accept(prefix):\n return JpegImagePlugin._accept(prefix)\n\n\ndef _save(im, fp, filename):\n # Note that we can only save the current frame at present\n return JpegImagePlugin._save(im, fp, filename)\n\n\n##\n# Image plugin for MPO images.\n\nclass MpoImageFile(JpegImagePlugin.JpegImageFile):\n\n format = \"MPO\"\n format_description = \"MPO (CIPA DC-007)\"\n _close_exclusive_fp_after_loading = False\n\n def _open(self):\n self.fp.seek(0) # prep the fp in order to pass the JPEG test\n JpegImagePlugin.JpegImageFile._open(self)\n self.mpinfo = self._getmp()\n self.__framecount = self.mpinfo[0xB001]\n self.__mpoffsets = [mpent['DataOffset'] + self.info['mpoffset']\n for mpent in self.mpinfo[0xB002]]\n self.__mpoffsets[0] = 0\n # Note that the following assertion will only be invalid if something\n # gets broken within JpegImagePlugin.\n assert self.__framecount == len(self.__mpoffsets)\n del self.info['mpoffset'] # no longer needed\n self.__fp = self.fp # FIXME: hack\n self.__fp.seek(self.__mpoffsets[0]) # get ready to read first frame\n self.__frame = 0\n self.offset = 0\n # for now we can only handle reading and individual frame extraction\n self.readonly = 1\n\n def load_seek(self, pos):\n self.__fp.seek(pos)\n\n @property\n def n_frames(self):\n return self.__framecount\n\n @property\n def is_animated(self):\n return self.__framecount > 1\n\n def seek(self, frame):\n if not self._seek_check(frame):\n return\n self.fp = self.__fp\n self.offset = self.__mpoffsets[frame]\n\n self.fp.seek(self.offset + 2) # skip SOI marker\n if \"parsed_exif\" in self.info:\n del self.info[\"parsed_exif\"]\n if i16(self.fp.read(2)) == 0xFFE1: # APP1\n n = i16(self.fp.read(2))-2\n self.info[\"exif\"] = ImageFile._safe_read(self.fp, n)\n\n exif = self._getexif()\n if 40962 in exif and 40963 in exif:\n self._size = (exif[40962], exif[40963])\n elif \"exif\" in self.info:\n del self.info[\"exif\"]\n\n self.tile = [\n (\"jpeg\", (0, 0) + self.size, self.offset, (self.mode, \"\"))\n ]\n self.__frame = frame\n\n def tell(self):\n return self.__frame\n\n def _close__fp(self):\n try:\n if self.__fp != self.fp:\n self.__fp.close()\n except AttributeError:\n pass\n finally:\n self.__fp = None\n\n\n# ---------------------------------------------------------------------\n# Registry stuff\n\n# Note that since MPO shares a factory with JPEG, we do not need to do a\n# separate registration for it here.\n# Image.register_open(MpoImageFile.format,\n# JpegImagePlugin.jpeg_factory, _accept)\nImage.register_save(MpoImageFile.format, _save)\n\nImage.register_extension(MpoImageFile.format, \".mpo\")\n\nImage.register_mime(MpoImageFile.format, \"image/mpo\")\n", "path": "src/PIL/MpoImagePlugin.py"}]} | 1,607 | 376 |
gh_patches_debug_58134 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-382 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Order of poll answer choices mixed up after saving
The order of poll answer choices is mixed up after saving. Restoring original order is not possible:

--- END ISSUE ---
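The symptom is consistent with the `Choice` model lacking a default ordering, so the database may return rows in arbitrary order. A minimal sketch of the usual Django remedy (model fields trimmed for brevity):

```python
from django.db import models


class Choice(models.Model):
    label = models.CharField(max_length=255)

    class Meta:
        ordering = ["id"]  # auto-increment pk preserves creation order
```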
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/polls/models.py`
Content:
```
1 from django.contrib.contenttypes.fields import GenericRelation
2 from django.db import models
3
4 from adhocracy4.comments import models as comment_models
5 from adhocracy4.models.base import UserGeneratedContentModel
6 from adhocracy4.modules import models as module_models
7
8 from . import validators
9
10
11 class Poll(module_models.Item):
12 comments = GenericRelation(comment_models.Comment,
13 related_query_name='poll',
14 object_id_field='object_pk')
15
16
17 class Question(models.Model):
18 label = models.CharField(max_length=255)
19 weight = models.SmallIntegerField()
20
21 poll = models.ForeignKey(
22 'Poll',
23 on_delete=models.CASCADE,
24 related_name='questions'
25 )
26
27 def user_choices_list(self, user):
28 if not user.is_authenticated():
29 return []
30
31 return self.choices\
32 .filter(votes__creator=user)\
33 .values_list('id', flat=True)
34
35 def __str__(self):
36 return self.label
37
38 class Meta:
39 ordering = ['weight']
40
41
42 class ChoiceQuerySet(models.QuerySet):
43
44 def annotate_vote_count(self):
45 return self.annotate(
46 vote_count=models.Count(
47 'votes'
48 )
49 )
50
51
52 class Choice(models.Model):
53 label = models.CharField(max_length=255)
54
55 question = models.ForeignKey(
56 'Question',
57 on_delete=models.CASCADE,
58 related_name='choices',
59 )
60
61 objects = ChoiceQuerySet.as_manager()
62
63 def __str__(self):
64 return '%s @%s' % (self.label, self.question)
65
66
67 class Vote(UserGeneratedContentModel):
68 choice = models.ForeignKey(
69 'Choice',
70 on_delete=models.CASCADE,
71 related_name='votes'
72 )
73
74 def validate_unique(self, exclude=None):
75 super(Vote, self).validate_unique(exclude)
76 validators.single_vote_per_user(self.creator,
77 self.choice.question,
78 self.pk)
79
80 # Make Vote instances behave like items for rule checking
81 @property
82 def module(self):
83 self.choice.question.poll.module
84
85 @property
86 def project(self):
87 return self.module.project
88
89 def __str__(self):
90 return '%s: %s' % (self.creator, self.choice)
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/polls/models.py b/apps/polls/models.py
--- a/apps/polls/models.py
+++ b/apps/polls/models.py
@@ -60,6 +60,9 @@
objects = ChoiceQuerySet.as_manager()
+ class Meta:
+ ordering = ['id']
+
def __str__(self):
return '%s @%s' % (self.label, self.question)
| {"golden_diff": "diff --git a/apps/polls/models.py b/apps/polls/models.py\n--- a/apps/polls/models.py\n+++ b/apps/polls/models.py\n@@ -60,6 +60,9 @@\n \n objects = ChoiceQuerySet.as_manager()\n \n+ class Meta:\n+ ordering = ['id']\n+\n def __str__(self):\n return '%s @%s' % (self.label, self.question)\n", "issue": "Order of poll answer choices mixed up after saving\nThe order of poll answer choices is mixed up after saving. Restoring original order is not possible:\r\n\r\n\n", "before_files": [{"content": "from django.contrib.contenttypes.fields import GenericRelation\nfrom django.db import models\n\nfrom adhocracy4.comments import models as comment_models\nfrom adhocracy4.models.base import UserGeneratedContentModel\nfrom adhocracy4.modules import models as module_models\n\nfrom . import validators\n\n\nclass Poll(module_models.Item):\n comments = GenericRelation(comment_models.Comment,\n related_query_name='poll',\n object_id_field='object_pk')\n\n\nclass Question(models.Model):\n label = models.CharField(max_length=255)\n weight = models.SmallIntegerField()\n\n poll = models.ForeignKey(\n 'Poll',\n on_delete=models.CASCADE,\n related_name='questions'\n )\n\n def user_choices_list(self, user):\n if not user.is_authenticated():\n return []\n\n return self.choices\\\n .filter(votes__creator=user)\\\n .values_list('id', flat=True)\n\n def __str__(self):\n return self.label\n\n class Meta:\n ordering = ['weight']\n\n\nclass ChoiceQuerySet(models.QuerySet):\n\n def annotate_vote_count(self):\n return self.annotate(\n vote_count=models.Count(\n 'votes'\n )\n )\n\n\nclass Choice(models.Model):\n label = models.CharField(max_length=255)\n\n question = models.ForeignKey(\n 'Question',\n on_delete=models.CASCADE,\n related_name='choices',\n )\n\n objects = ChoiceQuerySet.as_manager()\n\n def __str__(self):\n return '%s @%s' % (self.label, self.question)\n\n\nclass Vote(UserGeneratedContentModel):\n choice = models.ForeignKey(\n 'Choice',\n on_delete=models.CASCADE,\n related_name='votes'\n )\n\n def validate_unique(self, exclude=None):\n super(Vote, self).validate_unique(exclude)\n validators.single_vote_per_user(self.creator,\n self.choice.question,\n self.pk)\n\n # Make Vote instances behave like items for rule checking\n @property\n def module(self):\n self.choice.question.poll.module\n\n @property\n def project(self):\n return self.module.project\n\n def __str__(self):\n return '%s: %s' % (self.creator, self.choice)\n", "path": "apps/polls/models.py"}], "after_files": [{"content": "from django.contrib.contenttypes.fields import GenericRelation\nfrom django.db import models\n\nfrom adhocracy4.comments import models as comment_models\nfrom adhocracy4.models.base import UserGeneratedContentModel\nfrom adhocracy4.modules import models as module_models\n\nfrom . 
import validators\n\n\nclass Poll(module_models.Item):\n comments = GenericRelation(comment_models.Comment,\n related_query_name='poll',\n object_id_field='object_pk')\n\n\nclass Question(models.Model):\n label = models.CharField(max_length=255)\n weight = models.SmallIntegerField()\n\n poll = models.ForeignKey(\n 'Poll',\n on_delete=models.CASCADE,\n related_name='questions'\n )\n\n def user_choices_list(self, user):\n if not user.is_authenticated():\n return []\n\n return self.choices\\\n .filter(votes__creator=user)\\\n .values_list('id', flat=True)\n\n def __str__(self):\n return self.label\n\n class Meta:\n ordering = ['weight']\n\n\nclass ChoiceQuerySet(models.QuerySet):\n\n def annotate_vote_count(self):\n return self.annotate(\n vote_count=models.Count(\n 'votes'\n )\n )\n\n\nclass Choice(models.Model):\n label = models.CharField(max_length=255)\n\n question = models.ForeignKey(\n 'Question',\n on_delete=models.CASCADE,\n related_name='choices',\n )\n\n objects = ChoiceQuerySet.as_manager()\n\n class Meta:\n ordering = ['id']\n\n def __str__(self):\n return '%s @%s' % (self.label, self.question)\n\n\nclass Vote(UserGeneratedContentModel):\n choice = models.ForeignKey(\n 'Choice',\n on_delete=models.CASCADE,\n related_name='votes'\n )\n\n def validate_unique(self, exclude=None):\n super(Vote, self).validate_unique(exclude)\n validators.single_vote_per_user(self.creator,\n self.choice.question,\n self.pk)\n\n # Make Vote instances behave like items for rule checking\n @property\n def module(self):\n self.choice.question.poll.module\n\n @property\n def project(self):\n return self.module.project\n\n def __str__(self):\n return '%s: %s' % (self.creator, self.choice)\n", "path": "apps/polls/models.py"}]} | 1,021 | 91 |
gh_patches_debug_11332 | rasdani/github-patches | git_diff | certbot__certbot-5383 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Apache SSL cipher settings are old, no ChaCha20
The Nginx plugin's `options-ssl-nginx.conf` file uses Mozilla's current intermediate SSL cipher configuration.
The Apache plugin probably did too... except it hasn't been updated since 2014: 2faacc1b43786edd5386305f9cffec376b5a5d26
Should Certbot's Apache settings be updated?
The main difference is that the new configuration adds ChaCha20 cipher suites. (It also removes a few things.)
Should this wait until after further documentation/feature improvements in #4830?
If so, how about a stopgap patch to add ChaCha20 without removing anything?
Differences with OpenSSL 1.0.2 or 1.1.0:
* Adds ChaCha20. (Yay!)
* Adds newer 3DES cipher suites with key exchange/signature algorithms other than RSA/RSA. (Probably doesn't matter. Anything that supports ECDHE-ECDSA ought to support a better cipher than 3DES.) (Edit: Mozilla suggests that EDH-RSA 3DES is useful, though.)
* Removes AES-CCM. (OpenSSL 1.1. Probably only included inadvertently.)
* Removes Camellia. (Perfectly nice cipher, but everybody uses AES.)
* Removes some static DH and SRP key exchange cipher suites. (Probably only included by accident, mostly or entirely ignored by servers and clients.)
* Changes the order of some things.
--- END ISSUE ---
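Whichever cipher list is adopted, each revision of the options file has to be registered by its SHA-256 digest, since that is what `ALL_SSL_OPTIONS_HASHES` stores. A small illustrative helper (the file path is an assumption, not the plugin's actual constant):

```python
import hashlib


def sha256_of_file(path):
    """Hex SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


# Append the digest of the updated options file to ALL_SSL_OPTIONS_HASHES
# so Certbot recognizes copies it wrote earlier.
print(sha256_of_file("options-ssl-apache.conf"))  # path is illustrative
```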
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `certbot-apache/certbot_apache/constants.py`
Content:
```
1 """Apache plugin constants."""
2 import pkg_resources
3
4
5 MOD_SSL_CONF_DEST = "options-ssl-apache.conf"
6 """Name of the mod_ssl config file as saved in `IConfig.config_dir`."""
7
8
9 UPDATED_MOD_SSL_CONF_DIGEST = ".updated-options-ssl-apache-conf-digest.txt"
10 """Name of the hash of the updated or informed mod_ssl_conf as saved in `IConfig.config_dir`."""
11
12 ALL_SSL_OPTIONS_HASHES = [
13 '2086bca02db48daf93468332543c60ac6acdb6f0b58c7bfdf578a5d47092f82a',
14 '4844d36c9a0f587172d9fa10f4f1c9518e3bcfa1947379f155e16a70a728c21a',
15 '5a922826719981c0a234b1fbcd495f3213e49d2519e845ea0748ba513044b65b',
16 '4066b90268c03c9ba0201068eaa39abbc02acf9558bb45a788b630eb85dadf27',
17 'f175e2e7c673bd88d0aff8220735f385f916142c44aa83b09f1df88dd4767a88',
18 'cfdd7c18d2025836ea3307399f509cfb1ebf2612c87dd600a65da2a8e2f2797b',
19 ]
20 """SHA256 hashes of the contents of previous versions of all versions of MOD_SSL_CONF_SRC"""
21
22 AUGEAS_LENS_DIR = pkg_resources.resource_filename(
23 "certbot_apache", "augeas_lens")
24 """Path to the Augeas lens directory"""
25
26 REWRITE_HTTPS_ARGS = [
27 "^", "https://%{SERVER_NAME}%{REQUEST_URI}", "[L,NE,R=permanent]"]
28 """Apache version<2.3.9 rewrite rule arguments used for redirections to
29 https vhost"""
30
31 REWRITE_HTTPS_ARGS_WITH_END = [
32 "^", "https://%{SERVER_NAME}%{REQUEST_URI}", "[END,NE,R=permanent]"]
33 """Apache version >= 2.3.9 rewrite rule arguments used for redirections to
34 https vhost"""
35
36 OLD_REWRITE_HTTPS_ARGS = [
37 ["^", "https://%{SERVER_NAME}%{REQUEST_URI}", "[L,QSA,R=permanent]"],
38 ["^", "https://%{SERVER_NAME}%{REQUEST_URI}", "[END,QSA,R=permanent]"]]
39
40 HSTS_ARGS = ["always", "set", "Strict-Transport-Security",
41 "\"max-age=31536000\""]
42 """Apache header arguments for HSTS"""
43
44 UIR_ARGS = ["always", "set", "Content-Security-Policy",
45 "upgrade-insecure-requests"]
46
47 HEADER_ARGS = {"Strict-Transport-Security": HSTS_ARGS,
48 "Upgrade-Insecure-Requests": UIR_ARGS}
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/certbot-apache/certbot_apache/constants.py b/certbot-apache/certbot_apache/constants.py
--- a/certbot-apache/certbot_apache/constants.py
+++ b/certbot-apache/certbot_apache/constants.py
@@ -16,6 +16,8 @@
'4066b90268c03c9ba0201068eaa39abbc02acf9558bb45a788b630eb85dadf27',
'f175e2e7c673bd88d0aff8220735f385f916142c44aa83b09f1df88dd4767a88',
'cfdd7c18d2025836ea3307399f509cfb1ebf2612c87dd600a65da2a8e2f2797b',
+ '80720bd171ccdc2e6b917ded340defae66919e4624962396b992b7218a561791',
+ 'c0c022ea6b8a51ecc8f1003d0a04af6c3f2bc1c3ce506b3c2dfc1f11ef931082',
]
"""SHA256 hashes of the contents of previous versions of all versions of MOD_SSL_CONF_SRC"""
| {"golden_diff": "diff --git a/certbot-apache/certbot_apache/constants.py b/certbot-apache/certbot_apache/constants.py\n--- a/certbot-apache/certbot_apache/constants.py\n+++ b/certbot-apache/certbot_apache/constants.py\n@@ -16,6 +16,8 @@\n '4066b90268c03c9ba0201068eaa39abbc02acf9558bb45a788b630eb85dadf27',\n 'f175e2e7c673bd88d0aff8220735f385f916142c44aa83b09f1df88dd4767a88',\n 'cfdd7c18d2025836ea3307399f509cfb1ebf2612c87dd600a65da2a8e2f2797b',\n+ '80720bd171ccdc2e6b917ded340defae66919e4624962396b992b7218a561791',\n+ 'c0c022ea6b8a51ecc8f1003d0a04af6c3f2bc1c3ce506b3c2dfc1f11ef931082',\n ]\n \"\"\"SHA256 hashes of the contents of previous versions of all versions of MOD_SSL_CONF_SRC\"\"\"\n", "issue": "Apache SSL cipher settings are old, no ChaCha20\nThe Nginx plugin's `options-ssl-nginx.conf` file uses Mozilla's current intermediate SSL cipher configuration.\r\n\r\nThe Apache plugin probably did too... except it hasn't been updated since 2014: 2faacc1b43786edd5386305f9cffec376b5a5d26\r\n\r\nShould Certbot's Apache settings be updated?\r\n\r\nThe main difference is that the new configuration adds ChaCha20 cipher suites. (It also removes a few things.)\r\n\r\nShould this wait until after further documentation/feature improvements in #4830?\r\n\r\nIf so, how about a stopgap patch to add ChaCha20 without removing anything?\r\n\r\nDifferences with OpenSSL 1.0.2 or 1.1.0:\r\n\r\n* Adds ChaCha20. (Yay!)\r\n* Adds newer 3DES cipher suites with key exchange/signature algorithms other than RSA/RSA. (Probably doesn't matter. Anything that supports ECDHE-ECDSA ought to support a better cipher than 3DES. ) (Edit: Mozilla suggests that EDH-RSA 3DES is useful, though,.)\r\n* Removes AES-CCM. (OpenSSL 1.1. Probably only included inadvertently.)\r\n* Removes Camellia. (Perfectly nice cipher, but everybody uses AES.)\r\n* Removes some static DH and SRP key exchange cipher suites. 
(Probably only included by accident, mostly or entirely ignored by servers and clients.)\r\n* Changes the order of some things.\n", "before_files": [{"content": "\"\"\"Apache plugin constants.\"\"\"\nimport pkg_resources\n\n\nMOD_SSL_CONF_DEST = \"options-ssl-apache.conf\"\n\"\"\"Name of the mod_ssl config file as saved in `IConfig.config_dir`.\"\"\"\n\n\nUPDATED_MOD_SSL_CONF_DIGEST = \".updated-options-ssl-apache-conf-digest.txt\"\n\"\"\"Name of the hash of the updated or informed mod_ssl_conf as saved in `IConfig.config_dir`.\"\"\"\n\nALL_SSL_OPTIONS_HASHES = [\n '2086bca02db48daf93468332543c60ac6acdb6f0b58c7bfdf578a5d47092f82a',\n '4844d36c9a0f587172d9fa10f4f1c9518e3bcfa1947379f155e16a70a728c21a',\n '5a922826719981c0a234b1fbcd495f3213e49d2519e845ea0748ba513044b65b',\n '4066b90268c03c9ba0201068eaa39abbc02acf9558bb45a788b630eb85dadf27',\n 'f175e2e7c673bd88d0aff8220735f385f916142c44aa83b09f1df88dd4767a88',\n 'cfdd7c18d2025836ea3307399f509cfb1ebf2612c87dd600a65da2a8e2f2797b',\n]\n\"\"\"SHA256 hashes of the contents of previous versions of all versions of MOD_SSL_CONF_SRC\"\"\"\n\nAUGEAS_LENS_DIR = pkg_resources.resource_filename(\n \"certbot_apache\", \"augeas_lens\")\n\"\"\"Path to the Augeas lens directory\"\"\"\n\nREWRITE_HTTPS_ARGS = [\n \"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[L,NE,R=permanent]\"]\n\"\"\"Apache version<2.3.9 rewrite rule arguments used for redirections to\nhttps vhost\"\"\"\n\nREWRITE_HTTPS_ARGS_WITH_END = [\n \"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[END,NE,R=permanent]\"]\n\"\"\"Apache version >= 2.3.9 rewrite rule arguments used for redirections to\n https vhost\"\"\"\n\nOLD_REWRITE_HTTPS_ARGS = [\n [\"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[L,QSA,R=permanent]\"],\n [\"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[END,QSA,R=permanent]\"]]\n\nHSTS_ARGS = [\"always\", \"set\", \"Strict-Transport-Security\",\n \"\\\"max-age=31536000\\\"\"]\n\"\"\"Apache header arguments for HSTS\"\"\"\n\nUIR_ARGS = [\"always\", \"set\", \"Content-Security-Policy\",\n \"upgrade-insecure-requests\"]\n\nHEADER_ARGS = {\"Strict-Transport-Security\": HSTS_ARGS,\n \"Upgrade-Insecure-Requests\": UIR_ARGS}\n", "path": "certbot-apache/certbot_apache/constants.py"}], "after_files": [{"content": "\"\"\"Apache plugin constants.\"\"\"\nimport pkg_resources\n\n\nMOD_SSL_CONF_DEST = \"options-ssl-apache.conf\"\n\"\"\"Name of the mod_ssl config file as saved in `IConfig.config_dir`.\"\"\"\n\n\nUPDATED_MOD_SSL_CONF_DIGEST = \".updated-options-ssl-apache-conf-digest.txt\"\n\"\"\"Name of the hash of the updated or informed mod_ssl_conf as saved in `IConfig.config_dir`.\"\"\"\n\nALL_SSL_OPTIONS_HASHES = [\n '2086bca02db48daf93468332543c60ac6acdb6f0b58c7bfdf578a5d47092f82a',\n '4844d36c9a0f587172d9fa10f4f1c9518e3bcfa1947379f155e16a70a728c21a',\n '5a922826719981c0a234b1fbcd495f3213e49d2519e845ea0748ba513044b65b',\n '4066b90268c03c9ba0201068eaa39abbc02acf9558bb45a788b630eb85dadf27',\n 'f175e2e7c673bd88d0aff8220735f385f916142c44aa83b09f1df88dd4767a88',\n 'cfdd7c18d2025836ea3307399f509cfb1ebf2612c87dd600a65da2a8e2f2797b',\n '80720bd171ccdc2e6b917ded340defae66919e4624962396b992b7218a561791',\n 'c0c022ea6b8a51ecc8f1003d0a04af6c3f2bc1c3ce506b3c2dfc1f11ef931082',\n]\n\"\"\"SHA256 hashes of the contents of previous versions of all versions of MOD_SSL_CONF_SRC\"\"\"\n\nAUGEAS_LENS_DIR = pkg_resources.resource_filename(\n \"certbot_apache\", \"augeas_lens\")\n\"\"\"Path to the Augeas lens directory\"\"\"\n\nREWRITE_HTTPS_ARGS = [\n \"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", 
\"[L,NE,R=permanent]\"]\n\"\"\"Apache version<2.3.9 rewrite rule arguments used for redirections to\nhttps vhost\"\"\"\n\nREWRITE_HTTPS_ARGS_WITH_END = [\n \"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[END,NE,R=permanent]\"]\n\"\"\"Apache version >= 2.3.9 rewrite rule arguments used for redirections to\n https vhost\"\"\"\n\nOLD_REWRITE_HTTPS_ARGS = [\n [\"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[L,QSA,R=permanent]\"],\n [\"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[END,QSA,R=permanent]\"]]\n\nHSTS_ARGS = [\"always\", \"set\", \"Strict-Transport-Security\",\n \"\\\"max-age=31536000\\\"\"]\n\"\"\"Apache header arguments for HSTS\"\"\"\n\nUIR_ARGS = [\"always\", \"set\", \"Content-Security-Policy\",\n \"upgrade-insecure-requests\"]\n\nHEADER_ARGS = {\"Strict-Transport-Security\": HSTS_ARGS,\n \"Upgrade-Insecure-Requests\": UIR_ARGS}\n", "path": "certbot-apache/certbot_apache/constants.py"}]} | 1,465 | 392 |
gh_patches_debug_12635 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-3113 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'set_logger_provider' no longer importable from 'opentelemetry.sdk._logs'
A week ago I was able to run the example for logging in https://github.com/open-telemetry/opentelemetry-python/tree/main/docs/examples/logs.
**Steps to reproduce**
I install either version via:
```bash
pip install opentelemetry-exporter-otlp==1.15.0
```
or
```bash
pip install opentelemetry-exporter-otlp==1.14.0
```
Then run the following code:
```python
from opentelemetry.sdk._logs import set_logger_provider
```
**What is the expected behavior?**
Should import without any issues.
**What is the actual behavior?**
```python
ImportError: cannot import name 'set_logger_provider' from 'opentelemetry.sdk._logs' (/usr/local/lib/python3.10/dist-packages/opentelemetry/sdk/_logs/__init__.py)
```
**Additional context**
I found that when I installed `opentelemetry-exporter-otlp==1.14.0`, `opentelemetry-sdk==1.15.0` was installed as well. When I installed `opentelemetry-sdk==1.14.0`, everything works fine again.
--- END ISSUE ---
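Given the relocation described above (and confirmed by the patch below), a version-tolerant import is a reasonable stopgap; this sketch assumes only the two historical locations:

```python
try:
    # opentelemetry >= 1.15: the setter lives in the API package
    from opentelemetry._logs import set_logger_provider
except ImportError:
    # opentelemetry-sdk <= 1.14 exported it from the SDK package
    from opentelemetry.sdk._logs import set_logger_provider
```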
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/examples/logs/example.py`
Content:
```
1 import logging
2
3 from opentelemetry import trace
4 from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (
5 OTLPLogExporter,
6 )
7 from opentelemetry.sdk._logs import (
8 LoggerProvider,
9 LoggingHandler,
10 set_logger_provider,
11 )
12 from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
13 from opentelemetry.sdk.resources import Resource
14 from opentelemetry.sdk.trace import TracerProvider
15 from opentelemetry.sdk.trace.export import (
16 BatchSpanProcessor,
17 ConsoleSpanExporter,
18 )
19
20 trace.set_tracer_provider(TracerProvider())
21 trace.get_tracer_provider().add_span_processor(
22 BatchSpanProcessor(ConsoleSpanExporter())
23 )
24
25 logger_provider = LoggerProvider(
26 resource=Resource.create(
27 {
28 "service.name": "shoppingcart",
29 "service.instance.id": "instance-12",
30 }
31 ),
32 )
33 set_logger_provider(logger_provider)
34
35 exporter = OTLPLogExporter(insecure=True)
36 logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))
37 handler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)
38
39 # Attach OTLP handler to root logger
40 logging.getLogger().addHandler(handler)
41
42 # Log directly
43 logging.info("Jackdaws love my big sphinx of quartz.")
44
45 # Create different namespaced loggers
46 logger1 = logging.getLogger("myapp.area1")
47 logger2 = logging.getLogger("myapp.area2")
48
49 logger1.debug("Quick zephyrs blow, vexing daft Jim.")
50 logger1.info("How quickly daft jumping zebras vex.")
51 logger2.warning("Jail zesty vixen who grabbed pay from quack.")
52 logger2.error("The five boxing wizards jump quickly.")
53
54
55 # Trace context correlation
56 tracer = trace.get_tracer(__name__)
57 with tracer.start_as_current_span("foo"):
58 # Do something
59 logger2.error("Hyderabad, we have a major problem.")
60
61 logger_provider.shutdown()
62
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/examples/logs/example.py b/docs/examples/logs/example.py
--- a/docs/examples/logs/example.py
+++ b/docs/examples/logs/example.py
@@ -1,14 +1,11 @@
import logging
from opentelemetry import trace
+from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (
OTLPLogExporter,
)
-from opentelemetry.sdk._logs import (
- LoggerProvider,
- LoggingHandler,
- set_logger_provider,
-)
+from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
| {"golden_diff": "diff --git a/docs/examples/logs/example.py b/docs/examples/logs/example.py\n--- a/docs/examples/logs/example.py\n+++ b/docs/examples/logs/example.py\n@@ -1,14 +1,11 @@\n import logging\n \n from opentelemetry import trace\n+from opentelemetry._logs import set_logger_provider\n from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (\n OTLPLogExporter,\n )\n-from opentelemetry.sdk._logs import (\n- LoggerProvider,\n- LoggingHandler,\n- set_logger_provider,\n-)\n+from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler\n from opentelemetry.sdk._logs.export import BatchLogRecordProcessor\n from opentelemetry.sdk.resources import Resource\n from opentelemetry.sdk.trace import TracerProvider\n", "issue": "'set_logger_provider' no longer importable from 'opentelemetry.sdk._logs'\nA week ago I was able to run the example for logging in https://github.com/open-telemetry/opentelemetry-python/tree/main/docs/examples/logs.\r\n\r\n**Steps to reproduce**\r\nI install either version via:\r\n```bash\r\npip install opentelemetry-exporter-otlp==1.15.0\r\n```\r\nor\r\n```bash\r\npip install opentelemetry-exporter-otlp==1.14.0\r\n```\r\nThen run the following code:\r\n```python\r\nfrom opentelemetry.sdk._logs import set_logger_provider\r\n```\r\n\r\n**What is the expected behavior?**\r\nShould import without any issues.\r\n\r\n**What is the actual behavior?**\r\n```python\r\nImportError: cannot import name 'set_logger_provider' from 'opentelemetry.sdk._logs' (/usr/local/lib/python3.10/dist-packages/opentelemetry/sdk/_logs/__init__.py)\r\n```\r\n\r\n\r\n**Additional context**\r\nI found that when I installed `opentelemetry-exporter-otlp==1.14.0`, `opentelemetry-sdk==1.15.0` was installed as well. When I installed `opentelemetry-sdk==1.14.0`, everything works fine again.\r\n\n", "before_files": [{"content": "import logging\n\nfrom opentelemetry import trace\nfrom opentelemetry.exporter.otlp.proto.grpc._log_exporter import (\n OTLPLogExporter,\n)\nfrom opentelemetry.sdk._logs import (\n LoggerProvider,\n LoggingHandler,\n set_logger_provider,\n)\nfrom opentelemetry.sdk._logs.export import BatchLogRecordProcessor\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import (\n BatchSpanProcessor,\n ConsoleSpanExporter,\n)\n\ntrace.set_tracer_provider(TracerProvider())\ntrace.get_tracer_provider().add_span_processor(\n BatchSpanProcessor(ConsoleSpanExporter())\n)\n\nlogger_provider = LoggerProvider(\n resource=Resource.create(\n {\n \"service.name\": \"shoppingcart\",\n \"service.instance.id\": \"instance-12\",\n }\n ),\n)\nset_logger_provider(logger_provider)\n\nexporter = OTLPLogExporter(insecure=True)\nlogger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))\nhandler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)\n\n# Attach OTLP handler to root logger\nlogging.getLogger().addHandler(handler)\n\n# Log directly\nlogging.info(\"Jackdaws love my big sphinx of quartz.\")\n\n# Create different namespaced loggers\nlogger1 = logging.getLogger(\"myapp.area1\")\nlogger2 = logging.getLogger(\"myapp.area2\")\n\nlogger1.debug(\"Quick zephyrs blow, vexing daft Jim.\")\nlogger1.info(\"How quickly daft jumping zebras vex.\")\nlogger2.warning(\"Jail zesty vixen who grabbed pay from quack.\")\nlogger2.error(\"The five boxing wizards jump quickly.\")\n\n\n# Trace context correlation\ntracer = trace.get_tracer(__name__)\nwith 
tracer.start_as_current_span(\"foo\"):\n # Do something\n logger2.error(\"Hyderabad, we have a major problem.\")\n\nlogger_provider.shutdown()\n", "path": "docs/examples/logs/example.py"}], "after_files": [{"content": "import logging\n\nfrom opentelemetry import trace\nfrom opentelemetry._logs import set_logger_provider\nfrom opentelemetry.exporter.otlp.proto.grpc._log_exporter import (\n OTLPLogExporter,\n)\nfrom opentelemetry.sdk._logs import LoggerProvider, LoggingHandler\nfrom opentelemetry.sdk._logs.export import BatchLogRecordProcessor\nfrom opentelemetry.sdk.resources import Resource\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import (\n BatchSpanProcessor,\n ConsoleSpanExporter,\n)\n\ntrace.set_tracer_provider(TracerProvider())\ntrace.get_tracer_provider().add_span_processor(\n BatchSpanProcessor(ConsoleSpanExporter())\n)\n\nlogger_provider = LoggerProvider(\n resource=Resource.create(\n {\n \"service.name\": \"shoppingcart\",\n \"service.instance.id\": \"instance-12\",\n }\n ),\n)\nset_logger_provider(logger_provider)\n\nexporter = OTLPLogExporter(insecure=True)\nlogger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))\nhandler = LoggingHandler(level=logging.NOTSET, logger_provider=logger_provider)\n\n# Attach OTLP handler to root logger\nlogging.getLogger().addHandler(handler)\n\n# Log directly\nlogging.info(\"Jackdaws love my big sphinx of quartz.\")\n\n# Create different namespaced loggers\nlogger1 = logging.getLogger(\"myapp.area1\")\nlogger2 = logging.getLogger(\"myapp.area2\")\n\nlogger1.debug(\"Quick zephyrs blow, vexing daft Jim.\")\nlogger1.info(\"How quickly daft jumping zebras vex.\")\nlogger2.warning(\"Jail zesty vixen who grabbed pay from quack.\")\nlogger2.error(\"The five boxing wizards jump quickly.\")\n\n\n# Trace context correlation\ntracer = trace.get_tracer(__name__)\nwith tracer.start_as_current_span(\"foo\"):\n # Do something\n logger2.error(\"Hyderabad, we have a major problem.\")\n\nlogger_provider.shutdown()\n", "path": "docs/examples/logs/example.py"}]} | 1,046 | 165 |
gh_patches_debug_31186 | rasdani/github-patches | git_diff | facebookresearch__habitat-lab-272 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix contains method in habitat space classes
## 🐛 Bug
The `contains` method for the `EmptySpace`, `ActionSpace`, and `ListSpace` classes in `habitat.core.spaces` all contain errors. Specifically:
1. `EmptySpace.contains(x)` should return `True` if `x is None`.
2. `ActionSpace.contains(x)` improperly handles invalid keys and invalid `x["action"]` values.
3. `ListSpace.contains(x)` improperly checks the length of `x`.
## Command
`EmptySpace.contains()`
`ActionSpace.contains()`
`ListSpace.contains()`
## To Reproduce
Code to reproduce the behavior:
```python
>>> import gym
>>> from habitat.core.spaces import EmptySpace, ActionSpace, ListSpace
>>>
>>> space = EmptySpace()
>>> print(space.contains(None)) # should be True
False
>>>
>>> space = ActionSpace({
... "move": gym.spaces.Dict({
... "position": gym.spaces.Discrete(2),
... "velocity": gym.spaces.Discrete(3)
... }),
... "move_forward": EmptySpace(),
... })
>>> space.contains({'action': 'move'}) # should be False
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/habitat/habitat-api/habitat/core/spaces.py", line 64, in contains
if not self.spaces[x["action"]].contains(x["action_args"]):
KeyError: 'action_args'
>>>
>>> space.contains({'action': None}) # should be False
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/habitat/habitat-api/habitat/core/spaces.py", line 64, in contains
if not self.spaces[x["action"]].contains(x["action_args"]):
KeyError: None
>>>
>>> space = ListSpace(gym.spaces.Discrete(2), 5, 10)
>>> print(space.contains([0, 1, 0, 1])) # should be True
False
```
## Expected behavior
The `contains` methods should properly return `True`, `False`, `False`, and `True` in the four cases above, respectively.
## Additional context
There is also an error in the example code in the docs for the `ActionSpace` class.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `habitat/core/spaces.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7 from collections import OrderedDict
8 from typing import Sized
9
10 import gym
11 from gym import Space
12
13
14 class EmptySpace(Space):
15 """
16 A ``gym.Space`` that reflects arguments space for action that doesn't have
17 arguments. Needed for consistency ang always samples `None` value.
18 """
19
20 def sample(self):
21 return None
22
23 def contains(self, x):
24 return False
25
26
27 class ActionSpace(gym.spaces.Dict):
28 """
29 A dictionary of ``EmbodiedTask`` actions and their argument spaces.
30
31 .. code:: py
32
33 self.observation_space = spaces.ActionSpace(
34 "move": spaces.Dict({
35 "position": spaces.Discrete(2),
36 "velocity": spaces.Discrete(3)
37 },
38 "move_forward": EmptySpace,
39 )
40 )
41 """
42
43 def __init__(self, spaces):
44 if isinstance(spaces, dict):
45 self.spaces = OrderedDict(sorted(list(spaces.items())))
46 if isinstance(spaces, list):
47 self.spaces = OrderedDict(spaces)
48 self.actions_select = gym.spaces.Discrete(len(self.spaces))
49
50 @property
51 def n(self):
52 return len(self.spaces)
53
54 def sample(self):
55 action_index = self.actions_select.sample()
56 return {
57 "action": list(self.spaces.keys())[action_index],
58 "action_args": list(self.spaces.values())[action_index].sample(),
59 }
60
61 def contains(self, x):
62 if not isinstance(x, dict) and {"action", "action_args"} not in x:
63 return False
64 if not self.spaces[x["action"]].contains(x["action_args"]):
65 return False
66 return True
67
68 def __repr__(self):
69 return (
70 "ActionSpace("
71 + ", ".join([k + ":" + str(s) for k, s in self.spaces.items()])
72 + ")"
73 )
74
75
76 class ListSpace(Space):
77 """
78 A ``gym.Space`` that describes a list of other Space. Used to describe
79 list of tokens ids, vectors and etc.
80
81 .. code:: py
82
83 observation_space = ListSpace(spaces.Discrete(
84 dataset.question_vocab.get_size()))
85 """
86
87 def __init__(self, space, min_seq_length=0, max_seq_length=1 << 15):
88 self.min_seq_length = min_seq_length
89 self.max_seq_length = max_seq_length
90 self.space = space
91 self.length_select = gym.spaces.Discrete(
92 max_seq_length - min_seq_length
93 )
94
95 def sample(self):
96 seq_length = self.length_select.sample() + self.min_seq_length
97 return [self.space.sample() for _ in range(seq_length)]
98
99 def contains(self, x):
100 if not isinstance(x, Sized):
101 return False
102
103 if self.min_seq_length <= len(x) <= self.max_seq_length:
104 return False
105
106 return all([self.space.contains(el) for el in x])
107
108 def __repr__(self):
109 return (
110 f"ListSpace({self.space}, min_seq_length="
111 f"{self.min_seq_length}, max_seq_length={self.max_seq_length})"
112 )
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/habitat/core/spaces.py b/habitat/core/spaces.py
--- a/habitat/core/spaces.py
+++ b/habitat/core/spaces.py
@@ -21,8 +21,13 @@
return None
def contains(self, x):
+ if x is None:
+ return True
return False
+ def __repr__(self):
+ return "EmptySpace()"
+
class ActionSpace(gym.spaces.Dict):
"""
@@ -30,14 +35,13 @@
.. code:: py
- self.observation_space = spaces.ActionSpace(
+ self.observation_space = spaces.ActionSpace({
"move": spaces.Dict({
"position": spaces.Discrete(2),
"velocity": spaces.Discrete(3)
- },
- "move_forward": EmptySpace,
- )
- )
+ }),
+ "move_forward": EmptySpace(),
+ })
"""
def __init__(self, spaces):
@@ -59,9 +63,11 @@
}
def contains(self, x):
- if not isinstance(x, dict) and {"action", "action_args"} not in x:
+ if not isinstance(x, dict) or "action" not in x:
+ return False
+ if x["action"] not in self.spaces:
return False
- if not self.spaces[x["action"]].contains(x["action_args"]):
+ if not self.spaces[x["action"]].contains(x.get("action_args", None)):
return False
return True
@@ -100,7 +106,7 @@
if not isinstance(x, Sized):
return False
- if self.min_seq_length <= len(x) <= self.max_seq_length:
+ if not (self.min_seq_length <= len(x) <= self.max_seq_length):
return False
return all([self.space.contains(el) for el in x])
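
For reference, a quick check of the patched behavior, as a minimal runnable sketch: it assumes the patched `habitat.core.spaces` and a compatible `gym` are installed, and it mirrors the cases from the issue (with the list lengths adjusted to sit inside and outside the `[5, 10]` bound):

```python
import gym

from habitat.core.spaces import ActionSpace, EmptySpace, ListSpace

# EmptySpace now accepts None, the only value it ever samples.
assert EmptySpace().contains(None)

space = ActionSpace({
    "move": gym.spaces.Dict({
        "position": gym.spaces.Discrete(2),
        "velocity": gym.spaces.Discrete(3),
    }),
    "move_forward": EmptySpace(),
})
# Missing args and unknown action names now return False instead of raising KeyError.
assert not space.contains({"action": "move"})        # "action_args" absent
assert not space.contains({"action": None})          # unknown action name
assert space.contains({"action": "move_forward"})    # EmptySpace takes no args

# ListSpace keeps items whose length falls within [min_seq_length, max_seq_length].
seq_space = ListSpace(gym.spaces.Discrete(2), 5, 10)
assert seq_space.contains([0, 1, 0, 1, 1, 0])   # length 6: accepted
assert not seq_space.contains([0, 1, 0])        # length 3: rejected
```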
| {"golden_diff": "diff --git a/habitat/core/spaces.py b/habitat/core/spaces.py\n--- a/habitat/core/spaces.py\n+++ b/habitat/core/spaces.py\n@@ -21,8 +21,13 @@\n return None\n \n def contains(self, x):\n+ if x is None:\n+ return True\n return False\n \n+ def __repr__(self):\n+ return \"EmptySpace()\"\n+\n \n class ActionSpace(gym.spaces.Dict):\n \"\"\"\n@@ -30,14 +35,13 @@\n \n .. code:: py\n \n- self.observation_space = spaces.ActionSpace(\n+ self.observation_space = spaces.ActionSpace({\n \"move\": spaces.Dict({\n \"position\": spaces.Discrete(2),\n \"velocity\": spaces.Discrete(3)\n- },\n- \"move_forward\": EmptySpace,\n- )\n- )\n+ }),\n+ \"move_forward\": EmptySpace(),\n+ })\n \"\"\"\n \n def __init__(self, spaces):\n@@ -59,9 +63,11 @@\n }\n \n def contains(self, x):\n- if not isinstance(x, dict) and {\"action\", \"action_args\"} not in x:\n+ if not isinstance(x, dict) or \"action\" not in x:\n+ return False\n+ if x[\"action\"] not in self.spaces:\n return False\n- if not self.spaces[x[\"action\"]].contains(x[\"action_args\"]):\n+ if not self.spaces[x[\"action\"]].contains(x.get(\"action_args\", None)):\n return False\n return True\n \n@@ -100,7 +106,7 @@\n if not isinstance(x, Sized):\n return False\n \n- if self.min_seq_length <= len(x) <= self.max_seq_length:\n+ if not (self.min_seq_length <= len(x) <= self.max_seq_length):\n return False\n \n return all([self.space.contains(el) for el in x])\n", "issue": "Fix contains method in habitat space classes\n## \ud83d\udc1b Bug\r\n\r\nThe `contains` method for the `EmptySpace`, `ActionSpace`, and `ListSpace` classes in `habitat.core.spaces` all contain errors. Specifically:\r\n1. `EmptySpace.contains(x)` should return `True` if `x is None`.\r\n2. `ActionSpace.contains(x)` improperly handles invalid keys and invalid `x[\"action\"]` values.\r\n3. `ListSpace.contains(x)` improperly checks the length of `x`.\r\n\r\n## Command\r\n\r\n`EmptySpace.contains()`\r\n`ActionSpace.contains()`\r\n`ListSpace.contains()`\r\n\r\n## To Reproduce\r\n\r\nCode to reproduce the behavior:\r\n\r\n```python\r\n>>> import gym\r\n>>> from habitat.core.spaces import EmptySpace, ActionSpace, ListSpace\r\n>>>\r\n>>> space = EmptySpace()\r\n>>> print(space.contains(None)) # should be True\r\nFalse\r\n>>>\r\n>>> space = ActionSpace({\r\n... \"move\": gym.spaces.Dict({\r\n... \"position\": gym.spaces.Discrete(2),\r\n... \"velocity\": gym.spaces.Discrete(3)\r\n... }),\r\n... \"move_forward\": EmptySpace(),\r\n... 
})\r\n>>> space.contains({'action': 'move'}) # should be False\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/mnt/habitat/habitat-api/habitat/core/spaces.py\", line 64, in contains\r\n if not self.spaces[x[\"action\"]].contains(x[\"action_args\"]):\r\nKeyError: 'action_args'\r\n>>>\r\n>>> space.contains({'action': None}) # should be false\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/mnt/habitat/habitat-api/habitat/core/spaces.py\", line 64, in contains\r\n if not self.spaces[x[\"action\"]].contains(x[\"action_args\"]):\r\nKeyError: None\r\n>>>\r\n>>> space = ListSpace(gym.spaces.Discrete(2), 5, 10)\r\n>>> print(space.contains([0, 1, 0, 1])) # should be True\r\nFalse\r\n```\r\n\r\n## Expected behavior\r\n\r\nThe `contains` methods should properly return `True`, `False`, `False`, and `True` in the four cases above, respectively.\r\n\r\n## Additional context\r\n\r\nThere is also an error in example code in the docs for the `ActionSpace` class.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom collections import OrderedDict\nfrom typing import Sized\n\nimport gym\nfrom gym import Space\n\n\nclass EmptySpace(Space):\n \"\"\"\n A ``gym.Space`` that reflects arguments space for action that doesn't have\n arguments. Needed for consistency ang always samples `None` value.\n \"\"\"\n\n def sample(self):\n return None\n\n def contains(self, x):\n return False\n\n\nclass ActionSpace(gym.spaces.Dict):\n \"\"\"\n A dictionary of ``EmbodiedTask`` actions and their argument spaces.\n\n .. code:: py\n\n self.observation_space = spaces.ActionSpace(\n \"move\": spaces.Dict({\n \"position\": spaces.Discrete(2),\n \"velocity\": spaces.Discrete(3)\n },\n \"move_forward\": EmptySpace,\n )\n )\n \"\"\"\n\n def __init__(self, spaces):\n if isinstance(spaces, dict):\n self.spaces = OrderedDict(sorted(list(spaces.items())))\n if isinstance(spaces, list):\n self.spaces = OrderedDict(spaces)\n self.actions_select = gym.spaces.Discrete(len(self.spaces))\n\n @property\n def n(self):\n return len(self.spaces)\n\n def sample(self):\n action_index = self.actions_select.sample()\n return {\n \"action\": list(self.spaces.keys())[action_index],\n \"action_args\": list(self.spaces.values())[action_index].sample(),\n }\n\n def contains(self, x):\n if not isinstance(x, dict) and {\"action\", \"action_args\"} not in x:\n return False\n if not self.spaces[x[\"action\"]].contains(x[\"action_args\"]):\n return False\n return True\n\n def __repr__(self):\n return (\n \"ActionSpace(\"\n + \", \".join([k + \":\" + str(s) for k, s in self.spaces.items()])\n + \")\"\n )\n\n\nclass ListSpace(Space):\n \"\"\"\n A ``gym.Space`` that describes a list of other Space. Used to describe\n list of tokens ids, vectors and etc.\n\n .. 
code:: py\n\n observation_space = ListSpace(spaces.Discrete(\n dataset.question_vocab.get_size()))\n \"\"\"\n\n def __init__(self, space, min_seq_length=0, max_seq_length=1 << 15):\n self.min_seq_length = min_seq_length\n self.max_seq_length = max_seq_length\n self.space = space\n self.length_select = gym.spaces.Discrete(\n max_seq_length - min_seq_length\n )\n\n def sample(self):\n seq_length = self.length_select.sample() + self.min_seq_length\n return [self.space.sample() for _ in range(seq_length)]\n\n def contains(self, x):\n if not isinstance(x, Sized):\n return False\n\n if self.min_seq_length <= len(x) <= self.max_seq_length:\n return False\n\n return all([self.space.contains(el) for el in x])\n\n def __repr__(self):\n return (\n f\"ListSpace({self.space}, min_seq_length=\"\n f\"{self.min_seq_length}, max_seq_length={self.max_seq_length})\"\n )\n", "path": "habitat/core/spaces.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom collections import OrderedDict\nfrom typing import Sized\n\nimport gym\nfrom gym import Space\n\n\nclass EmptySpace(Space):\n \"\"\"\n A ``gym.Space`` that reflects arguments space for action that doesn't have\n arguments. Needed for consistency ang always samples `None` value.\n \"\"\"\n\n def sample(self):\n return None\n\n def contains(self, x):\n if x is None:\n return True\n return False\n\n def __repr__(self):\n return \"EmptySpace()\"\n\n\nclass ActionSpace(gym.spaces.Dict):\n \"\"\"\n A dictionary of ``EmbodiedTask`` actions and their argument spaces.\n\n .. code:: py\n\n self.observation_space = spaces.ActionSpace({\n \"move\": spaces.Dict({\n \"position\": spaces.Discrete(2),\n \"velocity\": spaces.Discrete(3)\n }),\n \"move_forward\": EmptySpace(),\n })\n \"\"\"\n\n def __init__(self, spaces):\n if isinstance(spaces, dict):\n self.spaces = OrderedDict(sorted(list(spaces.items())))\n if isinstance(spaces, list):\n self.spaces = OrderedDict(spaces)\n self.actions_select = gym.spaces.Discrete(len(self.spaces))\n\n @property\n def n(self):\n return len(self.spaces)\n\n def sample(self):\n action_index = self.actions_select.sample()\n return {\n \"action\": list(self.spaces.keys())[action_index],\n \"action_args\": list(self.spaces.values())[action_index].sample(),\n }\n\n def contains(self, x):\n if not isinstance(x, dict) or \"action\" not in x:\n return False\n if x[\"action\"] not in self.spaces:\n return False\n if not self.spaces[x[\"action\"]].contains(x.get(\"action_args\", None)):\n return False\n return True\n\n def __repr__(self):\n return (\n \"ActionSpace(\"\n + \", \".join([k + \":\" + str(s) for k, s in self.spaces.items()])\n + \")\"\n )\n\n\nclass ListSpace(Space):\n \"\"\"\n A ``gym.Space`` that describes a list of other Space. Used to describe\n list of tokens ids, vectors and etc.\n\n .. 
code:: py\n\n observation_space = ListSpace(spaces.Discrete(\n dataset.question_vocab.get_size()))\n \"\"\"\n\n def __init__(self, space, min_seq_length=0, max_seq_length=1 << 15):\n self.min_seq_length = min_seq_length\n self.max_seq_length = max_seq_length\n self.space = space\n self.length_select = gym.spaces.Discrete(\n max_seq_length - min_seq_length\n )\n\n def sample(self):\n seq_length = self.length_select.sample() + self.min_seq_length\n return [self.space.sample() for _ in range(seq_length)]\n\n def contains(self, x):\n if not isinstance(x, Sized):\n return False\n\n if not (self.min_seq_length <= len(x) <= self.max_seq_length):\n return False\n\n return all([self.space.contains(el) for el in x])\n\n def __repr__(self):\n return (\n f\"ListSpace({self.space}, min_seq_length=\"\n f\"{self.min_seq_length}, max_seq_length={self.max_seq_length})\"\n )\n", "path": "habitat/core/spaces.py"}]} | 1,742 | 434 |
gh_patches_debug_20970 | rasdani/github-patches | git_diff | fidals__shopelectro-453 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove a category from the feed
From the gm.yml feed, remove the category https://www.shopelectro.ru/catalog/categories/usiliteli-zvuka-dlia-slaboslyshashchikh/
Remove it only from this feed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/management/commands/price.py`
Content:
```
1 import os
2
3 from django.conf import settings
4 from django.core.management.base import BaseCommand
5 from django.template.loader import render_to_string
6 from django.urls import reverse
7
8 from shopelectro.models import Product, Category
9
10
11 class Command(BaseCommand):
12 """Generate yml file for a given vendor (YM or price.ru)."""
13
14 # Online market services, that works with our prices.
15 # Dict keys - url targets for every service
16 TARGETS = {
17 'YM': 'yandex.yml',
18 'priceru': 'priceru.xml',
19 'GM': 'gm.yml',
20 'SE78': 'se78.yml',
21 }
22 # price files will be stored at this dir
23 BASE_DIR = settings.ASSETS_DIR
24
25 IGNORED_CATEGORIES = [
26 'Измерительные приборы', 'Новогодние вращающиеся светодиодные лампы',
27 'Новогодние лазерные проекторы', 'MP3- колонки', 'Беспроводные звонки',
28 'Радиоприёмники', 'Фонари', 'Отвертки', 'Весы электронные портативные',
29 ]
30
31 def create_prices(self):
32 for target in self.TARGETS.items():
33 self.generate_yml(*target)
34
35 def handle(self, *args, **options):
36 self.create_prices()
37
38 @classmethod
39 def get_context_for_yml(cls, utm):
40 """Create context dictionary for rendering files."""
41 def put_utm(product):
42 """Put UTM attribute to product."""
43 utm_marks = [
44 ('utm_source', utm),
45 ('utm_medium', 'cpc'),
46 ('utm_content', product.get_root_category().page.slug),
47 ('utm_term', str(product.vendor_code)),
48 ]
49
50 url = reverse('product', args=(product.vendor_code,))
51 utm_mark_query = '&'.join('{}={}'.format(k, v) for k, v in utm_marks)
52 product.utm_url = '{}{}?{}'.format(settings.BASE_URL, url, utm_mark_query)
53
54 product.prepared_params = list(
55 filter(
56 lambda x: x[0].name != 'Производитель',
57 product.params
58 )
59 )
60
61 return product
62
63 def put_crumbs(product): # Ignore PyDocStyleBear
64 """Crumbs for google merchant. https://goo.gl/b0UJQp"""
65 product.crumbs = ' > '.join(
66 product.page.get_ancestors_fields('h1', include_self=False)[1:]
67 )
68 return product
69
70 def filter_categories(utm):
71 categories_to_exclude = (
72 Category.objects
73 .filter(name__in=cls.IGNORED_CATEGORIES)
74 .get_descendants(include_self=True)
75 )
76
77 result_categories = Category.objects.exclude(id__in=categories_to_exclude)
78
79 if utm == 'YM':
80 """
81 Yandex Market feed requires items in some categories to have pictures
82 To simplify filtering we are excluding all categories
83 which don't contain at least one product with picture
84 """
85 result_categories = result_categories.get_categories_tree_with_pictures()
86
87 return result_categories
88
89 def prepare_products(categories_, utm):
90 """Filter product list and patch it for rendering."""
91 products_except_others = (
92 Product.objects
93 .select_related('page')
94 .prefetch_related('category')
95 .prefetch_related('page__images')
96 .filter(category__in=categories_, price__gt=0, page__is_active=True)
97 )
98
99 if utm == 'YM':
100 """
101 Yandex Market feed requires items in some categories to have pictures
102 To simplify filtering we are excluding all products without pictures
103 """
104 products_except_others = (
105 products_except_others
106 .filter(page__images__isnull=False)
107 .distinct()
108 )
109
110 result_products = [
111 put_crumbs(put_utm(product))
112 for product in products_except_others
113 ]
114
115 return result_products
116
117 categories = (
118 filter_categories(utm) if utm != 'SE78'
119 else Category.objects.all()
120 )
121
122 products = prepare_products(categories, utm)
123
124 return {
125 'base_url': settings.BASE_URL,
126 'categories': categories,
127 'products': products,
128 'shop': settings.SHOP,
129 'utm': utm,
130 }
131
132 @classmethod
133 def generate_yml(cls, utm, file_name):
134 """Generate yml file."""
135 file_to_write = os.path.join(cls.BASE_DIR, file_name)
136 context = cls.get_context_for_yml(utm)
137
138 with open(file_to_write, 'w', encoding='utf-8') as file:
139 file.write(render_to_string('prices/price.yml', context).strip())
140
141 return '{} generated...'.format(file_name)
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/shopelectro/management/commands/price.py b/shopelectro/management/commands/price.py
--- a/shopelectro/management/commands/price.py
+++ b/shopelectro/management/commands/price.py
@@ -1,3 +1,4 @@
+from collections import defaultdict
import os
from django.conf import settings
@@ -28,6 +29,10 @@
'Радиоприёмники', 'Фонари', 'Отвертки', 'Весы электронные портативные',
]
+ IGNORED_CATEGORIES_BY_TARGET = defaultdict(list, {
+ 'GM': ['Усилители звука для слабослышащих'],
+ })
+
def create_prices(self):
for target in self.TARGETS.items():
self.generate_yml(*target)
@@ -71,6 +76,7 @@
categories_to_exclude = (
Category.objects
.filter(name__in=cls.IGNORED_CATEGORIES)
+ .filter(name__in=cls.IGNORED_CATEGORIES_BY_TARGET[utm])
.get_descendants(include_self=True)
)
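
The `defaultdict` is what keeps this change scoped to a single feed; a standalone sketch of the lookup (the category name is taken verbatim from the patch):

```python
from collections import defaultdict

IGNORED_CATEGORIES_BY_TARGET = defaultdict(list, {
    'GM': ['Усилители звука для слабослышащих'],
})

# The GM feed gets its extra exclusion list...
print(IGNORED_CATEGORIES_BY_TARGET['GM'])  # ['Усилители звука для слабослышащих']
# ...while any other target key falls back to an empty list.
print(IGNORED_CATEGORIES_BY_TARGET['YM'])  # []
```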
| {"golden_diff": "diff --git a/shopelectro/management/commands/price.py b/shopelectro/management/commands/price.py\n--- a/shopelectro/management/commands/price.py\n+++ b/shopelectro/management/commands/price.py\n@@ -1,3 +1,4 @@\n+from collections import defaultdict\n import os\n \n from django.conf import settings\n@@ -28,6 +29,10 @@\n '\u0420\u0430\u0434\u0438\u043e\u043f\u0440\u0438\u0451\u043c\u043d\u0438\u043a\u0438', '\u0424\u043e\u043d\u0430\u0440\u0438', '\u041e\u0442\u0432\u0435\u0440\u0442\u043a\u0438', '\u0412\u0435\u0441\u044b \u044d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u044b\u0435 \u043f\u043e\u0440\u0442\u0430\u0442\u0438\u0432\u043d\u044b\u0435',\n ]\n \n+ IGNORED_CATEGORIES_BY_TARGET = defaultdict(list, {\n+ 'GM': ['\u0423\u0441\u0438\u043b\u0438\u0442\u0435\u043b\u0438 \u0437\u0432\u0443\u043a\u0430 \u0434\u043b\u044f \u0441\u043b\u0430\u0431\u043e\u0441\u043b\u044b\u0448\u0430\u0449\u0438\u0445'],\n+ })\n+\n def create_prices(self):\n for target in self.TARGETS.items():\n self.generate_yml(*target)\n@@ -71,6 +76,7 @@\n categories_to_exclude = (\n Category.objects\n .filter(name__in=cls.IGNORED_CATEGORIES)\n+ .filter(name__in=cls.IGNORED_CATEGORIES_BY_TARGET[utm])\n .get_descendants(include_self=True)\n )\n", "issue": "\u0423\u0431\u0435\u0440\u0438 \u043a\u0430\u0442\u0435\u0433\u043e\u0440\u0438\u044e \u0438\u0437 \u0444\u0438\u0434\u0430\n\u0418\u0437 \u0444\u0438\u0434\u0430 gm.yml - \u0443\u0431\u0438\u0440\u0430\u0439 \u043a\u0430\u0442\u0435\u0433\u043e\u0440\u0438\u044e https://www.shopelectro.ru/catalog/categories/usiliteli-zvuka-dlia-slaboslyshashchikh/\r\n\u0423\u0431\u0440\u0430\u0442\u044c \u0442\u043e\u043b\u044c\u043a\u043e \u0438\u0437 \u044d\u0442\u043e\u0433\u043e \u0444\u0438\u0434\u0430. \n", "before_files": [{"content": "import os\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\nfrom django.template.loader import render_to_string\nfrom django.urls import reverse\n\nfrom shopelectro.models import Product, Category\n\n\nclass Command(BaseCommand):\n \"\"\"Generate yml file for a given vendor (YM or price.ru).\"\"\"\n\n # Online market services, that works with our prices.\n # Dict keys - url targets for every service\n TARGETS = {\n 'YM': 'yandex.yml',\n 'priceru': 'priceru.xml',\n 'GM': 'gm.yml',\n 'SE78': 'se78.yml',\n }\n # price files will be stored at this dir\n BASE_DIR = settings.ASSETS_DIR\n\n IGNORED_CATEGORIES = [\n '\u0418\u0437\u043c\u0435\u0440\u0438\u0442\u0435\u043b\u044c\u043d\u044b\u0435 \u043f\u0440\u0438\u0431\u043e\u0440\u044b', '\u041d\u043e\u0432\u043e\u0433\u043e\u0434\u043d\u0438\u0435 \u0432\u0440\u0430\u0449\u0430\u044e\u0449\u0438\u0435\u0441\u044f \u0441\u0432\u0435\u0442\u043e\u0434\u0438\u043e\u0434\u043d\u044b\u0435 \u043b\u0430\u043c\u043f\u044b',\n '\u041d\u043e\u0432\u043e\u0433\u043e\u0434\u043d\u0438\u0435 \u043b\u0430\u0437\u0435\u0440\u043d\u044b\u0435 \u043f\u0440\u043e\u0435\u043a\u0442\u043e\u0440\u044b', 'MP3- \u043a\u043e\u043b\u043e\u043d\u043a\u0438', '\u0411\u0435\u0441\u043f\u0440\u043e\u0432\u043e\u0434\u043d\u044b\u0435 \u0437\u0432\u043e\u043d\u043a\u0438',\n '\u0420\u0430\u0434\u0438\u043e\u043f\u0440\u0438\u0451\u043c\u043d\u0438\u043a\u0438', '\u0424\u043e\u043d\u0430\u0440\u0438', '\u041e\u0442\u0432\u0435\u0440\u0442\u043a\u0438', '\u0412\u0435\u0441\u044b \u044d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u044b\u0435 \u043f\u043e\u0440\u0442\u0430\u0442\u0438\u0432\u043d\u044b\u0435',\n ]\n\n def create_prices(self):\n for target in 
self.TARGETS.items():\n self.generate_yml(*target)\n\n def handle(self, *args, **options):\n self.create_prices()\n\n @classmethod\n def get_context_for_yml(cls, utm):\n \"\"\"Create context dictionary for rendering files.\"\"\"\n def put_utm(product):\n \"\"\"Put UTM attribute to product.\"\"\"\n utm_marks = [\n ('utm_source', utm),\n ('utm_medium', 'cpc'),\n ('utm_content', product.get_root_category().page.slug),\n ('utm_term', str(product.vendor_code)),\n ]\n\n url = reverse('product', args=(product.vendor_code,))\n utm_mark_query = '&'.join('{}={}'.format(k, v) for k, v in utm_marks)\n product.utm_url = '{}{}?{}'.format(settings.BASE_URL, url, utm_mark_query)\n\n product.prepared_params = list(\n filter(\n lambda x: x[0].name != '\u041f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0438\u0442\u0435\u043b\u044c',\n product.params\n )\n )\n\n return product\n\n def put_crumbs(product): # Ignore PyDocStyleBear\n \"\"\"Crumbs for google merchant. https://goo.gl/b0UJQp\"\"\"\n product.crumbs = ' > '.join(\n product.page.get_ancestors_fields('h1', include_self=False)[1:]\n )\n return product\n\n def filter_categories(utm):\n categories_to_exclude = (\n Category.objects\n .filter(name__in=cls.IGNORED_CATEGORIES)\n .get_descendants(include_self=True)\n )\n\n result_categories = Category.objects.exclude(id__in=categories_to_exclude)\n\n if utm == 'YM':\n \"\"\"\n Yandex Market feed requires items in some categories to have pictures\n To simplify filtering we are excluding all categories\n which don't contain at least one product with picture\n \"\"\"\n result_categories = result_categories.get_categories_tree_with_pictures()\n\n return result_categories\n\n def prepare_products(categories_, utm):\n \"\"\"Filter product list and patch it for rendering.\"\"\"\n products_except_others = (\n Product.objects\n .select_related('page')\n .prefetch_related('category')\n .prefetch_related('page__images')\n .filter(category__in=categories_, price__gt=0, page__is_active=True)\n )\n\n if utm == 'YM':\n \"\"\"\n Yandex Market feed requires items in some categories to have pictures\n To simplify filtering we are excluding all products without pictures\n \"\"\"\n products_except_others = (\n products_except_others\n .filter(page__images__isnull=False)\n .distinct()\n )\n\n result_products = [\n put_crumbs(put_utm(product))\n for product in products_except_others\n ]\n\n return result_products\n\n categories = (\n filter_categories(utm) if utm != 'SE78'\n else Category.objects.all()\n )\n\n products = prepare_products(categories, utm)\n\n return {\n 'base_url': settings.BASE_URL,\n 'categories': categories,\n 'products': products,\n 'shop': settings.SHOP,\n 'utm': utm,\n }\n\n @classmethod\n def generate_yml(cls, utm, file_name):\n \"\"\"Generate yml file.\"\"\"\n file_to_write = os.path.join(cls.BASE_DIR, file_name)\n context = cls.get_context_for_yml(utm)\n\n with open(file_to_write, 'w', encoding='utf-8') as file:\n file.write(render_to_string('prices/price.yml', context).strip())\n\n return '{} generated...'.format(file_name)\n", "path": "shopelectro/management/commands/price.py"}], "after_files": [{"content": "from collections import defaultdict\nimport os\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\nfrom django.template.loader import render_to_string\nfrom django.urls import reverse\n\nfrom shopelectro.models import Product, Category\n\n\nclass Command(BaseCommand):\n \"\"\"Generate yml file for a given vendor (YM or price.ru).\"\"\"\n\n # Online market services, 
that works with our prices.\n # Dict keys - url targets for every service\n TARGETS = {\n 'YM': 'yandex.yml',\n 'priceru': 'priceru.xml',\n 'GM': 'gm.yml',\n 'SE78': 'se78.yml',\n }\n # price files will be stored at this dir\n BASE_DIR = settings.ASSETS_DIR\n\n IGNORED_CATEGORIES = [\n '\u0418\u0437\u043c\u0435\u0440\u0438\u0442\u0435\u043b\u044c\u043d\u044b\u0435 \u043f\u0440\u0438\u0431\u043e\u0440\u044b', '\u041d\u043e\u0432\u043e\u0433\u043e\u0434\u043d\u0438\u0435 \u0432\u0440\u0430\u0449\u0430\u044e\u0449\u0438\u0435\u0441\u044f \u0441\u0432\u0435\u0442\u043e\u0434\u0438\u043e\u0434\u043d\u044b\u0435 \u043b\u0430\u043c\u043f\u044b',\n '\u041d\u043e\u0432\u043e\u0433\u043e\u0434\u043d\u0438\u0435 \u043b\u0430\u0437\u0435\u0440\u043d\u044b\u0435 \u043f\u0440\u043e\u0435\u043a\u0442\u043e\u0440\u044b', 'MP3- \u043a\u043e\u043b\u043e\u043d\u043a\u0438', '\u0411\u0435\u0441\u043f\u0440\u043e\u0432\u043e\u0434\u043d\u044b\u0435 \u0437\u0432\u043e\u043d\u043a\u0438',\n '\u0420\u0430\u0434\u0438\u043e\u043f\u0440\u0438\u0451\u043c\u043d\u0438\u043a\u0438', '\u0424\u043e\u043d\u0430\u0440\u0438', '\u041e\u0442\u0432\u0435\u0440\u0442\u043a\u0438', '\u0412\u0435\u0441\u044b \u044d\u043b\u0435\u043a\u0442\u0440\u043e\u043d\u043d\u044b\u0435 \u043f\u043e\u0440\u0442\u0430\u0442\u0438\u0432\u043d\u044b\u0435',\n ]\n\n IGNORED_CATEGORIES_BY_TARGET = defaultdict(list, {\n 'GM': ['\u0423\u0441\u0438\u043b\u0438\u0442\u0435\u043b\u0438 \u0437\u0432\u0443\u043a\u0430 \u0434\u043b\u044f \u0441\u043b\u0430\u0431\u043e\u0441\u043b\u044b\u0448\u0430\u0449\u0438\u0445'],\n })\n\n def create_prices(self):\n for target in self.TARGETS.items():\n self.generate_yml(*target)\n\n def handle(self, *args, **options):\n self.create_prices()\n\n @classmethod\n def get_context_for_yml(cls, utm):\n \"\"\"Create context dictionary for rendering files.\"\"\"\n def put_utm(product):\n \"\"\"Put UTM attribute to product.\"\"\"\n utm_marks = [\n ('utm_source', utm),\n ('utm_medium', 'cpc'),\n ('utm_content', product.get_root_category().page.slug),\n ('utm_term', str(product.vendor_code)),\n ]\n\n url = reverse('product', args=(product.vendor_code,))\n utm_mark_query = '&'.join('{}={}'.format(k, v) for k, v in utm_marks)\n product.utm_url = '{}{}?{}'.format(settings.BASE_URL, url, utm_mark_query)\n\n product.prepared_params = list(\n filter(\n lambda x: x[0].name != '\u041f\u0440\u043e\u0438\u0437\u0432\u043e\u0434\u0438\u0442\u0435\u043b\u044c',\n product.params\n )\n )\n\n return product\n\n def put_crumbs(product): # Ignore PyDocStyleBear\n \"\"\"Crumbs for google merchant. 
https://goo.gl/b0UJQp\"\"\"\n product.crumbs = ' > '.join(\n product.page.get_ancestors_fields('h1', include_self=False)[1:]\n )\n return product\n\n def filter_categories(utm):\n categories_to_exclude = (\n Category.objects\n .filter(name__in=cls.IGNORED_CATEGORIES)\n .filter(name__in=cls.IGNORED_CATEGORIES_BY_TARGET[utm])\n .get_descendants(include_self=True)\n )\n\n result_categories = Category.objects.exclude(id__in=categories_to_exclude)\n\n if utm == 'YM':\n \"\"\"\n Yandex Market feed requires items in some categories to have pictures\n To simplify filtering we are excluding all categories\n which don't contain at least one product with picture\n \"\"\"\n result_categories = result_categories.get_categories_tree_with_pictures()\n\n return result_categories\n\n def prepare_products(categories_, utm):\n \"\"\"Filter product list and patch it for rendering.\"\"\"\n products_except_others = (\n Product.objects\n .select_related('page')\n .prefetch_related('category')\n .prefetch_related('page__images')\n .filter(category__in=categories_, price__gt=0, page__is_active=True)\n )\n\n if utm == 'YM':\n \"\"\"\n Yandex Market feed requires items in some categories to have pictures\n To simplify filtering we are excluding all products without pictures\n \"\"\"\n products_except_others = (\n products_except_others\n .filter(page__images__isnull=False)\n .distinct()\n )\n\n result_products = [\n put_crumbs(put_utm(product))\n for product in products_except_others\n ]\n\n return result_products\n\n categories = (\n filter_categories(utm) if utm != 'SE78'\n else Category.objects.all()\n )\n\n products = prepare_products(categories, utm)\n\n return {\n 'base_url': settings.BASE_URL,\n 'categories': categories,\n 'products': products,\n 'shop': settings.SHOP,\n 'utm': utm,\n }\n\n @classmethod\n def generate_yml(cls, utm, file_name):\n \"\"\"Generate yml file.\"\"\"\n file_to_write = os.path.join(cls.BASE_DIR, file_name)\n context = cls.get_context_for_yml(utm)\n\n with open(file_to_write, 'w', encoding='utf-8') as file:\n file.write(render_to_string('prices/price.yml', context).strip())\n\n return '{} generated...'.format(file_name)\n", "path": "shopelectro/management/commands/price.py"}]} | 1,712 | 258 |
gh_patches_debug_16962 | rasdani/github-patches | git_diff | cal-itp__benefits-921 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use the common base Docker image
This is a corollary to work done in cal-itp/eligibility-server#123.
We have a new repository [`cal-itp/docker-python-web`](https://github.com/cal-itp/docker-python-web) that publishes a Docker image to GitHub Container Registry at [`ghcr.io/cal-itp/docker-python-web`](https://github.com/cal-itp/docker-python-web/pkgs/container/docker-python-web).
This image contains a baseline `nginx` and `gunicorn` setup and configuration. We can replace the Benefits app container's base image with this image to simplify the repository and build process for Benefits.
## Acceptance Criteria
<!-- Remember to consider edge cases -->
- [ ] the Benefits app container builds `FROM ghcr.io/cal-itp/docker-python-web`
- [ ] duplicate `nginx` and `gunicorn` setup in this repository is removed
- [ ] the app runs locally as a container `docker compose up client`
- [ ] the devcontainer starts up
- [ ] the app runs in DEBUG mode in the devcontainer with `F5`
## Additional context
See the [`docker-python-web` Docs](https://docs.calitp.org/docker-python-web/) for more information.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `appcontainer/gunicorn.conf.py`
Content:
```
1 """
2 The Gunicorn configuration file
3 More information: https://docs.gunicorn.org/en/stable/settings.html
4 """
5
6 import multiprocessing
7
8 # the unix socket defined in nginx.conf
9 bind = "unix:/home/calitp/app/run/gunicorn.sock"
10
11 # Recommend (2 x $num_cores) + 1 as the number of workers to start off with
12 workers = multiprocessing.cpu_count() * 2 + 1
13
14 # send logs to stdout and stderr
15 accesslog = "-"
16 errorlog = "-"
17
18 # Preloading can save some RAM resources as well as speed up server boot times,
19 # at the cost of not being able to reload app code by restarting workers
20 # (in an ECS Fargate environment, this isn't possible anyway)
21 preload_app = True
22
```
Path: `.devcontainer/server/settings.py`
Content:
```
1 # App settings
2
3 LOG_LEVEL = "DEBUG"
4
5 # Eligibility Verification settings
6
7 CLIENT_KEY_PATH = "https://raw.githubusercontent.com/cal-itp/eligibility-server/main/keys/client.pub"
8 SERVER_PRIVATE_KEY_PATH = "https://raw.githubusercontent.com/cal-itp/eligibility-server/main/keys/server.key"
9 SERVER_PUBLIC_KEY_PATH = "https://raw.githubusercontent.com/cal-itp/eligibility-server/main/keys/server.pub"
10 SUB_FORMAT_REGEX = r".+"
11
12 # Data settings
13
14 IMPORT_FILE_PATH = "/.devcontainer/server/data.csv"
15 INPUT_HASH_ALGO = ""
16
17 # CSV-specific settings
18
19 CSV_DELIMITER = ";"
20 CSV_NEWLINE = ""
21 CSV_QUOTING = 3
22 CSV_QUOTECHAR = ""
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/.devcontainer/server/settings.py b/.devcontainer/server/settings.py
--- a/.devcontainer/server/settings.py
+++ b/.devcontainer/server/settings.py
@@ -11,7 +11,7 @@
# Data settings
-IMPORT_FILE_PATH = "/.devcontainer/server/data.csv"
+IMPORT_FILE_PATH = "https://raw.githubusercontent.com/cal-itp/eligibility-server/main/data/server.csv"
INPUT_HASH_ALGO = ""
# CSV-specific settings
diff --git a/appcontainer/gunicorn.conf.py b/appcontainer/gunicorn.conf.py
deleted file mode 100644
--- a/appcontainer/gunicorn.conf.py
+++ /dev/null
@@ -1,21 +0,0 @@
-"""
-The Gunicorn configuration file
-More information: https://docs.gunicorn.org/en/stable/settings.html
-"""
-
-import multiprocessing
-
-# the unix socket defined in nginx.conf
-bind = "unix:/home/calitp/app/run/gunicorn.sock"
-
-# Recommend (2 x $num_cores) + 1 as the number of workers to start off with
-workers = multiprocessing.cpu_count() * 2 + 1
-
-# send logs to stdout and stderr
-accesslog = "-"
-errorlog = "-"
-
-# Preloading can save some RAM resources as well as speed up server boot times,
-# at the cost of not being able to reload app code by restarting workers
-# (in an ECS Fargate environment, this isn't possible anyway)
-preload_app = True
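
The settings hunk swaps a local CSV path for a hosted one; a minimal sketch to confirm the new path resolves (the URL is copied from the patch, and running this requires network access):

```python
from urllib.request import urlopen

IMPORT_FILE_PATH = "https://raw.githubusercontent.com/cal-itp/eligibility-server/main/data/server.csv"

with urlopen(IMPORT_FILE_PATH) as response:
    head = response.read(200).decode("utf-8", errors="replace")

print(head)  # first bytes of the hosted CSV, enough to confirm the file is reachable
```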
| {"golden_diff": "diff --git a/.devcontainer/server/settings.py b/.devcontainer/server/settings.py\n--- a/.devcontainer/server/settings.py\n+++ b/.devcontainer/server/settings.py\n@@ -11,7 +11,7 @@\n \n # Data settings\n \n-IMPORT_FILE_PATH = \"/.devcontainer/server/data.csv\"\n+IMPORT_FILE_PATH = \"https://raw.githubusercontent.com/cal-itp/eligibility-server/main/data/server.csv\"\n INPUT_HASH_ALGO = \"\"\n \n # CSV-specific settings\ndiff --git a/appcontainer/gunicorn.conf.py b/appcontainer/gunicorn.conf.py\ndeleted file mode 100644\n--- a/appcontainer/gunicorn.conf.py\n+++ /dev/null\n@@ -1,21 +0,0 @@\n-\"\"\"\n-The Gunicorn configuration file\n-More information: https://docs.gunicorn.org/en/stable/settings.html\n-\"\"\"\n-\n-import multiprocessing\n-\n-# the unix socket defined in nginx.conf\n-bind = \"unix:/home/calitp/app/run/gunicorn.sock\"\n-\n-# Recommend (2 x $num_cores) + 1 as the number of workers to start off with\n-workers = multiprocessing.cpu_count() * 2 + 1\n-\n-# send logs to stdout and stderr\n-accesslog = \"-\"\n-errorlog = \"-\"\n-\n-# Preloading can save some RAM resources as well as speed up server boot times,\n-# at the cost of not being able to reload app code by restarting workers\n-# (in an ECS Fargate environment, this isn't possible anyway)\n-preload_app = True\n", "issue": "Use the common base Docker image\nThis is a corollary to work done in cal-itp/eligibility-server#123.\r\n\r\nWe have a new repository [`cal-itp/docker-python-web`](https://github.com/cal-itp/docker-python-web) that publishes a Docker image to GitHub Container Registry at [`ghcr.io/cal-itp/docker-python-web`](https://github.com/cal-itp/docker-python-web/pkgs/container/docker-python-web).\r\n\r\nThis image contains a baseline `nginx` and `gunicorn` setup and configuration. 
We can replace the Benefits app container's base image with this image to simplify the repository and build process for Benefits.\r\n\r\n## Acceptance Criteria\r\n\r\n<!-- Remember to consider edge cases -->\r\n\r\n- [ ] the Benefits app container builds `FROM ghcr.io/cal-itp/docker-python-web`\r\n- [ ] duplicate `nginx` and `gunicorn` setup in this repository is removed\r\n- [ ] the app runs locally as a container `docker compose up client`\r\n- [ ] the devcontainer starts up\r\n- [ ] the app runs in DEBUG mode in the devcontainer with `F5`\r\n\r\n## Additional context\r\n\r\nSee the [`docker-python-web` Docs](https://docs.calitp.org/docker-python-web/) for more information.\n", "before_files": [{"content": "\"\"\"\nThe Gunicorn configuration file\nMore information: https://docs.gunicorn.org/en/stable/settings.html\n\"\"\"\n\nimport multiprocessing\n\n# the unix socket defined in nginx.conf\nbind = \"unix:/home/calitp/app/run/gunicorn.sock\"\n\n# Recommend (2 x $num_cores) + 1 as the number of workers to start off with\nworkers = multiprocessing.cpu_count() * 2 + 1\n\n# send logs to stdout and stderr\naccesslog = \"-\"\nerrorlog = \"-\"\n\n# Preloading can save some RAM resources as well as speed up server boot times,\n# at the cost of not being able to reload app code by restarting workers\n# (in an ECS Fargate environment, this isn't possible anyway)\npreload_app = True\n", "path": "appcontainer/gunicorn.conf.py"}, {"content": "# App settings\n\nLOG_LEVEL = \"DEBUG\"\n\n# Eligibility Verification settings\n\nCLIENT_KEY_PATH = \"https://raw.githubusercontent.com/cal-itp/eligibility-server/main/keys/client.pub\"\nSERVER_PRIVATE_KEY_PATH = \"https://raw.githubusercontent.com/cal-itp/eligibility-server/main/keys/server.key\"\nSERVER_PUBLIC_KEY_PATH = \"https://raw.githubusercontent.com/cal-itp/eligibility-server/main/keys/server.pub\"\nSUB_FORMAT_REGEX = r\".+\"\n\n# Data settings\n\nIMPORT_FILE_PATH = \"/.devcontainer/server/data.csv\"\nINPUT_HASH_ALGO = \"\"\n\n# CSV-specific settings\n\nCSV_DELIMITER = \";\"\nCSV_NEWLINE = \"\"\nCSV_QUOTING = 3\nCSV_QUOTECHAR = \"\"\n", "path": ".devcontainer/server/settings.py"}], "after_files": [{"content": null, "path": "appcontainer/gunicorn.conf.py"}, {"content": "# App settings\n\nLOG_LEVEL = \"DEBUG\"\n\n# Eligibility Verification settings\n\nCLIENT_KEY_PATH = \"https://raw.githubusercontent.com/cal-itp/eligibility-server/main/keys/client.pub\"\nSERVER_PRIVATE_KEY_PATH = \"https://raw.githubusercontent.com/cal-itp/eligibility-server/main/keys/server.key\"\nSERVER_PUBLIC_KEY_PATH = \"https://raw.githubusercontent.com/cal-itp/eligibility-server/main/keys/server.pub\"\nSUB_FORMAT_REGEX = r\".+\"\n\n# Data settings\n\nIMPORT_FILE_PATH = \"https://raw.githubusercontent.com/cal-itp/eligibility-server/main/data/server.csv\"\nINPUT_HASH_ALGO = \"\"\n\n# CSV-specific settings\n\nCSV_DELIMITER = \";\"\nCSV_NEWLINE = \"\"\nCSV_QUOTING = 3\nCSV_QUOTECHAR = \"\"\n", "path": ".devcontainer/server/settings.py"}]} | 918 | 325 |
gh_patches_debug_17419 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-1386 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Avoid "impossible" ZeroDivisionError we keep seeing
https://github.com/HypothesisWorks/hypothesis/blob/592bc7cd2c642bf273113083457444b4823021c9/hypothesis-python/src/hypothesis/statistics.py#L88-L95
This snippet occasionally leads to problems, when `total_drawtime` is nonzero but `total_runtime` *is* zero, causing test failures [like this one](https://ci.appveyor.com/project/DRMacIver/hypothesis/build/1.0.458/job/hl2d74810901ttf5#L926). I'm not sure why this happens - it's been observed on Python 2 (without) and Python 3 (with) a monotonic clock, so there might be multiple causes - but the solution is pretty simple. All you need to do to fix this is to change the first conditional to
```python
if total_drawtime <= 0.0 or total_runtime <= 0.0:
```
Then write up a short changelog (see guides/documentation.rst), add your name to the contributors list (in - and read! - contributing.rst), and open a pull request! If you have any questions, just let me know and I'd be happy to help out.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hypothesis-python/src/hypothesis/statistics.py`
Content:
```
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis, which may be found at
4 # https://github.com/HypothesisWorks/hypothesis-python
5 #
6 # Most of this work is copyright (C) 2013-2018 David R. MacIver
7 # ([email protected]), but it contains contributions by others. See
8 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
9 # consult the git log if you need to determine who owns an individual
10 # contribution.
11 #
12 # This Source Code Form is subject to the terms of the Mozilla Public License,
13 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
14 # obtain one at http://mozilla.org/MPL/2.0/.
15 #
16 # END HEADER
17
18 from __future__ import division, print_function, absolute_import
19
20 import math
21
22 from hypothesis.utils.dynamicvariables import DynamicVariable
23 from hypothesis.internal.conjecture.data import Status
24 from hypothesis.internal.conjecture.engine import MAX_SHRINKS, ExitReason
25
26 collector = DynamicVariable(None)
27
28
29 class Statistics(object):
30
31 def __init__(self, engine):
32 self.passing_examples = len(
33 engine.status_runtimes.get(Status.VALID, ()))
34 self.invalid_examples = len(
35 engine.status_runtimes.get(Status.INVALID, []) +
36 engine.status_runtimes.get(Status.OVERRUN, [])
37 )
38 self.failing_examples = len(engine.status_runtimes.get(
39 Status.INTERESTING, ()))
40
41 runtimes = sorted(
42 engine.status_runtimes.get(Status.VALID, []) +
43 engine.status_runtimes.get(Status.INVALID, []) +
44 engine.status_runtimes.get(Status.INTERESTING, [])
45 )
46
47 self.has_runs = bool(runtimes)
48 if not self.has_runs:
49 return
50
51 n = max(0, len(runtimes) - 1)
52 lower = int(runtimes[int(math.floor(n * 0.05))] * 1000)
53 upper = int(runtimes[int(math.ceil(n * 0.95))] * 1000)
54 if upper == 0:
55 self.runtimes = '< 1ms'
56 elif lower == upper:
57 self.runtimes = '~ %dms' % (lower,)
58 else:
59 self.runtimes = '%d-%d ms' % (lower, upper)
60
61 if engine.exit_reason == ExitReason.finished:
62 self.exit_reason = 'nothing left to do'
63 elif engine.exit_reason == ExitReason.flaky:
64 self.exit_reason = 'test was flaky'
65 elif engine.exit_reason == ExitReason.max_shrinks:
66 self.exit_reason = 'shrunk example %s times' % (MAX_SHRINKS,)
67 elif engine.exit_reason == ExitReason.max_iterations:
68 self.exit_reason = ((
69 'settings.max_examples={}, but < 10% of examples satisfied '
70 'assumptions').format(engine.settings.max_examples)
71 )
72 else:
73 self.exit_reason = (
74 'settings.%s=%r' % (
75 engine.exit_reason.name,
76 getattr(engine.settings, engine.exit_reason.name)
77 )
78 )
79
80 self.events = [
81 '%.2f%%, %s' % (
82 c / engine.call_count * 100, e
83 ) for e, c in sorted(
84 engine.event_call_counts.items(), key=lambda x: -x[1])
85 ]
86
87 total_runtime = math.fsum(engine.all_runtimes)
88 total_drawtime = math.fsum(engine.all_drawtimes)
89
90 if total_drawtime == 0.0:
91 self.draw_time_percentage = '~ 0%'
92 else:
93 draw_time_percentage = 100.0 * min(
94 1, total_drawtime / total_runtime)
95
96 self.draw_time_percentage = '~ %d%%' % (
97 round(draw_time_percentage),)
98
99
100 def note_engine_for_statistics(engine):
101 callback = collector.value
102 if callback is not None:
103 callback(Statistics(engine))
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hypothesis-python/src/hypothesis/statistics.py b/hypothesis-python/src/hypothesis/statistics.py
--- a/hypothesis-python/src/hypothesis/statistics.py
+++ b/hypothesis-python/src/hypothesis/statistics.py
@@ -87,8 +87,15 @@
total_runtime = math.fsum(engine.all_runtimes)
total_drawtime = math.fsum(engine.all_drawtimes)
- if total_drawtime == 0.0:
+ if total_drawtime == 0.0 and total_runtime >= 0.0:
self.draw_time_percentage = '~ 0%'
+ elif total_drawtime < 0.0 or total_runtime <= 0.0:
+ # This weird condition is possible in two ways:
+ # 1. drawtime and/or runtime are negative, due to clock changes
+ # on Python 2 or old OSs (we use monotonic() where available)
+ # 2. floating-point issues *very rarely* cause math.fsum to be
+ # off by the lowest bit, so drawtime==0 and runtime!=0, eek!
+ self.draw_time_percentage = 'NaN'
else:
draw_time_percentage = 100.0 * min(
1, total_drawtime / total_runtime)
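
The guarded arithmetic in isolation, as a minimal runnable sketch (the function is extracted here purely for illustration and is not part of the library's API):

```python
import math

def draw_time_percentage(all_runtimes, all_drawtimes):
    """Mirror the patched branch logic on raw timing lists."""
    total_runtime = math.fsum(all_runtimes)
    total_drawtime = math.fsum(all_drawtimes)
    if total_drawtime == 0.0 and total_runtime >= 0.0:
        return '~ 0%'
    elif total_drawtime < 0.0 or total_runtime <= 0.0:
        # Backwards clocks, or fsum rounding that leaves runtime at exactly 0.0
        return 'NaN'
    return '~ %d%%' % round(100.0 * min(1, total_drawtime / total_runtime))

print(draw_time_percentage([], []))            # ~ 0%   (no runs recorded)
print(draw_time_percentage([0.0], [0.004]))    # NaN    (drawtime > 0 but runtime == 0)
print(draw_time_percentage([0.010], [0.004]))  # ~ 40%
```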
| {"golden_diff": "diff --git a/hypothesis-python/src/hypothesis/statistics.py b/hypothesis-python/src/hypothesis/statistics.py\n--- a/hypothesis-python/src/hypothesis/statistics.py\n+++ b/hypothesis-python/src/hypothesis/statistics.py\n@@ -87,8 +87,15 @@\n total_runtime = math.fsum(engine.all_runtimes)\n total_drawtime = math.fsum(engine.all_drawtimes)\n \n- if total_drawtime == 0.0:\n+ if total_drawtime == 0.0 and total_runtime >= 0.0:\n self.draw_time_percentage = '~ 0%'\n+ elif total_drawtime < 0.0 or total_runtime <= 0.0:\n+ # This weird condition is possible in two ways:\n+ # 1. drawtime and/or runtime are negative, due to clock changes\n+ # on Python 2 or old OSs (we use monotonic() where available)\n+ # 2. floating-point issues *very rarely* cause math.fsum to be\n+ # off by the lowest bit, so drawtime==0 and runtime!=0, eek!\n+ self.draw_time_percentage = 'NaN'\n else:\n draw_time_percentage = 100.0 * min(\n 1, total_drawtime / total_runtime)\n", "issue": "Avoid \"impossible\" ZeroDivisionError we keep seeing\nhttps://github.com/HypothesisWorks/hypothesis/blob/592bc7cd2c642bf273113083457444b4823021c9/hypothesis-python/src/hypothesis/statistics.py#L88-L95\r\n\r\nThis snippet occasionally leads to problems, when `total_drawtime` is nonzero but `total_runtime` *is* zero, causing test failures [like this one](https://ci.appveyor.com/project/DRMacIver/hypothesis/build/1.0.458/job/hl2d74810901ttf5#L926). I'm not sure why this happens - it's been observed on Python 2 (without) and Python 3 (with) a monotonic clock, so there might be multiple causes - but the solution is pretty simple. All you need to do to fix this is to change the first conditional to\r\n\r\n```python\r\nif total_drawtime <= 0.0 or total_runtime <= 0.0:\r\n```\r\n\r\nThen write up a short changelog (see guides/documentation.rst), add your name to the contributors list (in - and read! - contributing.rst), and open a pull request! If you have any questions, just let me know and I'd be happy to help out.\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport math\n\nfrom hypothesis.utils.dynamicvariables import DynamicVariable\nfrom hypothesis.internal.conjecture.data import Status\nfrom hypothesis.internal.conjecture.engine import MAX_SHRINKS, ExitReason\n\ncollector = DynamicVariable(None)\n\n\nclass Statistics(object):\n\n def __init__(self, engine):\n self.passing_examples = len(\n engine.status_runtimes.get(Status.VALID, ()))\n self.invalid_examples = len(\n engine.status_runtimes.get(Status.INVALID, []) +\n engine.status_runtimes.get(Status.OVERRUN, [])\n )\n self.failing_examples = len(engine.status_runtimes.get(\n Status.INTERESTING, ()))\n\n runtimes = sorted(\n engine.status_runtimes.get(Status.VALID, []) +\n engine.status_runtimes.get(Status.INVALID, []) +\n engine.status_runtimes.get(Status.INTERESTING, [])\n )\n\n self.has_runs = bool(runtimes)\n if not self.has_runs:\n return\n\n n = max(0, len(runtimes) - 1)\n lower = int(runtimes[int(math.floor(n * 0.05))] * 1000)\n upper = int(runtimes[int(math.ceil(n * 0.95))] * 1000)\n if upper == 0:\n self.runtimes = '< 1ms'\n elif lower == upper:\n self.runtimes = '~ %dms' % (lower,)\n else:\n self.runtimes = '%d-%d ms' % (lower, upper)\n\n if engine.exit_reason == ExitReason.finished:\n self.exit_reason = 'nothing left to do'\n elif engine.exit_reason == ExitReason.flaky:\n self.exit_reason = 'test was flaky'\n elif engine.exit_reason == ExitReason.max_shrinks:\n self.exit_reason = 'shrunk example %s times' % (MAX_SHRINKS,)\n elif engine.exit_reason == ExitReason.max_iterations:\n self.exit_reason = ((\n 'settings.max_examples={}, but < 10% of examples satisfied '\n 'assumptions').format(engine.settings.max_examples)\n )\n else:\n self.exit_reason = (\n 'settings.%s=%r' % (\n engine.exit_reason.name,\n getattr(engine.settings, engine.exit_reason.name)\n )\n )\n\n self.events = [\n '%.2f%%, %s' % (\n c / engine.call_count * 100, e\n ) for e, c in sorted(\n engine.event_call_counts.items(), key=lambda x: -x[1])\n ]\n\n total_runtime = math.fsum(engine.all_runtimes)\n total_drawtime = math.fsum(engine.all_drawtimes)\n\n if total_drawtime == 0.0:\n self.draw_time_percentage = '~ 0%'\n else:\n draw_time_percentage = 100.0 * min(\n 1, total_drawtime / total_runtime)\n\n self.draw_time_percentage = '~ %d%%' % (\n round(draw_time_percentage),)\n\n\ndef note_engine_for_statistics(engine):\n callback = collector.value\n if callback is not None:\n callback(Statistics(engine))\n", "path": "hypothesis-python/src/hypothesis/statistics.py"}], "after_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis-python\n#\n# Most of this work is copyright (C) 2013-2018 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. 
If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport math\n\nfrom hypothesis.utils.dynamicvariables import DynamicVariable\nfrom hypothesis.internal.conjecture.data import Status\nfrom hypothesis.internal.conjecture.engine import MAX_SHRINKS, ExitReason\n\ncollector = DynamicVariable(None)\n\n\nclass Statistics(object):\n\n def __init__(self, engine):\n self.passing_examples = len(\n engine.status_runtimes.get(Status.VALID, ()))\n self.invalid_examples = len(\n engine.status_runtimes.get(Status.INVALID, []) +\n engine.status_runtimes.get(Status.OVERRUN, [])\n )\n self.failing_examples = len(engine.status_runtimes.get(\n Status.INTERESTING, ()))\n\n runtimes = sorted(\n engine.status_runtimes.get(Status.VALID, []) +\n engine.status_runtimes.get(Status.INVALID, []) +\n engine.status_runtimes.get(Status.INTERESTING, [])\n )\n\n self.has_runs = bool(runtimes)\n if not self.has_runs:\n return\n\n n = max(0, len(runtimes) - 1)\n lower = int(runtimes[int(math.floor(n * 0.05))] * 1000)\n upper = int(runtimes[int(math.ceil(n * 0.95))] * 1000)\n if upper == 0:\n self.runtimes = '< 1ms'\n elif lower == upper:\n self.runtimes = '~ %dms' % (lower,)\n else:\n self.runtimes = '%d-%d ms' % (lower, upper)\n\n if engine.exit_reason == ExitReason.finished:\n self.exit_reason = 'nothing left to do'\n elif engine.exit_reason == ExitReason.flaky:\n self.exit_reason = 'test was flaky'\n elif engine.exit_reason == ExitReason.max_shrinks:\n self.exit_reason = 'shrunk example %s times' % (MAX_SHRINKS,)\n elif engine.exit_reason == ExitReason.max_iterations:\n self.exit_reason = ((\n 'settings.max_examples={}, but < 10% of examples satisfied '\n 'assumptions').format(engine.settings.max_examples)\n )\n else:\n self.exit_reason = (\n 'settings.%s=%r' % (\n engine.exit_reason.name,\n getattr(engine.settings, engine.exit_reason.name)\n )\n )\n\n self.events = [\n '%.2f%%, %s' % (\n c / engine.call_count * 100, e\n ) for e, c in sorted(\n engine.event_call_counts.items(), key=lambda x: -x[1])\n ]\n\n total_runtime = math.fsum(engine.all_runtimes)\n total_drawtime = math.fsum(engine.all_drawtimes)\n\n if total_drawtime == 0.0 and total_runtime >= 0.0:\n self.draw_time_percentage = '~ 0%'\n elif total_drawtime < 0.0 or total_runtime <= 0.0:\n # This weird condition is possible in two ways:\n # 1. drawtime and/or runtime are negative, due to clock changes\n # on Python 2 or old OSs (we use monotonic() where available)\n # 2. floating-point issues *very rarely* cause math.fsum to be\n # off by the lowest bit, so drawtime==0 and runtime!=0, eek!\n self.draw_time_percentage = 'NaN'\n else:\n draw_time_percentage = 100.0 * min(\n 1, total_drawtime / total_runtime)\n\n self.draw_time_percentage = '~ %d%%' % (\n round(draw_time_percentage),)\n\n\ndef note_engine_for_statistics(engine):\n callback = collector.value\n if callback is not None:\n callback(Statistics(engine))\n", "path": "hypothesis-python/src/hypothesis/statistics.py"}]} | 1,653 | 293 |