The dataset has nine columns. Types and observed value ranges (from the dataset viewer):

| Column | Type | Values |
| --- | --- | --- |
| problem_id | string | length 18–22 |
| source | string | 1 distinct value |
| task_type | string | 1 distinct value |
| in_source_id | string | length 13–58 |
| prompt | string | length 1.1k–10.2k |
| golden_diff | string | length 151–4.94k |
| verification_info | string | length 582–21k |
| num_tokens | int64 | 271–2.05k |
| num_tokens_diff | int64 | 47–1.02k |

Each sample record below is rendered in column order, pipe-delimited: problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens | num_tokens_diff.
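For readers who want to work with these records programmatically, here is a minimal sketch using the Hugging Face `datasets` library. The dataset id is taken from the `source` column shown in every record; the split name is an assumption.

```python
# Minimal sketch, assuming the records are published under the id shown in the
# `source` column and live in a "train" split (the split name is an assumption).
import json
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")
row = ds[0]

print(row["problem_id"], row["in_source_id"], row["task_type"])
print(row["num_tokens"], row["num_tokens_diff"])

# verification_info is a JSON string carrying the issue, the golden diff, and
# the before/after file snapshots used to check a candidate patch.
info = json.loads(row["verification_info"])
print(sorted(info))  # ['after_files', 'before_files', 'golden_diff', 'issue']
```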
gh_patches_debug_12370 | rasdani/github-patches | git_diff | google__openhtf-181 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add ability to pass in dut_serial via command line config
Add ability to pass in dut_serial via command line. This would be a useful feature when doing loop test
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openhtf/exe/triggers.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Module for handling the triggering of test start/stop.
16
17 In order for the TestExecutor (see exe/__init__.py) to know when to start a
18 test, it needs a way to know when a DUT has been connected. Also, the test
19 can't restart until the DUT is removed and re-appears. The serial for the
20 TestRun can be read from the DUT, or from the frontend.
21
22 This module provides some built-in triggers. Custom implementations of test
23 start and stop triggers must follow the following interface:
24
25 TestStart:
26 Args:
27 None
28 Returns:
29 DUT identifier, or None if it is not known at test start time.
30
31 TestStop:
32 Args:
33 dut_id: DUT identifier of the test that is stopping.
34 Returns:
35 Blocks until the test can re-start, then returns None.
36 """
37
38 import logging
39 import time
40
41 from openhtf.io import user_input
42
43 _LOG = logging.getLogger(__name__)
44
45
46 def AutoStart(): # pylint: disable=invalid-name
47 """Start the test immediately with a dummy DUT ID."""
48 return 'UNKNOWN_DUT_ID'
49
50
51 def AutoStop(dummy_dut_id): # pylint: disable=invalid-name
52 """Stop the test immediately regardless of DUT ID given."""
53 pass
54
55
56 # pylint: disable=invalid-name
57 def PromptForTestStart(message='Provide a DUT ID in order to start the test.',
58 text_input=True):
59 """Make a test start trigger based on prompting the user for input."""
60 def trigger(): # pylint: disable=missing-docstring
61 prompt_manager = user_input.get_prompt_manager()
62 return prompt_manager.DisplayPrompt(message, text_input=text_input)
63 return trigger
64
65
66 def PromptForTestStop(message='Hit ENTER to complete the test.',
67 text_input=False):
68 """Make a test stop trigger based on prompting the user for a response."""
69 def trigger(dummy_dut_id): # pylint: disable=missing-docstring
70 prompt_manager = user_input.get_prompt_manager()
71 return prompt_manager.DisplayPrompt(message, text_input=text_input)
72 return trigger
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openhtf/exe/triggers.py b/openhtf/exe/triggers.py
--- a/openhtf/exe/triggers.py
+++ b/openhtf/exe/triggers.py
@@ -38,14 +38,20 @@
import logging
import time
+import gflags
+
from openhtf.io import user_input
-_LOG = logging.getLogger(__name__)
+gflags.DEFINE_string('dut_serial', 'UNKNOWN_DUT_ID',
+ 'DUT serial to start the test with. '
+ 'Only use if using the AutoStart trigger.')
+FLAGS = gflags.FLAGS
+_LOG = logging.getLogger(__name__)
def AutoStart(): # pylint: disable=invalid-name
"""Start the test immediately with a dummy DUT ID."""
- return 'UNKNOWN_DUT_ID'
+ return FLAGS.dut_serial
def AutoStop(dummy_dut_id): # pylint: disable=invalid-name
| {"golden_diff": "diff --git a/openhtf/exe/triggers.py b/openhtf/exe/triggers.py\n--- a/openhtf/exe/triggers.py\n+++ b/openhtf/exe/triggers.py\n@@ -38,14 +38,20 @@\n import logging\n import time\n \n+import gflags\n+\n from openhtf.io import user_input\n \n-_LOG = logging.getLogger(__name__)\n+gflags.DEFINE_string('dut_serial', 'UNKNOWN_DUT_ID',\n+ 'DUT serial to start the test with. '\n+ 'Only use if using the AutoStart trigger.')\n \n+FLAGS = gflags.FLAGS\n+_LOG = logging.getLogger(__name__)\n \n def AutoStart(): # pylint: disable=invalid-name\n \"\"\"Start the test immediately with a dummy DUT ID.\"\"\"\n- return 'UNKNOWN_DUT_ID'\n+ return FLAGS.dut_serial\n \n \n def AutoStop(dummy_dut_id): # pylint: disable=invalid-name\n", "issue": "Add ability to pass in dut_serial via command line config\nAdd ability to pass in dut_serial via command line. This would be a useful feature when doing loop test\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Module for handling the triggering of test start/stop.\n\nIn order for the TestExecutor (see exe/__init__.py) to know when to start a\ntest, it needs a way to know when a DUT has been connected. Also, the test\ncan't restart until the DUT is removed and re-appears. The serial for the\nTestRun can be read from the DUT, or from the frontend.\n\nThis module provides some built-in triggers. 
Custom implementations of test\nstart and stop triggers must follow the following interface:\n\nTestStart:\n Args:\n None\n Returns:\n DUT identifier, or None if it is not known at test start time.\n\nTestStop:\n Args:\n dut_id: DUT identifier of the test that is stopping.\n Returns:\n Blocks until the test can re-start, then returns None.\n\"\"\"\n\nimport logging\nimport time\n\nfrom openhtf.io import user_input\n\n_LOG = logging.getLogger(__name__)\n\n\ndef AutoStart(): # pylint: disable=invalid-name\n \"\"\"Start the test immediately with a dummy DUT ID.\"\"\"\n return 'UNKNOWN_DUT_ID'\n\n\ndef AutoStop(dummy_dut_id): # pylint: disable=invalid-name\n \"\"\"Stop the test immediately regardless of DUT ID given.\"\"\"\n pass\n\n\n# pylint: disable=invalid-name\ndef PromptForTestStart(message='Provide a DUT ID in order to start the test.',\n text_input=True):\n \"\"\"Make a test start trigger based on prompting the user for input.\"\"\"\n def trigger(): # pylint: disable=missing-docstring\n prompt_manager = user_input.get_prompt_manager()\n return prompt_manager.DisplayPrompt(message, text_input=text_input)\n return trigger\n\n\ndef PromptForTestStop(message='Hit ENTER to complete the test.',\n text_input=False):\n \"\"\"Make a test stop trigger based on prompting the user for a response.\"\"\"\n def trigger(dummy_dut_id): # pylint: disable=missing-docstring\n prompt_manager = user_input.get_prompt_manager()\n return prompt_manager.DisplayPrompt(message, text_input=text_input)\n return trigger\n", "path": "openhtf/exe/triggers.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. All Rights Reserved.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Module for handling the triggering of test start/stop.\n\nIn order for the TestExecutor (see exe/__init__.py) to know when to start a\ntest, it needs a way to know when a DUT has been connected. Also, the test\ncan't restart until the DUT is removed and re-appears. The serial for the\nTestRun can be read from the DUT, or from the frontend.\n\nThis module provides some built-in triggers. Custom implementations of test\nstart and stop triggers must follow the following interface:\n\nTestStart:\n Args:\n None\n Returns:\n DUT identifier, or None if it is not known at test start time.\n\nTestStop:\n Args:\n dut_id: DUT identifier of the test that is stopping.\n Returns:\n Blocks until the test can re-start, then returns None.\n\"\"\"\n\nimport logging\nimport time\n\nimport gflags\n\nfrom openhtf.io import user_input\n\ngflags.DEFINE_string('dut_serial', 'UNKNOWN_DUT_ID',\n 'DUT serial to start the test with. 
'\n 'Only use if using the AutoStart trigger.')\n\nFLAGS = gflags.FLAGS\n_LOG = logging.getLogger(__name__)\n\ndef AutoStart(): # pylint: disable=invalid-name\n \"\"\"Start the test immediately with a dummy DUT ID.\"\"\"\n return FLAGS.dut_serial\n\n\ndef AutoStop(dummy_dut_id): # pylint: disable=invalid-name\n \"\"\"Stop the test immediately regardless of DUT ID given.\"\"\"\n pass\n\n\n# pylint: disable=invalid-name\ndef PromptForTestStart(message='Provide a DUT ID in order to start the test.',\n text_input=True):\n \"\"\"Make a test start trigger based on prompting the user for input.\"\"\"\n def trigger(): # pylint: disable=missing-docstring\n prompt_manager = user_input.get_prompt_manager()\n return prompt_manager.DisplayPrompt(message, text_input=text_input)\n return trigger\n\n\ndef PromptForTestStop(message='Hit ENTER to complete the test.',\n text_input=False):\n \"\"\"Make a test stop trigger based on prompting the user for a response.\"\"\"\n def trigger(dummy_dut_id): # pylint: disable=missing-docstring\n prompt_manager = user_input.get_prompt_manager()\n return prompt_manager.DisplayPrompt(message, text_input=text_input)\n return trigger\n", "path": "openhtf/exe/triggers.py"}]} | 1,022 | 207 |
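As an aside on how a record like the one above can be exercised: `verification_info` bundles the issue, the golden diff, and the before/after file snapshots, so one plausible harness materializes `before_files` in a scratch directory and dry-runs a candidate diff with `git apply --check`. The helper below is an illustrative sketch under those assumptions, not something the dataset itself prescribes.

```python
# Illustrative sketch only: one way to exercise a record's verification_info.
# The scratch-directory layout and the use of `git apply --check` are
# assumptions, not part of the dataset format.
import json
import pathlib
import subprocess
import tempfile

def check_patch(verification_info: str, candidate_diff: str) -> bool:
    info = json.loads(verification_info)
    with tempfile.TemporaryDirectory() as tmp:
        root = pathlib.Path(tmp)
        # Materialize the buggy snapshot from before_files.
        for f in info["before_files"]:
            target = root / f["path"]
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_text(f["content"])
        # Dry-run the candidate diff; returncode 0 means it applies cleanly.
        result = subprocess.run(
            ["git", "apply", "--check"],
            cwd=root, input=candidate_diff, text=True,
        )
        return result.returncode == 0
```

A stricter harness would apply the patch for real and compare the result against `after_files`; the sketch stops at a clean-apply check.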
gh_patches_debug_12079 | rasdani/github-patches | git_diff | pretalx__pretalx-185 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Restructure CSS
- [x] Use `$brand_color`
- [ ] Break SCSS up into more files
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pretalx/common/tasks.py`
Content:
```
1 import hashlib
2 import os
3
4 import django_libsass
5 import sass
6 from django.conf import settings
7 from django.core.files.base import ContentFile
8 from django.core.files.storage import default_storage
9 from django.templatetags.static import static
10
11 from pretalx.celery_app import app
12 from pretalx.event.models import Event
13
14
15 @app.task()
16 def regenerate_css(event_id: int):
17 event = Event.objects.get(pk=event_id)
18 local_apps = ['cfp', 'orga']
19
20 if not event.primary_color:
21 for local_app in local_apps:
22 event.settings.delete(f'{local_app}_css_file')
23 return
24
25 for local_app in local_apps:
26 sassrules = []
27 if event.primary_color:
28 sassrules.append('$brand-primary: {};'.format(event.primary_color))
29
30 path = os.path.join(settings.STATIC_ROOT, local_app, 'scss/main.scss')
31 sassrules.append(f'@import "{path}";')
32
33 cf = dict(django_libsass.CUSTOM_FUNCTIONS)
34 cf['static'] = static
35 css = sass.compile(
36 string="\n".join(sassrules),
37 output_style='compressed',
38 custom_functions=cf
39 )
40 checksum = hashlib.sha1(css.encode('utf-8')).hexdigest()
41 fname = f'{event.slug}/{local_app}.{checksum[:16]}.css'
42
43 if event.settings.get(f'{local_app}_css_checksum', '') != checksum:
44 newname = default_storage.save(fname, ContentFile(css.encode('utf-8')))
45 event.settings.set(f'{local_app}_css_file', f'/media/{newname}')
46 event.settings.set(f'{local_app}_css_checksum', checksum)
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pretalx/common/tasks.py b/src/pretalx/common/tasks.py
--- a/src/pretalx/common/tasks.py
+++ b/src/pretalx/common/tasks.py
@@ -23,12 +23,12 @@
return
for local_app in local_apps:
+ path = os.path.join(settings.STATIC_ROOT, local_app, 'scss/main.scss')
sassrules = []
+
if event.primary_color:
sassrules.append('$brand-primary: {};'.format(event.primary_color))
-
- path = os.path.join(settings.STATIC_ROOT, local_app, 'scss/main.scss')
- sassrules.append(f'@import "{path}";')
+ sassrules.append(f'@import "{path}";')
cf = dict(django_libsass.CUSTOM_FUNCTIONS)
cf['static'] = static
| {"golden_diff": "diff --git a/src/pretalx/common/tasks.py b/src/pretalx/common/tasks.py\n--- a/src/pretalx/common/tasks.py\n+++ b/src/pretalx/common/tasks.py\n@@ -23,12 +23,12 @@\n return\n \n for local_app in local_apps:\n+ path = os.path.join(settings.STATIC_ROOT, local_app, 'scss/main.scss')\n sassrules = []\n+\n if event.primary_color:\n sassrules.append('$brand-primary: {};'.format(event.primary_color))\n-\n- path = os.path.join(settings.STATIC_ROOT, local_app, 'scss/main.scss')\n- sassrules.append(f'@import \"{path}\";')\n+ sassrules.append(f'@import \"{path}\";')\n \n cf = dict(django_libsass.CUSTOM_FUNCTIONS)\n cf['static'] = static\n", "issue": "Restructure CSS\n- [x] Use `$brand_color`\r\n- [ ] Break SCSS up into more files \n", "before_files": [{"content": "import hashlib\nimport os\n\nimport django_libsass\nimport sass\nfrom django.conf import settings\nfrom django.core.files.base import ContentFile\nfrom django.core.files.storage import default_storage\nfrom django.templatetags.static import static\n\nfrom pretalx.celery_app import app\nfrom pretalx.event.models import Event\n\n\[email protected]()\ndef regenerate_css(event_id: int):\n event = Event.objects.get(pk=event_id)\n local_apps = ['cfp', 'orga']\n\n if not event.primary_color:\n for local_app in local_apps:\n event.settings.delete(f'{local_app}_css_file')\n return\n\n for local_app in local_apps:\n sassrules = []\n if event.primary_color:\n sassrules.append('$brand-primary: {};'.format(event.primary_color))\n\n path = os.path.join(settings.STATIC_ROOT, local_app, 'scss/main.scss')\n sassrules.append(f'@import \"{path}\";')\n\n cf = dict(django_libsass.CUSTOM_FUNCTIONS)\n cf['static'] = static\n css = sass.compile(\n string=\"\\n\".join(sassrules),\n output_style='compressed',\n custom_functions=cf\n )\n checksum = hashlib.sha1(css.encode('utf-8')).hexdigest()\n fname = f'{event.slug}/{local_app}.{checksum[:16]}.css'\n\n if event.settings.get(f'{local_app}_css_checksum', '') != checksum:\n newname = default_storage.save(fname, ContentFile(css.encode('utf-8')))\n event.settings.set(f'{local_app}_css_file', f'/media/{newname}')\n event.settings.set(f'{local_app}_css_checksum', checksum)\n", "path": "src/pretalx/common/tasks.py"}], "after_files": [{"content": "import hashlib\nimport os\n\nimport django_libsass\nimport sass\nfrom django.conf import settings\nfrom django.core.files.base import ContentFile\nfrom django.core.files.storage import default_storage\nfrom django.templatetags.static import static\n\nfrom pretalx.celery_app import app\nfrom pretalx.event.models import Event\n\n\[email protected]()\ndef regenerate_css(event_id: int):\n event = Event.objects.get(pk=event_id)\n local_apps = ['cfp', 'orga']\n\n if not event.primary_color:\n for local_app in local_apps:\n event.settings.delete(f'{local_app}_css_file')\n return\n\n for local_app in local_apps:\n path = os.path.join(settings.STATIC_ROOT, local_app, 'scss/main.scss')\n sassrules = []\n\n if event.primary_color:\n sassrules.append('$brand-primary: {};'.format(event.primary_color))\n sassrules.append(f'@import \"{path}\";')\n\n cf = dict(django_libsass.CUSTOM_FUNCTIONS)\n cf['static'] = static\n css = sass.compile(\n string=\"\\n\".join(sassrules),\n output_style='compressed',\n custom_functions=cf\n )\n checksum = hashlib.sha1(css.encode('utf-8')).hexdigest()\n fname = f'{event.slug}/{local_app}.{checksum[:16]}.css'\n\n if event.settings.get(f'{local_app}_css_checksum', '') != checksum:\n newname = default_storage.save(fname, 
ContentFile(css.encode('utf-8')))\n event.settings.set(f'{local_app}_css_file', f'/media/{newname}')\n event.settings.set(f'{local_app}_css_checksum', checksum)\n", "path": "src/pretalx/common/tasks.py"}]} | 732 | 186 |
gh_patches_debug_35351 | rasdani/github-patches | git_diff | microsoft__onnxscript-392 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ghpage: use the Furo theme
https://pradyunsg.me/furo/. It’s pretty and more modern. Used by black https://black.readthedocs.io/en/stable/#
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 # To run the documentation: python -m sphinx docs dist/html
3
4 import os
5 import re
6 import sys
7
8 import sphinx_gallery.sorting
9
10 import onnxscript
11
12 # -- Project information -----------------------------------------------------
13
14 project = "onnx-script"
15 copyright = "2022, onnx"
16 author = "onnx"
17 version = onnxscript.__version__
18 release = version
19
20 # -- General configuration ---------------------------------------------------
21
22 extensions = [
23 "sphinx.ext.intersphinx",
24 "sphinx.ext.imgmath",
25 "sphinx.ext.ifconfig",
26 "sphinx.ext.viewcode",
27 "sphinx.ext.autodoc",
28 "sphinx.ext.githubpages",
29 "sphinx_gallery.gen_gallery",
30 "sphinx.ext.autodoc",
31 "sphinx.ext.graphviz",
32 "sphinx.ext.doctest",
33 "sphinx.ext.napoleon",
34 ]
35
36 templates_path = ["_templates"]
37 source_suffix = [".rst"]
38
39 master_doc = "index"
40 language = "en"
41 exclude_patterns = []
42 pygments_style = "default"
43
44 # -- Options for HTML output -------------------------------------------------
45
46 html_static_path = ["_static"]
47 html_theme = "pydata_sphinx_theme"
48 html_theme_path = ["_static"]
49 html_theme_options = {
50 "logo": {
51 "image_light": "logo-light.png",
52 "image_dark": "logo-dark.png",
53 }
54 }
55 html_css_files = ["css/custom.css"]
56
57 # -- Options for graphviz ----------------------------------------------------
58
59 graphviz_output_format = "svg"
60
61 # -- Options for intersphinx extension ---------------------------------------
62
63 # Example configuration for intersphinx: refer to the Python standard library.
64 intersphinx_mapping = {"https://docs.python.org/": None}
65
66 # -- Options for Sphinx Gallery ----------------------------------------------
67
68 intersphinx_mapping = {
69 "python": (f"https://docs.python.org/{sys.version_info.major}", None),
70 "matplotlib": ("https://matplotlib.org/", None),
71 "numpy": ("https://docs.scipy.org/doc/numpy/", None),
72 "onnxruntime": ("https://onnxruntime.ai/docs/api/python/", None),
73 }
74
75 sphinx_gallery_conf = {
76 "examples_dirs": ["examples"],
77 "gallery_dirs": ["auto_examples"],
78 "capture_repr": ("_repr_html_", "__repr__"),
79 "ignore_repr_types": r"matplotlib.text|matplotlib.axes",
80 "filename_pattern": f"{re.escape(os.sep)}[0-9]*_?plot_",
81 "within_subsection_order": sphinx_gallery.sorting.FileNameSortKey,
82 }
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -12,7 +12,7 @@
# -- Project information -----------------------------------------------------
project = "onnx-script"
-copyright = "2022, onnx"
+copyright = "2023, onnx"
author = "onnx"
version = onnxscript.__version__
release = version
@@ -31,10 +31,11 @@
"sphinx.ext.graphviz",
"sphinx.ext.doctest",
"sphinx.ext.napoleon",
+ "sphinx_copybutton",
]
templates_path = ["_templates"]
-source_suffix = [".rst"]
+source_suffix = [".rst", ".md"]
master_doc = "index"
language = "en"
@@ -44,13 +45,12 @@
# -- Options for HTML output -------------------------------------------------
html_static_path = ["_static"]
-html_theme = "pydata_sphinx_theme"
+html_theme = "furo"
html_theme_path = ["_static"]
html_theme_options = {
- "logo": {
- "image_light": "logo-light.png",
- "image_dark": "logo-dark.png",
- }
+ "light_logo": "logo-light.png",
+ "dark_logo": "logo-dark.png",
+ "sidebar_hide_name": True,
}
html_css_files = ["css/custom.css"]
@@ -60,11 +60,6 @@
# -- Options for intersphinx extension ---------------------------------------
-# Example configuration for intersphinx: refer to the Python standard library.
-intersphinx_mapping = {"https://docs.python.org/": None}
-
-# -- Options for Sphinx Gallery ----------------------------------------------
-
intersphinx_mapping = {
"python": (f"https://docs.python.org/{sys.version_info.major}", None),
"matplotlib": ("https://matplotlib.org/", None),
@@ -72,6 +67,8 @@
"onnxruntime": ("https://onnxruntime.ai/docs/api/python/", None),
}
+# -- Options for Sphinx Gallery ----------------------------------------------
+
sphinx_gallery_conf = {
"examples_dirs": ["examples"],
"gallery_dirs": ["auto_examples"],
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -12,7 +12,7 @@\n # -- Project information -----------------------------------------------------\n \n project = \"onnx-script\"\n-copyright = \"2022, onnx\"\n+copyright = \"2023, onnx\"\n author = \"onnx\"\n version = onnxscript.__version__\n release = version\n@@ -31,10 +31,11 @@\n \"sphinx.ext.graphviz\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.napoleon\",\n+ \"sphinx_copybutton\",\n ]\n \n templates_path = [\"_templates\"]\n-source_suffix = [\".rst\"]\n+source_suffix = [\".rst\", \".md\"]\n \n master_doc = \"index\"\n language = \"en\"\n@@ -44,13 +45,12 @@\n # -- Options for HTML output -------------------------------------------------\n \n html_static_path = [\"_static\"]\n-html_theme = \"pydata_sphinx_theme\"\n+html_theme = \"furo\"\n html_theme_path = [\"_static\"]\n html_theme_options = {\n- \"logo\": {\n- \"image_light\": \"logo-light.png\",\n- \"image_dark\": \"logo-dark.png\",\n- }\n+ \"light_logo\": \"logo-light.png\",\n+ \"dark_logo\": \"logo-dark.png\",\n+ \"sidebar_hide_name\": True,\n }\n html_css_files = [\"css/custom.css\"]\n \n@@ -60,11 +60,6 @@\n \n # -- Options for intersphinx extension ---------------------------------------\n \n-# Example configuration for intersphinx: refer to the Python standard library.\n-intersphinx_mapping = {\"https://docs.python.org/\": None}\n-\n-# -- Options for Sphinx Gallery ----------------------------------------------\n-\n intersphinx_mapping = {\n \"python\": (f\"https://docs.python.org/{sys.version_info.major}\", None),\n \"matplotlib\": (\"https://matplotlib.org/\", None),\n@@ -72,6 +67,8 @@\n \"onnxruntime\": (\"https://onnxruntime.ai/docs/api/python/\", None),\n }\n \n+# -- Options for Sphinx Gallery ----------------------------------------------\n+\n sphinx_gallery_conf = {\n \"examples_dirs\": [\"examples\"],\n \"gallery_dirs\": [\"auto_examples\"],\n", "issue": "ghpage: use the Furo theme\nhttps://pradyunsg.me/furo/. It\u2019s pretty and more modern. 
Used by black https://black.readthedocs.io/en/stable/#\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n# To run the documentation: python -m sphinx docs dist/html\n\nimport os\nimport re\nimport sys\n\nimport sphinx_gallery.sorting\n\nimport onnxscript\n\n# -- Project information -----------------------------------------------------\n\nproject = \"onnx-script\"\ncopyright = \"2022, onnx\"\nauthor = \"onnx\"\nversion = onnxscript.__version__\nrelease = version\n\n# -- General configuration ---------------------------------------------------\n\nextensions = [\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.imgmath\",\n \"sphinx.ext.ifconfig\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.githubpages\",\n \"sphinx_gallery.gen_gallery\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.graphviz\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.napoleon\",\n]\n\ntemplates_path = [\"_templates\"]\nsource_suffix = [\".rst\"]\n\nmaster_doc = \"index\"\nlanguage = \"en\"\nexclude_patterns = []\npygments_style = \"default\"\n\n# -- Options for HTML output -------------------------------------------------\n\nhtml_static_path = [\"_static\"]\nhtml_theme = \"pydata_sphinx_theme\"\nhtml_theme_path = [\"_static\"]\nhtml_theme_options = {\n \"logo\": {\n \"image_light\": \"logo-light.png\",\n \"image_dark\": \"logo-dark.png\",\n }\n}\nhtml_css_files = [\"css/custom.css\"]\n\n# -- Options for graphviz ----------------------------------------------------\n\ngraphviz_output_format = \"svg\"\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\"https://docs.python.org/\": None}\n\n# -- Options for Sphinx Gallery ----------------------------------------------\n\nintersphinx_mapping = {\n \"python\": (f\"https://docs.python.org/{sys.version_info.major}\", None),\n \"matplotlib\": (\"https://matplotlib.org/\", None),\n \"numpy\": (\"https://docs.scipy.org/doc/numpy/\", None),\n \"onnxruntime\": (\"https://onnxruntime.ai/docs/api/python/\", None),\n}\n\nsphinx_gallery_conf = {\n \"examples_dirs\": [\"examples\"],\n \"gallery_dirs\": [\"auto_examples\"],\n \"capture_repr\": (\"_repr_html_\", \"__repr__\"),\n \"ignore_repr_types\": r\"matplotlib.text|matplotlib.axes\",\n \"filename_pattern\": f\"{re.escape(os.sep)}[0-9]*_?plot_\",\n \"within_subsection_order\": sphinx_gallery.sorting.FileNameSortKey,\n}\n", "path": "docs/conf.py"}], "after_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n# To run the documentation: python -m sphinx docs dist/html\n\nimport os\nimport re\nimport sys\n\nimport sphinx_gallery.sorting\n\nimport onnxscript\n\n# -- Project information -----------------------------------------------------\n\nproject = \"onnx-script\"\ncopyright = \"2023, onnx\"\nauthor = \"onnx\"\nversion = onnxscript.__version__\nrelease = version\n\n# -- General configuration ---------------------------------------------------\n\nextensions = [\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.imgmath\",\n \"sphinx.ext.ifconfig\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.githubpages\",\n \"sphinx_gallery.gen_gallery\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.graphviz\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.napoleon\",\n \"sphinx_copybutton\",\n]\n\ntemplates_path = [\"_templates\"]\nsource_suffix = [\".rst\", \".md\"]\n\nmaster_doc = \"index\"\nlanguage = 
\"en\"\nexclude_patterns = []\npygments_style = \"default\"\n\n# -- Options for HTML output -------------------------------------------------\n\nhtml_static_path = [\"_static\"]\nhtml_theme = \"furo\"\nhtml_theme_path = [\"_static\"]\nhtml_theme_options = {\n \"light_logo\": \"logo-light.png\",\n \"dark_logo\": \"logo-dark.png\",\n \"sidebar_hide_name\": True,\n}\nhtml_css_files = [\"css/custom.css\"]\n\n# -- Options for graphviz ----------------------------------------------------\n\ngraphviz_output_format = \"svg\"\n\n# -- Options for intersphinx extension ---------------------------------------\n\nintersphinx_mapping = {\n \"python\": (f\"https://docs.python.org/{sys.version_info.major}\", None),\n \"matplotlib\": (\"https://matplotlib.org/\", None),\n \"numpy\": (\"https://docs.scipy.org/doc/numpy/\", None),\n \"onnxruntime\": (\"https://onnxruntime.ai/docs/api/python/\", None),\n}\n\n# -- Options for Sphinx Gallery ----------------------------------------------\n\nsphinx_gallery_conf = {\n \"examples_dirs\": [\"examples\"],\n \"gallery_dirs\": [\"auto_examples\"],\n \"capture_repr\": (\"_repr_html_\", \"__repr__\"),\n \"ignore_repr_types\": r\"matplotlib.text|matplotlib.axes\",\n \"filename_pattern\": f\"{re.escape(os.sep)}[0-9]*_?plot_\",\n \"within_subsection_order\": sphinx_gallery.sorting.FileNameSortKey,\n}\n", "path": "docs/conf.py"}]} | 998 | 479 |
gh_patches_debug_49854 | rasdani/github-patches | git_diff | urllib3__urllib3-678 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
setup.py extra_require for certifi, pyopenssl, other?
Try to setup as many of https://urllib3.readthedocs.org/en/latest/security.html#security's recommended dependencies as possible.
Maybe something like `pip install urllib3[secure]`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from distutils.core import setup
4
5 import os
6 import re
7
8 try:
9 import setuptools
10 except ImportError:
11 pass # No 'develop' command, oh well.
12
13 base_path = os.path.dirname(__file__)
14
15 # Get the version (borrowed from SQLAlchemy)
16 fp = open(os.path.join(base_path, 'urllib3', '__init__.py'))
17 VERSION = re.compile(r".*__version__ = '(.*?)'",
18 re.S).match(fp.read()).group(1)
19 fp.close()
20
21
22 version = VERSION
23
24 setup(name='urllib3',
25 version=version,
26 description="HTTP library with thread-safe connection pooling, file post, and more.",
27 long_description=open('README.rst').read() + '\n\n' + open('CHANGES.rst').read(),
28 classifiers=[
29 'Environment :: Web Environment',
30 'Intended Audience :: Developers',
31 'License :: OSI Approved :: MIT License',
32 'Operating System :: OS Independent',
33 'Programming Language :: Python',
34 'Programming Language :: Python :: 2',
35 'Programming Language :: Python :: 3',
36 'Topic :: Internet :: WWW/HTTP',
37 'Topic :: Software Development :: Libraries',
38 ],
39 keywords='urllib httplib threadsafe filepost http https ssl pooling',
40 author='Andrey Petrov',
41 author_email='[email protected]',
42 url='http://urllib3.readthedocs.org/',
43 license='MIT',
44 packages=['urllib3',
45 'urllib3.packages', 'urllib3.packages.ssl_match_hostname',
46 'urllib3.contrib', 'urllib3.util',
47 ],
48 requires=[],
49 tests_require=[
50 # These are a less-specific subset of dev-requirements.txt, for the
51 # convenience of distro package maintainers.
52 'nose',
53 'mock',
54 'tornado',
55 ],
56 test_suite='test',
57 )
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -54,4 +54,15 @@
'tornado',
],
test_suite='test',
+ extras_require={
+ 'secure;python_version<="2.7"': [
+ 'pyOpenSSL',
+ 'ndg-httpsclient',
+ 'pyasn1',
+ 'certifi',
+ ],
+ 'secure;python_version>"2.7"': [
+ 'certifi',
+ ],
+ },
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -54,4 +54,15 @@\n 'tornado',\n ],\n test_suite='test',\n+ extras_require={\n+ 'secure;python_version<=\"2.7\"': [\n+ 'pyOpenSSL',\n+ 'ndg-httpsclient',\n+ 'pyasn1',\n+ 'certifi',\n+ ],\n+ 'secure;python_version>\"2.7\"': [\n+ 'certifi',\n+ ],\n+ },\n )\n", "issue": "setup.py extra_require for certifi, pyopenssl, other?\nTry to setup as many of https://urllib3.readthedocs.org/en/latest/security.html#security's recommended dependencies as possible.\n\nMaybe something like `pip install urllib3[secure]`\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom distutils.core import setup\n\nimport os\nimport re\n\ntry:\n import setuptools\nexcept ImportError:\n pass # No 'develop' command, oh well.\n\nbase_path = os.path.dirname(__file__)\n\n# Get the version (borrowed from SQLAlchemy)\nfp = open(os.path.join(base_path, 'urllib3', '__init__.py'))\nVERSION = re.compile(r\".*__version__ = '(.*?)'\",\n re.S).match(fp.read()).group(1)\nfp.close()\n\n\nversion = VERSION\n\nsetup(name='urllib3',\n version=version,\n description=\"HTTP library with thread-safe connection pooling, file post, and more.\",\n long_description=open('README.rst').read() + '\\n\\n' + open('CHANGES.rst').read(),\n classifiers=[\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries',\n ],\n keywords='urllib httplib threadsafe filepost http https ssl pooling',\n author='Andrey Petrov',\n author_email='[email protected]',\n url='http://urllib3.readthedocs.org/',\n license='MIT',\n packages=['urllib3',\n 'urllib3.packages', 'urllib3.packages.ssl_match_hostname',\n 'urllib3.contrib', 'urllib3.util',\n ],\n requires=[],\n tests_require=[\n # These are a less-specific subset of dev-requirements.txt, for the\n # convenience of distro package maintainers.\n 'nose',\n 'mock',\n 'tornado',\n ],\n test_suite='test',\n )\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom distutils.core import setup\n\nimport os\nimport re\n\ntry:\n import setuptools\nexcept ImportError:\n pass # No 'develop' command, oh well.\n\nbase_path = os.path.dirname(__file__)\n\n# Get the version (borrowed from SQLAlchemy)\nfp = open(os.path.join(base_path, 'urllib3', '__init__.py'))\nVERSION = re.compile(r\".*__version__ = '(.*?)'\",\n re.S).match(fp.read()).group(1)\nfp.close()\n\n\nversion = VERSION\n\nsetup(name='urllib3',\n version=version,\n description=\"HTTP library with thread-safe connection pooling, file post, and more.\",\n long_description=open('README.rst').read() + '\\n\\n' + open('CHANGES.rst').read(),\n classifiers=[\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries',\n ],\n keywords='urllib httplib threadsafe filepost http https ssl pooling',\n author='Andrey Petrov',\n author_email='[email protected]',\n url='http://urllib3.readthedocs.org/',\n license='MIT',\n packages=['urllib3',\n 'urllib3.packages', 
'urllib3.packages.ssl_match_hostname',\n 'urllib3.contrib', 'urllib3.util',\n ],\n requires=[],\n tests_require=[\n # These are a less-specific subset of dev-requirements.txt, for the\n # convenience of distro package maintainers.\n 'nose',\n 'mock',\n 'tornado',\n ],\n test_suite='test',\n extras_require={\n 'secure;python_version<=\"2.7\"': [\n 'pyOpenSSL',\n 'ndg-httpsclient',\n 'pyasn1',\n 'certifi',\n ],\n 'secure;python_version>\"2.7\"': [\n 'certifi',\n ],\n },\n )\n", "path": "setup.py"}]} | 831 | 126 |
gh_patches_debug_2284 | rasdani/github-patches | git_diff | OCA__social-91 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to send email on new template
module: mail_notification_email_template
version: 8.0
**Steps to reproduce**
- Install Sales module and activate technical settings
-configure and test one outgoing server
- Create a new template for Quotation sent ( can keep most values the same as per the Discussions example) , although my final goal is to utilize fields from the sales Quotation
- Create a new draft quotation , add new followers (internal users and one customer)
- make sure that followers is part of the Quotation sent sub type (eg they will receive emails)
- Print the quotation
**Current behavior**
- Emails fails most of the time to all followers, although i have had cases where it succeeds in sending to one.
-remainder of emails is stuck showing failed status.
Log file shows :
**odoo attributeerror: 'mail.notification' object has no attribute '_get_access_link'**
**Expected behavior**
-All emails are supposed to go out using the configured template.
(i have had success where i manually configure the to Email field with email addresses, thus the issue is possibly with the formatting of the to(partners) field, which is currently
${object.partner_id.ids|join(',')}
I currently don't have access to my server logs, but will attach clean logs asap.
Is there any verbose mode i might need to activate? (there is debugging on the outgoing servers, but it doesn't seem to generate more data)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mail_notification_email_template/models/mail_notification.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # © 2016 Therp BV <http://therp.nl>
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).
4 from lxml import etree
5 from openerp import api, fields, models
6
7
8 class MailNotification(models.Model):
9 _inherit = 'mail.notification'
10
11 record = fields.Reference(
12 selection=lambda self: [
13 (m.model, m.name) for m in self.env['ir.model'].search([])
14 ],
15 compute='_compute_record')
16 record_access_link = fields.Char(compute='_compute_record')
17
18 @api.multi
19 def _notify_email(self, message_id, force_send=False, user_signature=True):
20 if not self.mapped('message_id.subtype_id.template_id'):
21 return super(MailNotification, self)._notify_email(
22 message_id, force_send=force_send,
23 user_signature=user_signature)
24 message_ids = []
25 for this in self:
26 if not this.mapped('message_id.subtype_id.template_id'):
27 super(MailNotification, this)._notify_email(
28 message_id, force_send=force_send,
29 user_signature=user_signature)
30 continue
31 message = this.message_id
32 if not this.get_partners_to_email(message):
33 continue
34 custom_values = {
35 'references': message.parent_id.message_id,
36 }
37 if message.res_id and hasattr(
38 self.env[message.model], 'message_get_email_values'
39 ):
40 message_values = self.env[message.model].browse(
41 message.res_id
42 ).message_get_email_values(message)
43 # message_get_email_values is guessed to @api.one
44 if message_values and isinstance(message_values, list):
45 message_values = message_values[0]
46 custom_values.update(message_values)
47 message_id = message.subtype_id.template_id.send_mail(this.id)
48 if 'mail_message_id' in custom_values:
49 custom_values.pop('mail_message_id')
50 self.env['mail.mail'].browse(message_id).write(custom_values)
51 message_ids.append(message_id)
52 return message_ids or True
53
54 @api.multi
55 def _compute_record(self):
56 for this in self:
57 if not this.message_id.model or not this.message_id.res_id:
58 continue
59 this.record = self.env[this.message_id.model].browse(
60 this.message_id.res_id)
61 link_html = self.env['mail.mail']._get_partner_access_link(
62 self.env['mail.mail'].new({
63 'notification': True,
64 'mail_message_id': this.message_id.id,
65 }),
66 this.partner_id
67 )
68 for a in etree.HTML(link_html).xpath('//a[@href]'):
69 this.record_access_link = a.get('href')
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mail_notification_email_template/models/mail_notification.py b/mail_notification_email_template/models/mail_notification.py
--- a/mail_notification_email_template/models/mail_notification.py
+++ b/mail_notification_email_template/models/mail_notification.py
@@ -65,5 +65,5 @@
}),
this.partner_id
)
- for a in etree.HTML(link_html).xpath('//a[@href]'):
+ for a in etree.HTML(link_html or '<html/>').xpath('//a[@href]'):
this.record_access_link = a.get('href')
| {"golden_diff": "diff --git a/mail_notification_email_template/models/mail_notification.py b/mail_notification_email_template/models/mail_notification.py\n--- a/mail_notification_email_template/models/mail_notification.py\n+++ b/mail_notification_email_template/models/mail_notification.py\n@@ -65,5 +65,5 @@\n }),\n this.partner_id\n )\n- for a in etree.HTML(link_html).xpath('//a[@href]'):\n+ for a in etree.HTML(link_html or '<html/>').xpath('//a[@href]'):\n this.record_access_link = a.get('href')\n", "issue": "Unable to send email on new template\nmodule: mail_notification_email_template\nversion: 8.0\n\n**Steps to reproduce**\n- Install Sales module and activate technical settings\n -configure and test one outgoing server\n- Create a new template for Quotation sent ( can keep most values the same as per the Discussions example) , although my final goal is to utilize fields from the sales Quotation\n- Create a new draft quotation , add new followers (internal users and one customer)\n- make sure that followers is part of the Quotation sent sub type (eg they will receive emails)\n- Print the quotation\n\n**Current behavior**\n- Emails fails most of the time to all followers, although i have had cases where it succeeds in sending to one.\n -remainder of emails is stuck showing failed status.\n Log file shows :\n **odoo attributeerror: 'mail.notification' object has no attribute '_get_access_link'**\n\n**Expected behavior**\n-All emails are supposed to go out using the configured template.\n\n(i have had success where i manually configure the to Email field with email addresses, thus the issue is possibly with the formatting of the to(partners) field, which is currently\n${object.partner_id.ids|join(',')}\n\nI currently don't have access to my server logs, but will attach clean logs asap. \nIs there any verbose mode i might need to activate? 
(there is debugging on the outgoing servers, but it doesn't seem to generate more data)\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# \u00a9 2016 Therp BV <http://therp.nl>\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).\nfrom lxml import etree\nfrom openerp import api, fields, models\n\n\nclass MailNotification(models.Model):\n _inherit = 'mail.notification'\n\n record = fields.Reference(\n selection=lambda self: [\n (m.model, m.name) for m in self.env['ir.model'].search([])\n ],\n compute='_compute_record')\n record_access_link = fields.Char(compute='_compute_record')\n\n @api.multi\n def _notify_email(self, message_id, force_send=False, user_signature=True):\n if not self.mapped('message_id.subtype_id.template_id'):\n return super(MailNotification, self)._notify_email(\n message_id, force_send=force_send,\n user_signature=user_signature)\n message_ids = []\n for this in self:\n if not this.mapped('message_id.subtype_id.template_id'):\n super(MailNotification, this)._notify_email(\n message_id, force_send=force_send,\n user_signature=user_signature)\n continue\n message = this.message_id\n if not this.get_partners_to_email(message):\n continue\n custom_values = {\n 'references': message.parent_id.message_id,\n }\n if message.res_id and hasattr(\n self.env[message.model], 'message_get_email_values'\n ):\n message_values = self.env[message.model].browse(\n message.res_id\n ).message_get_email_values(message)\n # message_get_email_values is guessed to @api.one\n if message_values and isinstance(message_values, list):\n message_values = message_values[0]\n custom_values.update(message_values)\n message_id = message.subtype_id.template_id.send_mail(this.id)\n if 'mail_message_id' in custom_values:\n custom_values.pop('mail_message_id')\n self.env['mail.mail'].browse(message_id).write(custom_values)\n message_ids.append(message_id)\n return message_ids or True\n\n @api.multi\n def _compute_record(self):\n for this in self:\n if not this.message_id.model or not this.message_id.res_id:\n continue\n this.record = self.env[this.message_id.model].browse(\n this.message_id.res_id)\n link_html = self.env['mail.mail']._get_partner_access_link(\n self.env['mail.mail'].new({\n 'notification': True,\n 'mail_message_id': this.message_id.id,\n }),\n this.partner_id\n )\n for a in etree.HTML(link_html).xpath('//a[@href]'):\n this.record_access_link = a.get('href')\n", "path": "mail_notification_email_template/models/mail_notification.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# \u00a9 2016 Therp BV <http://therp.nl>\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).\nfrom lxml import etree\nfrom openerp import api, fields, models\n\n\nclass MailNotification(models.Model):\n _inherit = 'mail.notification'\n\n record = fields.Reference(\n selection=lambda self: [\n (m.model, m.name) for m in self.env['ir.model'].search([])\n ],\n compute='_compute_record')\n record_access_link = fields.Char(compute='_compute_record')\n\n @api.multi\n def _notify_email(self, message_id, force_send=False, user_signature=True):\n if not self.mapped('message_id.subtype_id.template_id'):\n return super(MailNotification, self)._notify_email(\n message_id, force_send=force_send,\n user_signature=user_signature)\n message_ids = []\n for this in self:\n if not this.mapped('message_id.subtype_id.template_id'):\n super(MailNotification, this)._notify_email(\n message_id, force_send=force_send,\n user_signature=user_signature)\n continue\n message = 
this.message_id\n if not this.get_partners_to_email(message):\n continue\n custom_values = {\n 'references': message.parent_id.message_id,\n }\n if message.res_id and hasattr(\n self.env[message.model], 'message_get_email_values'\n ):\n message_values = self.env[message.model].browse(\n message.res_id\n ).message_get_email_values(message)\n # message_get_email_values is guessed to @api.one\n if message_values and isinstance(message_values, list):\n message_values = message_values[0]\n custom_values.update(message_values)\n message_id = message.subtype_id.template_id.send_mail(this.id)\n if 'mail_message_id' in custom_values:\n custom_values.pop('mail_message_id')\n self.env['mail.mail'].browse(message_id).write(custom_values)\n message_ids.append(message_id)\n return message_ids or True\n\n @api.multi\n def _compute_record(self):\n for this in self:\n if not this.message_id.model or not this.message_id.res_id:\n continue\n this.record = self.env[this.message_id.model].browse(\n this.message_id.res_id)\n link_html = self.env['mail.mail']._get_partner_access_link(\n self.env['mail.mail'].new({\n 'notification': True,\n 'mail_message_id': this.message_id.id,\n }),\n this.partner_id\n )\n for a in etree.HTML(link_html or '<html/>').xpath('//a[@href]'):\n this.record_access_link = a.get('href')\n", "path": "mail_notification_email_template/models/mail_notification.py"}]} | 1,274 | 114 |
gh_patches_debug_22969 | rasdani/github-patches | git_diff | TencentBlueKing__bk-user-935 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Security] Vulnerability scan: the backend Swagger UI is accessible in production
**Describe the problem you encountered**
In production, Swagger is not blocked and remains accessible.
**Steps to reproduce**

The corresponding source code is not gated:
https://github.com/TencentBlueKing/bk-user/blob/40ee56ed90d47b214274819394aa21ec7459b970/src/api/bkuser_core/apis/urls.py#L45
**Expected behavior**
The expected normal behavior
**Version**
- 2.5.2
- Is this an Enterprise Edition issue?
Yes
**If this is a SaaS page issue, please provide the operating system and browser information**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Additional information**
Anything you think would help resolve the issue
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/api/bkuser_core/apis/urls.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 TencentBlueKing is pleased to support the open source community by making 蓝鲸智云-用户管理(Bk-User) available.
4 Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
5 Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at http://opensource.org/licenses/MIT
7 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
8 an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
9 specific language governing permissions and limitations under the License.
10 """
11 from django.conf.urls import include, url
12 from drf_yasg import openapi
13 from drf_yasg.generators import OpenAPISchemaGenerator
14 from drf_yasg.views import get_schema_view
15 from rest_framework import permissions
16
17
18 class CustomOpenAPISchemaGenerator(OpenAPISchemaGenerator):
19 def get_schema(self, request=None, public=False):
20 schema = super().get_schema(request, public)
21 return schema
22
23
24 schema_view = get_schema_view(
25 openapi.Info(
26 title="蓝鲸用户管理 API",
27 default_version="v2",
28 description="蓝鲸用户管理后台服务 API",
29 license=openapi.License(name="MIT License"),
30 ),
31 public=True,
32 permission_classes=(permissions.AllowAny,),
33 generator_class=CustomOpenAPISchemaGenerator,
34 patterns=[
35 url(r"^", include("bkuser_core.monitoring.urls")),
36 url(r"^", include("bkuser_core.departments.urls")),
37 url(r"^", include("bkuser_core.profiles.urls")),
38 url(r"^", include("bkuser_core.categories.urls")),
39 # url(r"^", include("bkuser_core.user_settings.urls")),
40 # url(r"^", include("bkuser_core.audit.urls")),
41 ],
42 )
43
44
45 urlpatterns = [
46 url(r"^swagger(?P<format>\.json|\.yaml)$", schema_view.without_ui(cache_timeout=0), name="schema-json"),
47 url(r"^swagger/$", schema_view.with_ui("swagger", cache_timeout=0), name="schema-swagger-ui"),
48 url(r"^redoc/$", schema_view.with_ui("redoc", cache_timeout=0), name="schema-redoc"),
49 ]
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/api/bkuser_core/apis/urls.py b/src/api/bkuser_core/apis/urls.py
--- a/src/api/bkuser_core/apis/urls.py
+++ b/src/api/bkuser_core/apis/urls.py
@@ -8,6 +8,7 @@
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
+from django.conf import settings
from django.conf.urls import include, url
from drf_yasg import openapi
from drf_yasg.generators import OpenAPISchemaGenerator
@@ -41,9 +42,10 @@
],
)
-
-urlpatterns = [
- url(r"^swagger(?P<format>\.json|\.yaml)$", schema_view.without_ui(cache_timeout=0), name="schema-json"),
- url(r"^swagger/$", schema_view.with_ui("swagger", cache_timeout=0), name="schema-swagger-ui"),
- url(r"^redoc/$", schema_view.with_ui("redoc", cache_timeout=0), name="schema-redoc"),
-]
+urlpatterns = []
+if settings.DEBUG:
+ urlpatterns += [
+ url(r"^swagger(?P<format>\.json|\.yaml)$", schema_view.without_ui(cache_timeout=0), name="schema-json"),
+ url(r"^swagger/$", schema_view.with_ui("swagger", cache_timeout=0), name="schema-swagger-ui"),
+ url(r"^redoc/$", schema_view.with_ui("redoc", cache_timeout=0), name="schema-redoc"),
+ ]
| {"golden_diff": "diff --git a/src/api/bkuser_core/apis/urls.py b/src/api/bkuser_core/apis/urls.py\n--- a/src/api/bkuser_core/apis/urls.py\n+++ b/src/api/bkuser_core/apis/urls.py\n@@ -8,6 +8,7 @@\n an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\n specific language governing permissions and limitations under the License.\n \"\"\"\n+from django.conf import settings\n from django.conf.urls import include, url\n from drf_yasg import openapi\n from drf_yasg.generators import OpenAPISchemaGenerator\n@@ -41,9 +42,10 @@\n ],\n )\n \n-\n-urlpatterns = [\n- url(r\"^swagger(?P<format>\\.json|\\.yaml)$\", schema_view.without_ui(cache_timeout=0), name=\"schema-json\"),\n- url(r\"^swagger/$\", schema_view.with_ui(\"swagger\", cache_timeout=0), name=\"schema-swagger-ui\"),\n- url(r\"^redoc/$\", schema_view.with_ui(\"redoc\", cache_timeout=0), name=\"schema-redoc\"),\n-]\n+urlpatterns = []\n+if settings.DEBUG:\n+ urlpatterns += [\n+ url(r\"^swagger(?P<format>\\.json|\\.yaml)$\", schema_view.without_ui(cache_timeout=0), name=\"schema-json\"),\n+ url(r\"^swagger/$\", schema_view.with_ui(\"swagger\", cache_timeout=0), name=\"schema-swagger-ui\"),\n+ url(r\"^redoc/$\", schema_view.with_ui(\"redoc\", cache_timeout=0), name=\"schema-redoc\"),\n+ ]\n", "issue": "\u3010\u5b89\u5168\u3011\u5b89\u5168\u6f0f\u6d1e\u626b\u63cf\uff0c\u751f\u4ea7\u73af\u5883\u4e0b\u80fd\u8bbf\u95ee\u540e\u53f0swagger\n**\u7528\u6587\u5b57\u63cf\u8ff0\u4f60\u9047\u5230\u7684\u95ee\u9898**\r\n\r\n\u751f\u4ea7\u73af\u5883\uff0cswagger\u672a\u5c4f\u853d\u53ef\u8bbf\u95ee\r\n\r\n**\u91cd\u73b0\u65b9\u6cd5**\r\n\r\n\r\n\u5bf9\u5e94\u6e90\u7801\u672a\u5c4f\u853d\r\nhttps://github.com/TencentBlueKing/bk-user/blob/40ee56ed90d47b214274819394aa21ec7459b970/src/api/bkuser_core/apis/urls.py#L45\r\n\r\n**\u9884\u671f\u884c\u4e3a**\r\n\r\n\u9884\u671f\u7684\u6b63\u5e38\u884c\u4e3a\r\n\r\n**\u7248\u672c**\r\n- 2.5.2\r\n- \u662f\u5426\u662f\u4f01\u4e1a\u7248\u95ee\u9898\uff1f\r\n\u662f\r\n\r\n**\u5982\u679c\u662f SaaS \u9875\u9762\u95ee\u9898\uff0c\u8bf7\u63d0\u4f9b\u4f7f\u7528\u7684\u64cd\u4f5c\u7cfb\u7edf\u548c\u6d4f\u89c8\u5668\u4fe1\u606f**\r\n - OS: [e.g. iOS]\r\n - Browser [e.g. chrome, safari]\r\n - Version [e.g. 22]\r\n\r\n**\u989d\u5916\u4fe1\u606f**\r\n\r\n\u4efb\u4f55\u4f60\u89c9\u5f97\u6709\u52a9\u4e8e\u95ee\u9898\u89e3\u51b3\u7684\u5185\u5bb9\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencentBlueKing is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91-\u7528\u6237\u7ba1\u7406(Bk-User) available.\nCopyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\nfrom django.conf.urls import include, url\nfrom drf_yasg import openapi\nfrom drf_yasg.generators import OpenAPISchemaGenerator\nfrom drf_yasg.views import get_schema_view\nfrom rest_framework import permissions\n\n\nclass CustomOpenAPISchemaGenerator(OpenAPISchemaGenerator):\n def get_schema(self, request=None, public=False):\n schema = super().get_schema(request, public)\n return schema\n\n\nschema_view = get_schema_view(\n openapi.Info(\n title=\"\u84dd\u9cb8\u7528\u6237\u7ba1\u7406 API\",\n default_version=\"v2\",\n description=\"\u84dd\u9cb8\u7528\u6237\u7ba1\u7406\u540e\u53f0\u670d\u52a1 API\",\n license=openapi.License(name=\"MIT License\"),\n ),\n public=True,\n permission_classes=(permissions.AllowAny,),\n generator_class=CustomOpenAPISchemaGenerator,\n patterns=[\n url(r\"^\", include(\"bkuser_core.monitoring.urls\")),\n url(r\"^\", include(\"bkuser_core.departments.urls\")),\n url(r\"^\", include(\"bkuser_core.profiles.urls\")),\n url(r\"^\", include(\"bkuser_core.categories.urls\")),\n # url(r\"^\", include(\"bkuser_core.user_settings.urls\")),\n # url(r\"^\", include(\"bkuser_core.audit.urls\")),\n ],\n)\n\n\nurlpatterns = [\n url(r\"^swagger(?P<format>\\.json|\\.yaml)$\", schema_view.without_ui(cache_timeout=0), name=\"schema-json\"),\n url(r\"^swagger/$\", schema_view.with_ui(\"swagger\", cache_timeout=0), name=\"schema-swagger-ui\"),\n url(r\"^redoc/$\", schema_view.with_ui(\"redoc\", cache_timeout=0), name=\"schema-redoc\"),\n]\n", "path": "src/api/bkuser_core/apis/urls.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencentBlueKing is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91-\u7528\u6237\u7ba1\u7406(Bk-User) available.\nCopyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\nfrom django.conf import settings\nfrom django.conf.urls import include, url\nfrom drf_yasg import openapi\nfrom drf_yasg.generators import OpenAPISchemaGenerator\nfrom drf_yasg.views import get_schema_view\nfrom rest_framework import permissions\n\n\nclass CustomOpenAPISchemaGenerator(OpenAPISchemaGenerator):\n def get_schema(self, request=None, public=False):\n schema = super().get_schema(request, public)\n return schema\n\n\nschema_view = get_schema_view(\n openapi.Info(\n title=\"\u84dd\u9cb8\u7528\u6237\u7ba1\u7406 API\",\n default_version=\"v2\",\n description=\"\u84dd\u9cb8\u7528\u6237\u7ba1\u7406\u540e\u53f0\u670d\u52a1 API\",\n license=openapi.License(name=\"MIT License\"),\n ),\n public=True,\n permission_classes=(permissions.AllowAny,),\n generator_class=CustomOpenAPISchemaGenerator,\n patterns=[\n url(r\"^\", include(\"bkuser_core.monitoring.urls\")),\n url(r\"^\", include(\"bkuser_core.departments.urls\")),\n url(r\"^\", include(\"bkuser_core.profiles.urls\")),\n url(r\"^\", include(\"bkuser_core.categories.urls\")),\n # url(r\"^\", include(\"bkuser_core.user_settings.urls\")),\n # url(r\"^\", include(\"bkuser_core.audit.urls\")),\n ],\n)\n\nurlpatterns = []\nif settings.DEBUG:\n urlpatterns += [\n url(r\"^swagger(?P<format>\\.json|\\.yaml)$\", schema_view.without_ui(cache_timeout=0), name=\"schema-json\"),\n url(r\"^swagger/$\", schema_view.with_ui(\"swagger\", cache_timeout=0), name=\"schema-swagger-ui\"),\n url(r\"^redoc/$\", schema_view.with_ui(\"redoc\", cache_timeout=0), name=\"schema-redoc\"),\n ]\n", "path": "src/api/bkuser_core/apis/urls.py"}]} | 1,127 | 357 |
gh_patches_debug_40156 | rasdani/github-patches | git_diff | learningequality__kolibri-4999 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MultipleObjectsReturned: get() returned more than one Lesson -- it returned 2!
Sentry Issue: [KOLIBRI-BACKEND-7](https://sentry.io/learningequality/kolibri-backend/issues/877453921/?referrer=github_integration)
```
MultipleObjectsReturned: get() returned more than one Lesson -- it returned 2!
(8 additional frame(s) were not displayed)
...
File "rest_framework/mixins.py", line 56, in retrieve
instance = self.get_object()
File "rest_framework/generics.py", line 98, in get_object
obj = get_object_or_404(queryset, **filter_kwargs)
File "rest_framework/generics.py", line 21, in get_object_or_404
return _get_object_or_404(queryset, *filter_args, **filter_kwargs)
File "django/shortcuts.py", line 85, in get_object_or_404
return queryset.get(*args, **kwargs)
File "django/db/models/query.py", line 384, in get
(self.model._meta.object_name, num)
```
--- END ISSUE ---
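Before the files, a schematic illustration of the failure mode above. It assumes a configured Django project with the Kolibri models importable and with `assignments` defined as in the viewset, so treat it as a sketch rather than a runnable script:

```python
# Schematic sketch, not runnable on its own: assumes Django is configured
# and `assignments` is the LessonAssignment queryset from the viewset.
from kolibri.core.lessons.models import Lesson

# A Lesson assigned through two LessonAssignments matches this join twice,
# so the queryset yields the same Lesson row twice and the retrieve()
# call's get_object_or_404()/.get() raises MultipleObjectsReturned.
lessons = Lesson.objects.filter(
    lesson_assignments__in=assignments,
    is_active=True,
)

# Collapsing the duplicate join rows removes the ambiguity:
lessons = lessons.distinct()
```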
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/plugins/learn/viewsets.py`
Content:
```
1 from django.db.models.query import F
2 from rest_framework.permissions import IsAuthenticated
3 from rest_framework.viewsets import ReadOnlyModelViewSet
4
5 from .serializers import LearnerClassroomSerializer
6 from kolibri.core.auth.api import KolibriAuthPermissionsFilter
7 from kolibri.core.auth.filters import HierarchyRelationsFilter
8 from kolibri.core.auth.models import Classroom
9 from kolibri.core.lessons.models import Lesson
10 from kolibri.core.lessons.models import LessonAssignment
11 from kolibri.core.lessons.serializers import LessonSerializer
12
13
14 class LearnerClassroomViewset(ReadOnlyModelViewSet):
15 """
16 Returns all Classrooms for which the requesting User is a member,
17 along with all associated assignments.
18 """
19 filter_backends = (KolibriAuthPermissionsFilter,)
20 permission_classes = (IsAuthenticated,)
21 serializer_class = LearnerClassroomSerializer
22
23 def get_queryset(self):
24 current_user = self.request.user
25 memberships = current_user.memberships.filter(
26 collection__kind='classroom',
27 ).values('collection_id')
28 return Classroom.objects.filter(id__in=memberships)
29
30
31 class LearnerLessonViewset(ReadOnlyModelViewSet):
32 """
33 Special Viewset for Learners to view Lessons to which they are assigned.
34 The core Lesson Viewset is locked down to Admin users only.
35 """
36 serializer_class = LessonSerializer
37 permission_classes = (IsAuthenticated,)
38
39 def get_queryset(self):
40 assignments = HierarchyRelationsFilter(LessonAssignment.objects.all()) \
41 .filter_by_hierarchy(
42 target_user=self.request.user,
43 ancestor_collection=F('collection')
44 )
45 return Lesson.objects.filter(
46 lesson_assignments__in=assignments,
47 is_active=True
48 )
49
```
Path: `kolibri/plugins/learn/serializers.py`
Content:
```
1 from django.db.models import Q
2 from django.db.models import Sum
3 from rest_framework.serializers import JSONField
4 from rest_framework.serializers import ModelSerializer
5 from rest_framework.serializers import SerializerMethodField
6
7 from kolibri.core.auth.models import Classroom
8 from kolibri.core.exams.models import Exam
9 from kolibri.core.lessons.models import Lesson
10 from kolibri.core.logger.models import ContentSummaryLog
11 from kolibri.core.logger.models import ExamLog
12
13
14 class ExamProgressSerializer(ModelSerializer):
15 """
16 Annotates an Exam with progress information based on logs generated
17 by the requesting User
18 """
19 class Meta:
20 model = Exam
21 fields = (
22 'active',
23 'id',
24 'progress',
25 'question_count',
26 'title',
27 )
28
29 progress = SerializerMethodField()
30
31 # Mostly copied from UserExamSerializer.to_representation, but working directly
32 # from Exam Model instead of ExamAssignment
33 def get_progress(self, instance):
34 try:
35 examlogs = instance.examlogs.get(user=self.context['user'])
36 return {
37 'score': examlogs.attemptlogs.aggregate(Sum('correct')).get('correct__sum'),
38 'answer_count': examlogs.attemptlogs.count(),
39 'closed': examlogs.closed,
40 }
41 except ExamLog.DoesNotExist:
42 return {
43 'score': None,
44 'answer_count': None,
45 'closed': None,
46 }
47
48
49 class LessonProgressSerializer(ModelSerializer):
50 """
51 Annotates a Lesson with progress information based on logs generated
52 by the requesting User
53 """
54 progress = SerializerMethodField()
55 resources = JSONField(default='[]')
56
57 class Meta:
58 model = Lesson
59 fields = (
60 'description',
61 'id',
62 'is_active',
63 'title',
64 'progress',
65 'resources',
66 )
67
68 def get_progress(self, instance):
69 content_ids = [resource['content_id'] for resource in instance.resources]
70 resource_progress = ContentSummaryLog.objects \
71 .filter(
72 user=self.context['user'],
73 content_id__in=content_ids
74 ) \
75 .aggregate(Sum('progress')).get('progress__sum')
76 return {
77 'resource_progress': resource_progress,
78 'total_resources': len(instance.resources),
79 }
80
81
82 class LearnerClassroomSerializer(ModelSerializer):
83 assignments = SerializerMethodField()
84
85 class Meta:
86 model = Classroom
87 fields = (
88 'id',
89 'name',
90 'assignments',
91 )
92
93 def get_assignments(self, instance):
94 """
95 Returns all Exams and Lessons (and progress) assigned to the requesting User
96 """
97 current_user = self.context['request'].user
98 memberships = current_user.memberships.all()
99 learner_groups = [m.collection for m in memberships]
100
101 # Return only active Lessons that are assigned to the requesting user's groups
102 # TODO move this to a permission_class on Lesson
103 filtered_lessons = Lesson.objects.filter(
104 lesson_assignments__collection__in=learner_groups,
105 collection=instance,
106 is_active=True,
107 ).distinct()
108
109 filtered_exams = Exam.objects.filter(
110 assignments__collection__in=learner_groups,
111 collection=instance,
112 ).filter(Q(active=True) | Q(examlogs__user=current_user)).distinct()
113
114 return {
115 'lessons': LessonProgressSerializer(
116 filtered_lessons,
117 many=True,
118 context={'user': current_user},
119 ).data,
120 'exams': ExamProgressSerializer(
121 filtered_exams,
122 many=True,
123 context={'user': current_user},
124 ).data,
125 }
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kolibri/plugins/learn/serializers.py b/kolibri/plugins/learn/serializers.py
--- a/kolibri/plugins/learn/serializers.py
+++ b/kolibri/plugins/learn/serializers.py
@@ -4,9 +4,12 @@
from rest_framework.serializers import ModelSerializer
from rest_framework.serializers import SerializerMethodField
+from kolibri.core.auth.filters import HierarchyRelationsFilter
from kolibri.core.auth.models import Classroom
from kolibri.core.exams.models import Exam
+from kolibri.core.exams.models import ExamAssignment
from kolibri.core.lessons.models import Lesson
+from kolibri.core.lessons.models import LessonAssignment
from kolibri.core.logger.models import ContentSummaryLog
from kolibri.core.logger.models import ExamLog
@@ -95,19 +98,27 @@
Returns all Exams and Lessons (and progress) assigned to the requesting User
"""
current_user = self.context['request'].user
- memberships = current_user.memberships.all()
- learner_groups = [m.collection for m in memberships]
# Return only active Lessons that are assigned to the requesting user's groups
# TODO move this to a permission_class on Lesson
+ lesson_assignments = HierarchyRelationsFilter(LessonAssignment.objects.all()) \
+ .filter_by_hierarchy(
+ target_user=current_user,
+ ancestor_collection=instance
+ )
filtered_lessons = Lesson.objects.filter(
- lesson_assignments__collection__in=learner_groups,
- collection=instance,
- is_active=True,
+ lesson_assignments__in=lesson_assignments,
+ is_active=True
).distinct()
+ exam_assignments = HierarchyRelationsFilter(ExamAssignment.objects.all()) \
+ .filter_by_hierarchy(
+ target_user=current_user,
+ ancestor_collection=instance
+ )
+
filtered_exams = Exam.objects.filter(
- assignments__collection__in=learner_groups,
+ assignments__in=exam_assignments,
collection=instance,
).filter(Q(active=True) | Q(examlogs__user=current_user)).distinct()
diff --git a/kolibri/plugins/learn/viewsets.py b/kolibri/plugins/learn/viewsets.py
--- a/kolibri/plugins/learn/viewsets.py
+++ b/kolibri/plugins/learn/viewsets.py
@@ -21,11 +21,10 @@
serializer_class = LearnerClassroomSerializer
def get_queryset(self):
- current_user = self.request.user
- memberships = current_user.memberships.filter(
- collection__kind='classroom',
- ).values('collection_id')
- return Classroom.objects.filter(id__in=memberships)
+ return HierarchyRelationsFilter(Classroom.objects.all()).filter_by_hierarchy(
+ target_user=self.request.user,
+ ancestor_collection=F('id')
+ )
class LearnerLessonViewset(ReadOnlyModelViewSet):
@@ -45,4 +44,4 @@
return Lesson.objects.filter(
lesson_assignments__in=assignments,
is_active=True
- )
+ ).distinct()
| {"golden_diff": "diff --git a/kolibri/plugins/learn/serializers.py b/kolibri/plugins/learn/serializers.py\n--- a/kolibri/plugins/learn/serializers.py\n+++ b/kolibri/plugins/learn/serializers.py\n@@ -4,9 +4,12 @@\n from rest_framework.serializers import ModelSerializer\n from rest_framework.serializers import SerializerMethodField\n \n+from kolibri.core.auth.filters import HierarchyRelationsFilter\n from kolibri.core.auth.models import Classroom\n from kolibri.core.exams.models import Exam\n+from kolibri.core.exams.models import ExamAssignment\n from kolibri.core.lessons.models import Lesson\n+from kolibri.core.lessons.models import LessonAssignment\n from kolibri.core.logger.models import ContentSummaryLog\n from kolibri.core.logger.models import ExamLog\n \n@@ -95,19 +98,27 @@\n Returns all Exams and Lessons (and progress) assigned to the requesting User\n \"\"\"\n current_user = self.context['request'].user\n- memberships = current_user.memberships.all()\n- learner_groups = [m.collection for m in memberships]\n \n # Return only active Lessons that are assigned to the requesting user's groups\n # TODO move this to a permission_class on Lesson\n+ lesson_assignments = HierarchyRelationsFilter(LessonAssignment.objects.all()) \\\n+ .filter_by_hierarchy(\n+ target_user=current_user,\n+ ancestor_collection=instance\n+ )\n filtered_lessons = Lesson.objects.filter(\n- lesson_assignments__collection__in=learner_groups,\n- collection=instance,\n- is_active=True,\n+ lesson_assignments__in=lesson_assignments,\n+ is_active=True\n ).distinct()\n \n+ exam_assignments = HierarchyRelationsFilter(ExamAssignment.objects.all()) \\\n+ .filter_by_hierarchy(\n+ target_user=current_user,\n+ ancestor_collection=instance\n+ )\n+\n filtered_exams = Exam.objects.filter(\n- assignments__collection__in=learner_groups,\n+ assignments__in=exam_assignments,\n collection=instance,\n ).filter(Q(active=True) | Q(examlogs__user=current_user)).distinct()\n \ndiff --git a/kolibri/plugins/learn/viewsets.py b/kolibri/plugins/learn/viewsets.py\n--- a/kolibri/plugins/learn/viewsets.py\n+++ b/kolibri/plugins/learn/viewsets.py\n@@ -21,11 +21,10 @@\n serializer_class = LearnerClassroomSerializer\n \n def get_queryset(self):\n- current_user = self.request.user\n- memberships = current_user.memberships.filter(\n- collection__kind='classroom',\n- ).values('collection_id')\n- return Classroom.objects.filter(id__in=memberships)\n+ return HierarchyRelationsFilter(Classroom.objects.all()).filter_by_hierarchy(\n+ target_user=self.request.user,\n+ ancestor_collection=F('id')\n+ )\n \n \n class LearnerLessonViewset(ReadOnlyModelViewSet):\n@@ -45,4 +44,4 @@\n return Lesson.objects.filter(\n lesson_assignments__in=assignments,\n is_active=True\n- )\n+ ).distinct()\n", "issue": "MultipleObjectsReturned: get() returned more than one Lesson -- it returned 2!\nSentry Issue: [KOLIBRI-BACKEND-7](https://sentry.io/learningequality/kolibri-backend/issues/877453921/?referrer=github_integration)\n\n```\nMultipleObjectsReturned: get() returned more than one Lesson -- it returned 2!\n(8 additional frame(s) were not displayed)\n...\n File \"rest_framework/mixins.py\", line 56, in retrieve\n instance = self.get_object()\n File \"rest_framework/generics.py\", line 98, in get_object\n obj = get_object_or_404(queryset, **filter_kwargs)\n File \"rest_framework/generics.py\", line 21, in get_object_or_404\n return _get_object_or_404(queryset, *filter_args, **filter_kwargs)\n File \"django/shortcuts.py\", line 85, in get_object_or_404\n return queryset.get(*args, 
**kwargs)\n File \"django/db/models/query.py\", line 384, in get\n (self.model._meta.object_name, num)\n```\n", "before_files": [{"content": "from django.db.models.query import F\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.viewsets import ReadOnlyModelViewSet\n\nfrom .serializers import LearnerClassroomSerializer\nfrom kolibri.core.auth.api import KolibriAuthPermissionsFilter\nfrom kolibri.core.auth.filters import HierarchyRelationsFilter\nfrom kolibri.core.auth.models import Classroom\nfrom kolibri.core.lessons.models import Lesson\nfrom kolibri.core.lessons.models import LessonAssignment\nfrom kolibri.core.lessons.serializers import LessonSerializer\n\n\nclass LearnerClassroomViewset(ReadOnlyModelViewSet):\n \"\"\"\n Returns all Classrooms for which the requesting User is a member,\n along with all associated assignments.\n \"\"\"\n filter_backends = (KolibriAuthPermissionsFilter,)\n permission_classes = (IsAuthenticated,)\n serializer_class = LearnerClassroomSerializer\n\n def get_queryset(self):\n current_user = self.request.user\n memberships = current_user.memberships.filter(\n collection__kind='classroom',\n ).values('collection_id')\n return Classroom.objects.filter(id__in=memberships)\n\n\nclass LearnerLessonViewset(ReadOnlyModelViewSet):\n \"\"\"\n Special Viewset for Learners to view Lessons to which they are assigned.\n The core Lesson Viewset is locked down to Admin users only.\n \"\"\"\n serializer_class = LessonSerializer\n permission_classes = (IsAuthenticated,)\n\n def get_queryset(self):\n assignments = HierarchyRelationsFilter(LessonAssignment.objects.all()) \\\n .filter_by_hierarchy(\n target_user=self.request.user,\n ancestor_collection=F('collection')\n )\n return Lesson.objects.filter(\n lesson_assignments__in=assignments,\n is_active=True\n )\n", "path": "kolibri/plugins/learn/viewsets.py"}, {"content": "from django.db.models import Q\nfrom django.db.models import Sum\nfrom rest_framework.serializers import JSONField\nfrom rest_framework.serializers import ModelSerializer\nfrom rest_framework.serializers import SerializerMethodField\n\nfrom kolibri.core.auth.models import Classroom\nfrom kolibri.core.exams.models import Exam\nfrom kolibri.core.lessons.models import Lesson\nfrom kolibri.core.logger.models import ContentSummaryLog\nfrom kolibri.core.logger.models import ExamLog\n\n\nclass ExamProgressSerializer(ModelSerializer):\n \"\"\"\n Annotates an Exam with progress information based on logs generated\n by the requesting User\n \"\"\"\n class Meta:\n model = Exam\n fields = (\n 'active',\n 'id',\n 'progress',\n 'question_count',\n 'title',\n )\n\n progress = SerializerMethodField()\n\n # Mostly copied from UserExamSerializer.to_representation, but working directly\n # from Exam Model instead of ExamAssignment\n def get_progress(self, instance):\n try:\n examlogs = instance.examlogs.get(user=self.context['user'])\n return {\n 'score': examlogs.attemptlogs.aggregate(Sum('correct')).get('correct__sum'),\n 'answer_count': examlogs.attemptlogs.count(),\n 'closed': examlogs.closed,\n }\n except ExamLog.DoesNotExist:\n return {\n 'score': None,\n 'answer_count': None,\n 'closed': None,\n }\n\n\nclass LessonProgressSerializer(ModelSerializer):\n \"\"\"\n Annotates a Lesson with progress information based on logs generated\n by the requesting User\n \"\"\"\n progress = SerializerMethodField()\n resources = JSONField(default='[]')\n\n class Meta:\n model = Lesson\n fields = (\n 'description',\n 'id',\n 'is_active',\n 'title',\n 
'progress',\n 'resources',\n )\n\n def get_progress(self, instance):\n content_ids = [resource['content_id'] for resource in instance.resources]\n resource_progress = ContentSummaryLog.objects \\\n .filter(\n user=self.context['user'],\n content_id__in=content_ids\n ) \\\n .aggregate(Sum('progress')).get('progress__sum')\n return {\n 'resource_progress': resource_progress,\n 'total_resources': len(instance.resources),\n }\n\n\nclass LearnerClassroomSerializer(ModelSerializer):\n assignments = SerializerMethodField()\n\n class Meta:\n model = Classroom\n fields = (\n 'id',\n 'name',\n 'assignments',\n )\n\n def get_assignments(self, instance):\n \"\"\"\n Returns all Exams and Lessons (and progress) assigned to the requesting User\n \"\"\"\n current_user = self.context['request'].user\n memberships = current_user.memberships.all()\n learner_groups = [m.collection for m in memberships]\n\n # Return only active Lessons that are assigned to the requesting user's groups\n # TODO move this to a permission_class on Lesson\n filtered_lessons = Lesson.objects.filter(\n lesson_assignments__collection__in=learner_groups,\n collection=instance,\n is_active=True,\n ).distinct()\n\n filtered_exams = Exam.objects.filter(\n assignments__collection__in=learner_groups,\n collection=instance,\n ).filter(Q(active=True) | Q(examlogs__user=current_user)).distinct()\n\n return {\n 'lessons': LessonProgressSerializer(\n filtered_lessons,\n many=True,\n context={'user': current_user},\n ).data,\n 'exams': ExamProgressSerializer(\n filtered_exams,\n many=True,\n context={'user': current_user},\n ).data,\n }\n", "path": "kolibri/plugins/learn/serializers.py"}], "after_files": [{"content": "from django.db.models.query import F\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.viewsets import ReadOnlyModelViewSet\n\nfrom .serializers import LearnerClassroomSerializer\nfrom kolibri.core.auth.api import KolibriAuthPermissionsFilter\nfrom kolibri.core.auth.filters import HierarchyRelationsFilter\nfrom kolibri.core.auth.models import Classroom\nfrom kolibri.core.lessons.models import Lesson\nfrom kolibri.core.lessons.models import LessonAssignment\nfrom kolibri.core.lessons.serializers import LessonSerializer\n\n\nclass LearnerClassroomViewset(ReadOnlyModelViewSet):\n \"\"\"\n Returns all Classrooms for which the requesting User is a member,\n along with all associated assignments.\n \"\"\"\n filter_backends = (KolibriAuthPermissionsFilter,)\n permission_classes = (IsAuthenticated,)\n serializer_class = LearnerClassroomSerializer\n\n def get_queryset(self):\n return HierarchyRelationsFilter(Classroom.objects.all()).filter_by_hierarchy(\n target_user=self.request.user,\n ancestor_collection=F('id')\n )\n\n\nclass LearnerLessonViewset(ReadOnlyModelViewSet):\n \"\"\"\n Special Viewset for Learners to view Lessons to which they are assigned.\n The core Lesson Viewset is locked down to Admin users only.\n \"\"\"\n serializer_class = LessonSerializer\n permission_classes = (IsAuthenticated,)\n\n def get_queryset(self):\n assignments = HierarchyRelationsFilter(LessonAssignment.objects.all()) \\\n .filter_by_hierarchy(\n target_user=self.request.user,\n ancestor_collection=F('collection')\n )\n return Lesson.objects.filter(\n lesson_assignments__in=assignments,\n is_active=True\n ).distinct()\n", "path": "kolibri/plugins/learn/viewsets.py"}, {"content": "from django.db.models import Q\nfrom django.db.models import Sum\nfrom rest_framework.serializers import JSONField\nfrom rest_framework.serializers 
import ModelSerializer\nfrom rest_framework.serializers import SerializerMethodField\n\nfrom kolibri.core.auth.filters import HierarchyRelationsFilter\nfrom kolibri.core.auth.models import Classroom\nfrom kolibri.core.exams.models import Exam\nfrom kolibri.core.exams.models import ExamAssignment\nfrom kolibri.core.lessons.models import Lesson\nfrom kolibri.core.lessons.models import LessonAssignment\nfrom kolibri.core.logger.models import ContentSummaryLog\nfrom kolibri.core.logger.models import ExamLog\n\n\nclass ExamProgressSerializer(ModelSerializer):\n \"\"\"\n Annotates an Exam with progress information based on logs generated\n by the requesting User\n \"\"\"\n class Meta:\n model = Exam\n fields = (\n 'active',\n 'id',\n 'progress',\n 'question_count',\n 'title',\n )\n\n progress = SerializerMethodField()\n\n # Mostly copied from UserExamSerializer.to_representation, but working directly\n # from Exam Model instead of ExamAssignment\n def get_progress(self, instance):\n try:\n examlogs = instance.examlogs.get(user=self.context['user'])\n return {\n 'score': examlogs.attemptlogs.aggregate(Sum('correct')).get('correct__sum'),\n 'answer_count': examlogs.attemptlogs.count(),\n 'closed': examlogs.closed,\n }\n except ExamLog.DoesNotExist:\n return {\n 'score': None,\n 'answer_count': None,\n 'closed': None,\n }\n\n\nclass LessonProgressSerializer(ModelSerializer):\n \"\"\"\n Annotates a Lesson with progress information based on logs generated\n by the requesting User\n \"\"\"\n progress = SerializerMethodField()\n resources = JSONField(default='[]')\n\n class Meta:\n model = Lesson\n fields = (\n 'description',\n 'id',\n 'is_active',\n 'title',\n 'progress',\n 'resources',\n )\n\n def get_progress(self, instance):\n content_ids = [resource['content_id'] for resource in instance.resources]\n resource_progress = ContentSummaryLog.objects \\\n .filter(\n user=self.context['user'],\n content_id__in=content_ids\n ) \\\n .aggregate(Sum('progress')).get('progress__sum')\n return {\n 'resource_progress': resource_progress,\n 'total_resources': len(instance.resources),\n }\n\n\nclass LearnerClassroomSerializer(ModelSerializer):\n assignments = SerializerMethodField()\n\n class Meta:\n model = Classroom\n fields = (\n 'id',\n 'name',\n 'assignments',\n )\n\n def get_assignments(self, instance):\n \"\"\"\n Returns all Exams and Lessons (and progress) assigned to the requesting User\n \"\"\"\n current_user = self.context['request'].user\n\n # Return only active Lessons that are assigned to the requesting user's groups\n # TODO move this to a permission_class on Lesson\n lesson_assignments = HierarchyRelationsFilter(LessonAssignment.objects.all()) \\\n .filter_by_hierarchy(\n target_user=current_user,\n ancestor_collection=instance\n )\n filtered_lessons = Lesson.objects.filter(\n lesson_assignments__in=lesson_assignments,\n is_active=True\n ).distinct()\n\n exam_assignments = HierarchyRelationsFilter(ExamAssignment.objects.all()) \\\n .filter_by_hierarchy(\n target_user=current_user,\n ancestor_collection=instance\n )\n\n filtered_exams = Exam.objects.filter(\n assignments__in=exam_assignments,\n collection=instance,\n ).filter(Q(active=True) | Q(examlogs__user=current_user)).distinct()\n\n return {\n 'lessons': LessonProgressSerializer(\n filtered_lessons,\n many=True,\n context={'user': current_user},\n ).data,\n 'exams': ExamProgressSerializer(\n filtered_exams,\n many=True,\n context={'user': current_user},\n ).data,\n }\n", "path": "kolibri/plugins/learn/serializers.py"}]} | 2,017 | 674 |
gh_patches_debug_4072 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3127 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement frontend flow: User sharing an entity
I've not added a description, since I'll be implementing this.
Refer to the [Product spec](https://wiki.mathesar.org/en/product/specs/publicly-shareable-links) for detailed info.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/api/ui/permissions/shares.py`
Content:
```
1 from rest_access_policy import AccessPolicy
2
3 from mathesar.api.utils import get_query_or_404
4 from mathesar.api.permission_utils import QueryAccessInspector
5
6
7 class SharedTableAccessPolicy(AccessPolicy):
8 statements = [
9 {
10 'action': ['list', 'retrieve'],
11 'principal': 'authenticated',
12 'effect': 'allow',
13 'condition_expression': 'is_atleast_viewer_nested_table_resource'
14 },
15 {
16 'action': ['create', 'destroy', 'update', 'partial_update'],
17 'principal': 'authenticated',
18 'effect': 'allow',
19 'condition_expression': 'is_atleast_manager_nested_table_resource'
20 },
21 ]
22
23
24 class SharedQueryAccessPolicy(AccessPolicy):
25 statements = [
26 {
27 'action': ['list', 'retrieve'],
28 'principal': 'authenticated',
29 'effect': 'allow',
30 'condition_expression': 'is_atleast_query_viewer'
31 },
32 {
33 'action': ['create', 'destroy', 'update', 'partial_update'],
34 'principal': 'authenticated',
35 'effect': 'allow',
36 'condition_expression': 'is_atleast_query_editor'
37 },
38 ]
39
40 def is_atleast_query_viewer(self, request, view, action):
41 query = get_query_or_404(view.kwargs['query_pk'])
42 return QueryAccessInspector(request.user, query).is_atleast_viewer()
43
44 def is_atleast_query_editor(self, request, view, action):
45 query = get_query_or_404(view.kwargs['query_pk'])
46 return QueryAccessInspector(request.user, query).is_atleast_editor()
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mathesar/api/ui/permissions/shares.py b/mathesar/api/ui/permissions/shares.py
--- a/mathesar/api/ui/permissions/shares.py
+++ b/mathesar/api/ui/permissions/shares.py
@@ -16,7 +16,7 @@
'action': ['create', 'destroy', 'update', 'partial_update'],
'principal': 'authenticated',
'effect': 'allow',
- 'condition_expression': 'is_atleast_manager_nested_table_resource'
+ 'condition_expression': 'is_atleast_editor_nested_table_resource'
},
]
| {"golden_diff": "diff --git a/mathesar/api/ui/permissions/shares.py b/mathesar/api/ui/permissions/shares.py\n--- a/mathesar/api/ui/permissions/shares.py\n+++ b/mathesar/api/ui/permissions/shares.py\n@@ -16,7 +16,7 @@\n 'action': ['create', 'destroy', 'update', 'partial_update'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n- 'condition_expression': 'is_atleast_manager_nested_table_resource'\n+ 'condition_expression': 'is_atleast_editor_nested_table_resource'\n },\n ]\n", "issue": "Implement frontend flow: User sharing an entity\nI've not added a description, since I'll be implementing this.\r\n\r\nRefer [Product spec](https://wiki.mathesar.org/en/product/specs/publicly-shareable-links) for detailed info.\n", "before_files": [{"content": "from rest_access_policy import AccessPolicy\n\nfrom mathesar.api.utils import get_query_or_404\nfrom mathesar.api.permission_utils import QueryAccessInspector\n\n\nclass SharedTableAccessPolicy(AccessPolicy):\n statements = [\n {\n 'action': ['list', 'retrieve'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_viewer_nested_table_resource'\n },\n {\n 'action': ['create', 'destroy', 'update', 'partial_update'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_manager_nested_table_resource'\n },\n ]\n\n\nclass SharedQueryAccessPolicy(AccessPolicy):\n statements = [\n {\n 'action': ['list', 'retrieve'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_query_viewer'\n },\n {\n 'action': ['create', 'destroy', 'update', 'partial_update'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_query_editor'\n },\n ]\n\n def is_atleast_query_viewer(self, request, view, action):\n query = get_query_or_404(view.kwargs['query_pk'])\n return QueryAccessInspector(request.user, query).is_atleast_viewer()\n\n def is_atleast_query_editor(self, request, view, action):\n query = get_query_or_404(view.kwargs['query_pk'])\n return QueryAccessInspector(request.user, query).is_atleast_editor()\n", "path": "mathesar/api/ui/permissions/shares.py"}], "after_files": [{"content": "from rest_access_policy import AccessPolicy\n\nfrom mathesar.api.utils import get_query_or_404\nfrom mathesar.api.permission_utils import QueryAccessInspector\n\n\nclass SharedTableAccessPolicy(AccessPolicy):\n statements = [\n {\n 'action': ['list', 'retrieve'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_viewer_nested_table_resource'\n },\n {\n 'action': ['create', 'destroy', 'update', 'partial_update'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_editor_nested_table_resource'\n },\n ]\n\n\nclass SharedQueryAccessPolicy(AccessPolicy):\n statements = [\n {\n 'action': ['list', 'retrieve'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_query_viewer'\n },\n {\n 'action': ['create', 'destroy', 'update', 'partial_update'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_query_editor'\n },\n ]\n\n def is_atleast_query_viewer(self, request, view, action):\n query = get_query_or_404(view.kwargs['query_pk'])\n return QueryAccessInspector(request.user, query).is_atleast_viewer()\n\n def is_atleast_query_editor(self, request, view, action):\n query = get_query_or_404(view.kwargs['query_pk'])\n return QueryAccessInspector(request.user, query).is_atleast_editor()\n", 
"path": "mathesar/api/ui/permissions/shares.py"}]} | 736 | 125 |
gh_patches_debug_24804 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-7262 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NewspowerAUSpider - can this be rewritten as a WPStoreLocator spider?
```
import re
from html import unescape
from chompjs import parse_js_object
from scrapy.spiders import SitemapSpider
from locations.hours import OpeningHours
from locations.items import Feature
class NewspowerAUSpider(SitemapSpider):
name = "newspower_au"
item_attributes = {"brand": "Newspower", "brand_wikidata": "Q120670137"}
allowed_domains = ["newspower.com.au"]
sitemap_urls = [
"https://newspower.com.au/wpsl_stores-sitemap1.xml",
"https://newspower.com.au/wpsl_stores-sitemap2.xml",
]
sitemap_rules = [("/stores/", "parse")]
def parse(self, response):
map_marker_js_blob = response.xpath('//script[contains(text(), "var wpslMap_0 = ")]/text()').get()
map_marker_js_blob = map_marker_js_blob.split("var wpslMap_0 = ", 1)[1].split("]};", 1)[0] + "]}"
map_marker_dict = parse_js_object(map_marker_js_blob)["locations"][0]
properties = {
"ref": map_marker_dict["id"],
"name": response.xpath('//div[@class="wpsl-locations-details"]/span/strong/text()').get().strip(),
"addr_full": unescape(
re.sub(
r"\s+",
" ",
", ".join(filter(None, response.xpath('//div[@class="wpsl-location-address"]//text()').getall())),
)
)
.replace(" ,", ",")
.strip(),
"street_address": ", ".join(filter(None, [map_marker_dict["address"], map_marker_dict["address2"]])),
"city": map_marker_dict["city"],
"state": map_marker_dict["state"],
"postcode": map_marker_dict["zip"],
"lat": map_marker_dict["lat"],
"lon": map_marker_dict["lng"],
"phone": response.xpath('//div[@class="wpsl-contact-details"]//a[contains(@href, "tel:")]/@href').get(),
"website": response.url,
"facebook": response.xpath(
'//div[@class="entry-content"]//a[contains(@href, "https://www.facebook.com/")]/@href'
).get(),
}
if properties.get("phone") and "tel:" in properties.get("phone"):
properties["phone"] = properties["phone"].replace("tel:", "")
hours_string = " ".join(filter(None, response.xpath('//table[@class="wpsl-opening-hours"]//text()').getall()))
properties["opening_hours"] = OpeningHours()
properties["opening_hours"].add_ranges_from_string(hours_string)
yield Feature(**properties)
```
--- END ISSUE ---
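As a hedged answer to the question above, the rewrite might look like the sketch below. The `WPStoreLocatorSpider` import path and its behaviour (deriving the WP Store Locator AJAX endpoint from `allowed_domains`) are assumptions about the project's storefinder helpers, not verified against the codebase:

```python
# Hypothetical rewrite; the storefinder import path and its API are
# assumptions, so treat this as a sketch rather than a drop-in spider.
from locations.storefinders.wp_store_locator import WPStoreLocatorSpider


class NewspowerAUSpider(WPStoreLocatorSpider):
    name = "newspower_au"
    item_attributes = {"brand": "Newspower", "brand_wikidata": "Q120670137"}
    allowed_domains = ["newspower.com.au"]
```

Note that the merged fix below kept the sitemap approach instead: this site's WP Store Locator endpoint returns at most the five closest stores per search coordinate, which makes the store-finder API impractical here.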
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/newspower_au.py`
Content:
```
1 import re
2 from html import unescape
3
4 from chompjs import parse_js_object
5 from scrapy.spiders import SitemapSpider
6
7 from locations.hours import OpeningHours
8 from locations.items import Feature
9
10
11 class NewspowerAUSpider(SitemapSpider):
12 name = "newspower_au"
13 item_attributes = {"brand": "Newspower", "brand_wikidata": "Q120670137"}
14 allowed_domains = ["newspower.com.au"]
15 sitemap_urls = [
16 "https://newspower.com.au/wpsl_stores-sitemap1.xml",
17 "https://newspower.com.au/wpsl_stores-sitemap2.xml",
18 ]
19 sitemap_rules = [("/stores/", "parse")]
20
21 def parse(self, response):
22 map_marker_js_blob = response.xpath('//script[contains(text(), "var wpslMap_0 = ")]/text()').get()
23 map_marker_js_blob = map_marker_js_blob.split("var wpslMap_0 = ", 1)[1].split("]};", 1)[0] + "]}"
24 map_marker_dict = parse_js_object(map_marker_js_blob)["locations"][0]
25 properties = {
26 "ref": map_marker_dict["id"],
27 "name": response.xpath('//div[@class="wpsl-locations-details"]/span/strong/text()').get().strip(),
28 "addr_full": unescape(
29 re.sub(
30 r"\s+",
31 " ",
32 ", ".join(filter(None, response.xpath('//div[@class="wpsl-location-address"]//text()').getall())),
33 )
34 )
35 .replace(" ,", ",")
36 .strip(),
37 "street_address": ", ".join(filter(None, [map_marker_dict["address"], map_marker_dict["address2"]])),
38 "city": map_marker_dict["city"],
39 "state": map_marker_dict["state"],
40 "postcode": map_marker_dict["zip"],
41 "lat": map_marker_dict["lat"],
42 "lon": map_marker_dict["lng"],
43 "phone": response.xpath('//div[@class="wpsl-contact-details"]//a[contains(@href, "tel:")]/@href').get(),
44 "website": response.url,
45 "facebook": response.xpath(
46 '//div[@class="entry-content"]//a[contains(@href, "https://www.facebook.com/")]/@href'
47 ).get(),
48 }
49 if properties.get("phone") and "tel:" in properties.get("phone"):
50 properties["phone"] = properties["phone"].replace("tel:", "")
51 hours_string = " ".join(filter(None, response.xpath('//table[@class="wpsl-opening-hours"]//text()').getall()))
52 properties["opening_hours"] = OpeningHours()
53 properties["opening_hours"].add_ranges_from_string(hours_string)
54 yield Feature(**properties)
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/newspower_au.py b/locations/spiders/newspower_au.py
--- a/locations/spiders/newspower_au.py
+++ b/locations/spiders/newspower_au.py
@@ -9,6 +9,11 @@
class NewspowerAUSpider(SitemapSpider):
+ # Whilst WP Store Locator is used for this brand, it is set to
+ # return at most the 5 closest points to a provided search
+ # coordinate. There is an impractical number of search requests
+ # thus required to use the WP Store Locator store finder API.
+ # A Sitemap spider is used instead.
name = "newspower_au"
item_attributes = {"brand": "Newspower", "brand_wikidata": "Q120670137"}
allowed_domains = ["newspower.com.au"]
@@ -16,7 +21,11 @@
"https://newspower.com.au/wpsl_stores-sitemap1.xml",
"https://newspower.com.au/wpsl_stores-sitemap2.xml",
]
- sitemap_rules = [("/stores/", "parse")]
+ sitemap_rules = [(r"^https:\/\/newspower\.com\.au\/stores/[^/]+\/$", "parse")]
+ # Server will redirect wpsl_stores-sitemap2.xml to
+ # https://newspower.com.au/store-locator/ if it doesn't like
+ # the country/netblock requesting the page.
+ requires_proxy = True
def parse(self, response):
map_marker_js_blob = response.xpath('//script[contains(text(), "var wpslMap_0 = ")]/text()').get()
| {"golden_diff": "diff --git a/locations/spiders/newspower_au.py b/locations/spiders/newspower_au.py\n--- a/locations/spiders/newspower_au.py\n+++ b/locations/spiders/newspower_au.py\n@@ -9,6 +9,11 @@\n \n \n class NewspowerAUSpider(SitemapSpider):\n+ # Whilst WP Store Locator is used for this brand, it is set to\n+ # return at most the 5 closest points to a provided search\n+ # coordinate. There is an impractical number of search requests\n+ # thus required to use the WP Store Locator store finder API.\n+ # A Sitemap spider is used instead.\n name = \"newspower_au\"\n item_attributes = {\"brand\": \"Newspower\", \"brand_wikidata\": \"Q120670137\"}\n allowed_domains = [\"newspower.com.au\"]\n@@ -16,7 +21,11 @@\n \"https://newspower.com.au/wpsl_stores-sitemap1.xml\",\n \"https://newspower.com.au/wpsl_stores-sitemap2.xml\",\n ]\n- sitemap_rules = [(\"/stores/\", \"parse\")]\n+ sitemap_rules = [(r\"^https:\\/\\/newspower\\.com\\.au\\/stores/[^/]+\\/$\", \"parse\")]\n+ # Server will redirect wpsl_stores-sitemap2.xml to\n+ # https://newspower.com.au/store-locator/ if it doesn't like\n+ # the country/netblock requesting the page.\n+ requires_proxy = True\n \n def parse(self, response):\n map_marker_js_blob = response.xpath('//script[contains(text(), \"var wpslMap_0 = \")]/text()').get()\n", "issue": "NewspowerAUSpider - can this be rewritten as a WPStoreLocator spider?\n```\r\nimport re\r\nfrom html import unescape\r\n\r\nfrom chompjs import parse_js_object\r\nfrom scrapy.spiders import SitemapSpider\r\n\r\nfrom locations.hours import OpeningHours\r\nfrom locations.items import Feature\r\n\r\n\r\nclass NewspowerAUSpider(SitemapSpider):\r\n name = \"newspower_au\"\r\n item_attributes = {\"brand\": \"Newspower\", \"brand_wikidata\": \"Q120670137\"}\r\n allowed_domains = [\"newspower.com.au\"]\r\n sitemap_urls = [\r\n \"https://newspower.com.au/wpsl_stores-sitemap1.xml\",\r\n \"https://newspower.com.au/wpsl_stores-sitemap2.xml\",\r\n ]\r\n sitemap_rules = [(\"/stores/\", \"parse\")]\r\n\r\n def parse(self, response):\r\n map_marker_js_blob = response.xpath('//script[contains(text(), \"var wpslMap_0 = \")]/text()').get()\r\n map_marker_js_blob = map_marker_js_blob.split(\"var wpslMap_0 = \", 1)[1].split(\"]};\", 1)[0] + \"]}\"\r\n map_marker_dict = parse_js_object(map_marker_js_blob)[\"locations\"][0]\r\n properties = {\r\n \"ref\": map_marker_dict[\"id\"],\r\n \"name\": response.xpath('//div[@class=\"wpsl-locations-details\"]/span/strong/text()').get().strip(),\r\n \"addr_full\": unescape(\r\n re.sub(\r\n r\"\\s+\",\r\n \" \",\r\n \", \".join(filter(None, response.xpath('//div[@class=\"wpsl-location-address\"]//text()').getall())),\r\n )\r\n )\r\n .replace(\" ,\", \",\")\r\n .strip(),\r\n \"street_address\": \", \".join(filter(None, [map_marker_dict[\"address\"], map_marker_dict[\"address2\"]])),\r\n \"city\": map_marker_dict[\"city\"],\r\n \"state\": map_marker_dict[\"state\"],\r\n \"postcode\": map_marker_dict[\"zip\"],\r\n \"lat\": map_marker_dict[\"lat\"],\r\n \"lon\": map_marker_dict[\"lng\"],\r\n \"phone\": response.xpath('//div[@class=\"wpsl-contact-details\"]//a[contains(@href, \"tel:\")]/@href').get(),\r\n \"website\": response.url,\r\n \"facebook\": response.xpath(\r\n '//div[@class=\"entry-content\"]//a[contains(@href, \"https://www.facebook.com/\")]/@href'\r\n ).get(),\r\n }\r\n if properties.get(\"phone\") and \"tel:\" in properties.get(\"phone\"):\r\n properties[\"phone\"] = properties[\"phone\"].replace(\"tel:\", \"\")\r\n hours_string = \" \".join(filter(None, 
response.xpath('//table[@class=\"wpsl-opening-hours\"]//text()').getall()))\r\n properties[\"opening_hours\"] = OpeningHours()\r\n properties[\"opening_hours\"].add_ranges_from_string(hours_string)\r\n yield Feature(**properties)\r\n```\n", "before_files": [{"content": "import re\nfrom html import unescape\n\nfrom chompjs import parse_js_object\nfrom scrapy.spiders import SitemapSpider\n\nfrom locations.hours import OpeningHours\nfrom locations.items import Feature\n\n\nclass NewspowerAUSpider(SitemapSpider):\n name = \"newspower_au\"\n item_attributes = {\"brand\": \"Newspower\", \"brand_wikidata\": \"Q120670137\"}\n allowed_domains = [\"newspower.com.au\"]\n sitemap_urls = [\n \"https://newspower.com.au/wpsl_stores-sitemap1.xml\",\n \"https://newspower.com.au/wpsl_stores-sitemap2.xml\",\n ]\n sitemap_rules = [(\"/stores/\", \"parse\")]\n\n def parse(self, response):\n map_marker_js_blob = response.xpath('//script[contains(text(), \"var wpslMap_0 = \")]/text()').get()\n map_marker_js_blob = map_marker_js_blob.split(\"var wpslMap_0 = \", 1)[1].split(\"]};\", 1)[0] + \"]}\"\n map_marker_dict = parse_js_object(map_marker_js_blob)[\"locations\"][0]\n properties = {\n \"ref\": map_marker_dict[\"id\"],\n \"name\": response.xpath('//div[@class=\"wpsl-locations-details\"]/span/strong/text()').get().strip(),\n \"addr_full\": unescape(\n re.sub(\n r\"\\s+\",\n \" \",\n \", \".join(filter(None, response.xpath('//div[@class=\"wpsl-location-address\"]//text()').getall())),\n )\n )\n .replace(\" ,\", \",\")\n .strip(),\n \"street_address\": \", \".join(filter(None, [map_marker_dict[\"address\"], map_marker_dict[\"address2\"]])),\n \"city\": map_marker_dict[\"city\"],\n \"state\": map_marker_dict[\"state\"],\n \"postcode\": map_marker_dict[\"zip\"],\n \"lat\": map_marker_dict[\"lat\"],\n \"lon\": map_marker_dict[\"lng\"],\n \"phone\": response.xpath('//div[@class=\"wpsl-contact-details\"]//a[contains(@href, \"tel:\")]/@href').get(),\n \"website\": response.url,\n \"facebook\": response.xpath(\n '//div[@class=\"entry-content\"]//a[contains(@href, \"https://www.facebook.com/\")]/@href'\n ).get(),\n }\n if properties.get(\"phone\") and \"tel:\" in properties.get(\"phone\"):\n properties[\"phone\"] = properties[\"phone\"].replace(\"tel:\", \"\")\n hours_string = \" \".join(filter(None, response.xpath('//table[@class=\"wpsl-opening-hours\"]//text()').getall()))\n properties[\"opening_hours\"] = OpeningHours()\n properties[\"opening_hours\"].add_ranges_from_string(hours_string)\n yield Feature(**properties)\n", "path": "locations/spiders/newspower_au.py"}], "after_files": [{"content": "import re\nfrom html import unescape\n\nfrom chompjs import parse_js_object\nfrom scrapy.spiders import SitemapSpider\n\nfrom locations.hours import OpeningHours\nfrom locations.items import Feature\n\n\nclass NewspowerAUSpider(SitemapSpider):\n # Whilst WP Store Locator is used for this brand, it is set to\n # return at most the 5 closest points to a provided search\n # coordinate. 
There is an impractical number of search requests\n # thus required to use the WP Store Locator store finder API.\n # A Sitemap spider is used instead.\n name = \"newspower_au\"\n item_attributes = {\"brand\": \"Newspower\", \"brand_wikidata\": \"Q120670137\"}\n allowed_domains = [\"newspower.com.au\"]\n sitemap_urls = [\n \"https://newspower.com.au/wpsl_stores-sitemap1.xml\",\n \"https://newspower.com.au/wpsl_stores-sitemap2.xml\",\n ]\n sitemap_rules = [(r\"^https:\\/\\/newspower\\.com\\.au\\/stores/[^/]+\\/$\", \"parse\")]\n # Server will redirect wpsl_stores-sitemap2.xml to\n # https://newspower.com.au/store-locator/ if it doesn't like\n # the country/netblock requesting the page.\n requires_proxy = True\n\n def parse(self, response):\n map_marker_js_blob = response.xpath('//script[contains(text(), \"var wpslMap_0 = \")]/text()').get()\n map_marker_js_blob = map_marker_js_blob.split(\"var wpslMap_0 = \", 1)[1].split(\"]};\", 1)[0] + \"]}\"\n map_marker_dict = parse_js_object(map_marker_js_blob)[\"locations\"][0]\n properties = {\n \"ref\": map_marker_dict[\"id\"],\n \"name\": response.xpath('//div[@class=\"wpsl-locations-details\"]/span/strong/text()').get().strip(),\n \"addr_full\": unescape(\n re.sub(\n r\"\\s+\",\n \" \",\n \", \".join(filter(None, response.xpath('//div[@class=\"wpsl-location-address\"]//text()').getall())),\n )\n )\n .replace(\" ,\", \",\")\n .strip(),\n \"street_address\": \", \".join(filter(None, [map_marker_dict[\"address\"], map_marker_dict[\"address2\"]])),\n \"city\": map_marker_dict[\"city\"],\n \"state\": map_marker_dict[\"state\"],\n \"postcode\": map_marker_dict[\"zip\"],\n \"lat\": map_marker_dict[\"lat\"],\n \"lon\": map_marker_dict[\"lng\"],\n \"phone\": response.xpath('//div[@class=\"wpsl-contact-details\"]//a[contains(@href, \"tel:\")]/@href').get(),\n \"website\": response.url,\n \"facebook\": response.xpath(\n '//div[@class=\"entry-content\"]//a[contains(@href, \"https://www.facebook.com/\")]/@href'\n ).get(),\n }\n if properties.get(\"phone\") and \"tel:\" in properties.get(\"phone\"):\n properties[\"phone\"] = properties[\"phone\"].replace(\"tel:\", \"\")\n hours_string = \" \".join(filter(None, response.xpath('//table[@class=\"wpsl-opening-hours\"]//text()').getall()))\n properties[\"opening_hours\"] = OpeningHours()\n properties[\"opening_hours\"].add_ranges_from_string(hours_string)\n yield Feature(**properties)\n", "path": "locations/spiders/newspower_au.py"}]} | 1,605 | 388 |
gh_patches_debug_8250 | rasdani/github-patches | git_diff | saleor__saleor-4824 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Users can create the same address multiple times
### Steps to reproduce the problem
1. Start checkout
1. Set a shipping address A
1. Set a billing address B
1. Place the order
1. Go to your account details
1. Edit the address B to have the same information than A
1. Checkout with A or B (or re-create it through a new address input?)
1. Place the order
1. Placing the order fails because the same address exists twice: `get() returned more than one object` (see the sketch after this issue)
### What I expected to happen
Have an error when updating the address, saying I already have that address.
--- END ISSUE ---
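A minimal sketch of the failure and of the duplicate-tolerant lookup that avoids it. It assumes a configured Django project with `user` and `address` as in `store_user_address()`, and that two identical `Address` rows already exist, so it illustrates the pattern rather than running standalone:

```python
# Sketch: why get_or_create() breaks once duplicate addresses exist, and
# the filter-then-create pattern that tolerates them. Assumes a configured
# Django project with `user` and `address` as in store_user_address().
address_data = address.as_data()

# get_or_create() performs a .get() first; with two identical rows in the
# user's address book it raises MultipleObjectsReturned:
#     user.addresses.get_or_create(**address_data)

# Duplicate-tolerant lookup: take any existing match, create otherwise.
existing = user.addresses.filter(**address_data).first()
if existing is None:
    existing = user.addresses.create(**address_data)
```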
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/account/utils.py`
Content:
```
1 import os
2 import os.path
3 import random
4
5 from django.conf import settings
6 from django.core.files import File
7
8 from ..checkout import AddressType
9
10 AVATARS_PATH = os.path.join(
11 settings.PROJECT_ROOT, "saleor", "static", "images", "avatars"
12 )
13
14
15 def store_user_address(user, address, address_type):
16 """Add address to user address book and set as default one."""
17 address, _ = user.addresses.get_or_create(**address.as_data())
18
19 if address_type == AddressType.BILLING:
20 if not user.default_billing_address:
21 set_user_default_billing_address(user, address)
22 elif address_type == AddressType.SHIPPING:
23 if not user.default_shipping_address:
24 set_user_default_shipping_address(user, address)
25
26
27 def set_user_default_billing_address(user, address):
28 user.default_billing_address = address
29 user.save(update_fields=["default_billing_address"])
30
31
32 def set_user_default_shipping_address(user, address):
33 user.default_shipping_address = address
34 user.save(update_fields=["default_shipping_address"])
35
36
37 def change_user_default_address(user, address, address_type):
38 if address_type == AddressType.BILLING:
39 if user.default_billing_address:
40 user.addresses.add(user.default_billing_address)
41 set_user_default_billing_address(user, address)
42 elif address_type == AddressType.SHIPPING:
43 if user.default_shipping_address:
44 user.addresses.add(user.default_shipping_address)
45 set_user_default_shipping_address(user, address)
46
47
48 def get_user_first_name(user):
49 """Return a user's first name from their default billing address.
50
51 Return nothing if none were found.
52 """
53 if user.first_name:
54 return user.first_name
55 if user.default_billing_address:
56 return user.default_billing_address.first_name
57 return None
58
59
60 def get_user_last_name(user):
61 """Return a user's last name from their default billing address.
62
63 Return nothing if none were found.
64 """
65 if user.last_name:
66 return user.last_name
67 if user.default_billing_address:
68 return user.default_billing_address.last_name
69 return None
70
71
72 def get_random_avatar():
73 """Return random avatar picked from a pool of static avatars."""
74 avatar_name = random.choice(os.listdir(AVATARS_PATH))
75 avatar_path = os.path.join(AVATARS_PATH, avatar_name)
76 return File(open(avatar_path, "rb"), name=avatar_name)
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/account/utils.py b/saleor/account/utils.py
--- a/saleor/account/utils.py
+++ b/saleor/account/utils.py
@@ -14,7 +14,11 @@
def store_user_address(user, address, address_type):
"""Add address to user address book and set as default one."""
- address, _ = user.addresses.get_or_create(**address.as_data())
+ address_data = address.as_data()
+
+ address = user.addresses.filter(**address_data).first()
+ if address is None:
+ address = user.addresses.create(**address_data)
if address_type == AddressType.BILLING:
if not user.default_billing_address:
| {"golden_diff": "diff --git a/saleor/account/utils.py b/saleor/account/utils.py\n--- a/saleor/account/utils.py\n+++ b/saleor/account/utils.py\n@@ -14,7 +14,11 @@\n \n def store_user_address(user, address, address_type):\n \"\"\"Add address to user address book and set as default one.\"\"\"\n- address, _ = user.addresses.get_or_create(**address.as_data())\n+ address_data = address.as_data()\n+\n+ address = user.addresses.filter(**address_data).first()\n+ if address is None:\n+ address = user.addresses.create(**address_data)\n \n if address_type == AddressType.BILLING:\n if not user.default_billing_address:\n", "issue": "Users can create the same address multiple times\n### Steps to reproduce the problem\r\n1. Start checkout\r\n1. Set a shipping address A\r\n1. Set a billing address B\r\n1. Place the order\r\n1. Go to your account details\r\n1. Edit the address B to have the same information than A\r\n1. Checkout with A or B (or re-create it through a new address input?)\r\n1. Place the order\r\n1. The order should have failed because the same address exist two times: `get() returned more than one object`\r\n\r\n\r\n### What I expected to happen\r\nHave an error when updating the address, saying I already have that address.\r\n\n", "before_files": [{"content": "import os\nimport os.path\nimport random\n\nfrom django.conf import settings\nfrom django.core.files import File\n\nfrom ..checkout import AddressType\n\nAVATARS_PATH = os.path.join(\n settings.PROJECT_ROOT, \"saleor\", \"static\", \"images\", \"avatars\"\n)\n\n\ndef store_user_address(user, address, address_type):\n \"\"\"Add address to user address book and set as default one.\"\"\"\n address, _ = user.addresses.get_or_create(**address.as_data())\n\n if address_type == AddressType.BILLING:\n if not user.default_billing_address:\n set_user_default_billing_address(user, address)\n elif address_type == AddressType.SHIPPING:\n if not user.default_shipping_address:\n set_user_default_shipping_address(user, address)\n\n\ndef set_user_default_billing_address(user, address):\n user.default_billing_address = address\n user.save(update_fields=[\"default_billing_address\"])\n\n\ndef set_user_default_shipping_address(user, address):\n user.default_shipping_address = address\n user.save(update_fields=[\"default_shipping_address\"])\n\n\ndef change_user_default_address(user, address, address_type):\n if address_type == AddressType.BILLING:\n if user.default_billing_address:\n user.addresses.add(user.default_billing_address)\n set_user_default_billing_address(user, address)\n elif address_type == AddressType.SHIPPING:\n if user.default_shipping_address:\n user.addresses.add(user.default_shipping_address)\n set_user_default_shipping_address(user, address)\n\n\ndef get_user_first_name(user):\n \"\"\"Return a user's first name from their default belling address.\n\n Return nothing if none where found.\n \"\"\"\n if user.first_name:\n return user.first_name\n if user.default_billing_address:\n return user.default_billing_address.first_name\n return None\n\n\ndef get_user_last_name(user):\n \"\"\"Return a user's last name from their default belling address.\n\n Return nothing if none where found.\n \"\"\"\n if user.last_name:\n return user.last_name\n if user.default_billing_address:\n return user.default_billing_address.last_name\n return None\n\n\ndef get_random_avatar():\n \"\"\"Return random avatar picked from a pool of static avatars.\"\"\"\n avatar_name = random.choice(os.listdir(AVATARS_PATH))\n avatar_path = os.path.join(AVATARS_PATH, 
avatar_name)\n return File(open(avatar_path, \"rb\"), name=avatar_name)\n", "path": "saleor/account/utils.py"}], "after_files": [{"content": "import os\nimport os.path\nimport random\n\nfrom django.conf import settings\nfrom django.core.files import File\n\nfrom ..checkout import AddressType\n\nAVATARS_PATH = os.path.join(\n settings.PROJECT_ROOT, \"saleor\", \"static\", \"images\", \"avatars\"\n)\n\n\ndef store_user_address(user, address, address_type):\n \"\"\"Add address to user address book and set as default one.\"\"\"\n address_data = address.as_data()\n\n address = user.addresses.filter(**address_data).first()\n if address is None:\n address = user.addresses.create(**address_data)\n\n if address_type == AddressType.BILLING:\n if not user.default_billing_address:\n set_user_default_billing_address(user, address)\n elif address_type == AddressType.SHIPPING:\n if not user.default_shipping_address:\n set_user_default_shipping_address(user, address)\n\n\ndef set_user_default_billing_address(user, address):\n user.default_billing_address = address\n user.save(update_fields=[\"default_billing_address\"])\n\n\ndef set_user_default_shipping_address(user, address):\n user.default_shipping_address = address\n user.save(update_fields=[\"default_shipping_address\"])\n\n\ndef change_user_default_address(user, address, address_type):\n if address_type == AddressType.BILLING:\n if user.default_billing_address:\n user.addresses.add(user.default_billing_address)\n set_user_default_billing_address(user, address)\n elif address_type == AddressType.SHIPPING:\n if user.default_shipping_address:\n user.addresses.add(user.default_shipping_address)\n set_user_default_shipping_address(user, address)\n\n\ndef get_user_first_name(user):\n \"\"\"Return a user's first name from their default belling address.\n\n Return nothing if none where found.\n \"\"\"\n if user.first_name:\n return user.first_name\n if user.default_billing_address:\n return user.default_billing_address.first_name\n return None\n\n\ndef get_user_last_name(user):\n \"\"\"Return a user's last name from their default belling address.\n\n Return nothing if none where found.\n \"\"\"\n if user.last_name:\n return user.last_name\n if user.default_billing_address:\n return user.default_billing_address.last_name\n return None\n\n\ndef get_random_avatar():\n \"\"\"Return random avatar picked from a pool of static avatars.\"\"\"\n avatar_name = random.choice(os.listdir(AVATARS_PATH))\n avatar_path = os.path.join(AVATARS_PATH, avatar_name)\n return File(open(avatar_path, \"rb\"), name=avatar_name)\n", "path": "saleor/account/utils.py"}]} | 1,056 | 155 |
gh_patches_debug_12244 | rasdani/github-patches | git_diff | vega__altair-2106 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Options of `point` argument
The documentation of the `point` argument in marks isn't easily discoverable.
Recently I tried to create a chart with a colored line and points. My first try was `mark_line(color="red", point=True)`, similar to [this example](https://altair-viz.github.io/gallery/line_chart_with_points.html), but it produced a red line with blue points. This turned out to be the current default behavior of Vega-Lite (found in [this issue](https://github.com/vega/vega-lite/issues/6111)).
After some digging, I found that `point` can be "an object defining the properties of the overlayed points." (which seems to imply `OverlayMarkDef`, but a simple dictionary proved to be enough). I think it would be really helpful to make this slightly easier to find. The "Line chart with points" example seems like a good place.
So, what do you think about adding some variation of the following code snippet to that example?
```python
import altair as alt
import numpy as np
import pandas as pd
x = np.arange(100)
source = pd.DataFrame({"x": x, "f(x)": np.sin(x / 5)})
# Only this part will be added
alt.Chart(source).mark_line(
color="red", point={"color": "red"}
).encode(
x="x",
y="f(x)"
)
```

If this is OK, I would like to help with PR. Thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `altair/examples/line_chart_with_points.py`
Content:
```
1 """
2 Line Chart with Points
3 ----------------------
4 This chart shows a simple line chart with points marking each value.
5 """
6 # category: line charts
7 import altair as alt
8 import numpy as np
9 import pandas as pd
10
11 x = np.arange(100)
12 source = pd.DataFrame({
13 'x': x,
14 'f(x)': np.sin(x / 5)
15 })
16
17 alt.Chart(source).mark_line(point=True).encode(
18 x='x',
19 y='f(x)'
20 )
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/altair/examples/line_chart_with_points.py b/altair/examples/line_chart_with_points.py
--- a/altair/examples/line_chart_with_points.py
+++ b/altair/examples/line_chart_with_points.py
@@ -1,7 +1,9 @@
"""
Line Chart with Points
----------------------
-This chart shows a simple line chart with points marking each value.
+This chart shows a simple line chart with points marking each value. Use
+``point=True`` for points with default appearance or customize it with
+``OverlayMarkDef()``.
"""
# category: line charts
import altair as alt
@@ -14,7 +16,9 @@
'f(x)': np.sin(x / 5)
})
-alt.Chart(source).mark_line(point=True).encode(
+alt.Chart(source).mark_line(
+ point=alt.OverlayMarkDef(color="red")
+).encode(
x='x',
y='f(x)'
)
| {"golden_diff": "diff --git a/altair/examples/line_chart_with_points.py b/altair/examples/line_chart_with_points.py\n--- a/altair/examples/line_chart_with_points.py\n+++ b/altair/examples/line_chart_with_points.py\n@@ -1,7 +1,9 @@\n \"\"\"\n Line Chart with Points\n ----------------------\n-This chart shows a simple line chart with points marking each value.\n+This chart shows a simple line chart with points marking each value. Use\n+``point=True`` for points with default appearance or customize it with\n+``OverlayMarkDef()``.\n \"\"\"\n # category: line charts\n import altair as alt\n@@ -14,7 +16,9 @@\n 'f(x)': np.sin(x / 5)\n })\n \n-alt.Chart(source).mark_line(point=True).encode(\n+alt.Chart(source).mark_line(\n+ point=alt.OverlayMarkDef(color=\"red\")\n+).encode(\n x='x',\n y='f(x)'\n )\n", "issue": "Options of `point` argument\nDocumentation of `point` argument in marks isn't very easily discoverable.\r\n\r\nRecently I tried to create a chart with colored line and points. My first try was with `mark_line(color=\"red\", point=True)`, similarly to [this example](https://altair-viz.github.io/gallery/line_chart_with_points.html), but it produced red line with blue points. This turned out to be current default behavior of Vega-lite (found in [this issue](https://github.com/vega/vega-lite/issues/6111)).\r\n\r\nAfter some digging, I found that `point` can be \"an object defining the properties of the overlayed points.\" (which seems to imply `OverlayMarkDef`, but simple dictionary showed to be enough). I think it would be really helpful to have this slightly easier to find. A \"Line chart with points\" example seems like a good place.\r\n\r\nSo, what do you think about adding some sort of variation of the following code snippet in that example?\r\n```python\r\nimport altair as alt\r\nimport numpy as np\r\nimport pandas as pd\r\n\r\nx = np.arange(100)\r\nsource = pd.DataFrame({\"x\": x, \"f(x)\": np.sin(x / 5)})\r\n\r\n# Only this part will be added\r\nalt.Chart(source).mark_line(\r\n color=\"red\", point={\"color\": \"red\"}\r\n).encode(\r\n x=\"x\",\r\n y=\"f(x)\"\r\n)\r\n```\r\n\r\n\r\nIf this is OK, I would like to help with PR. Thanks.\n", "before_files": [{"content": "\"\"\"\nLine Chart with Points\n----------------------\nThis chart shows a simple line chart with points marking each value.\n\"\"\"\n# category: line charts\nimport altair as alt\nimport numpy as np\nimport pandas as pd\n\nx = np.arange(100)\nsource = pd.DataFrame({\n 'x': x,\n 'f(x)': np.sin(x / 5)\n})\n\nalt.Chart(source).mark_line(point=True).encode(\n x='x',\n y='f(x)'\n)\n", "path": "altair/examples/line_chart_with_points.py"}], "after_files": [{"content": "\"\"\"\nLine Chart with Points\n----------------------\nThis chart shows a simple line chart with points marking each value. Use\n``point=True`` for points with default appearance or customize it with\n``OverlayMarkDef()``.\n\"\"\"\n# category: line charts\nimport altair as alt\nimport numpy as np\nimport pandas as pd\n\nx = np.arange(100)\nsource = pd.DataFrame({\n 'x': x,\n 'f(x)': np.sin(x / 5)\n})\n\nalt.Chart(source).mark_line(\n point=alt.OverlayMarkDef(color=\"red\")\n).encode(\n x='x',\n y='f(x)'\n)\n", "path": "altair/examples/line_chart_with_points.py"}]} | 787 | 210 |
gh_patches_debug_23834 | rasdani/github-patches | git_diff | NVIDIA__NVFlare-380 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The YAML loader should be replaced by safe_load or another more secure loader
To load YAML files from an unknown source, we should avoid using yaml's default `Loader`. A better way is to use either `safe_load` or another safe mechanism.
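For example, a minimal sketch of the safer pattern (using PyYAML's `safe_load`; the file name is illustrative):

```python
import yaml

with open("project.yml", "r") as f:
    # safe_load only constructs plain Python objects (dicts, lists,
    # strings, numbers), so a crafted YAML document cannot trigger
    # arbitrary object instantiation the way yaml.Loader can.
    project_dict = yaml.safe_load(f)
```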
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nvflare/lighter/provision.py`
Content:
```
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16
17 import argparse
18 import os
19 import pathlib
20 import shutil
21 import sys
22 import webbrowser
23
24 import yaml
25
26 from nvflare.fuel.utils.class_utils import instantiate_class
27 from nvflare.lighter.spec import Participant, Project, Provisioner
28
29
30 def main():
31 parser = argparse.ArgumentParser()
32 parser.add_argument("-p", "--project_file", type=str, default="project.yml", help="file to describe FL project")
33 parser.add_argument("-w", "--workspace", type=str, default="workspace", help="directory used by provision")
34 parser.add_argument("-c", "--custom_folder", type=str, default=".", help="additional folder to load python codes")
35 parser.add_argument(
36 "-u",
37 "--ui_tool",
38 action="store_true",
39 help="Run provisioning UI tool to generate project.yml file",
40 )
41
42 args = parser.parse_args()
43
44 file_path = pathlib.Path(__file__).parent.absolute()
45 current_path = os.getcwd()
46 custom_folder_path = os.path.join(current_path, args.custom_folder)
47 sys.path.append(custom_folder_path)
48 print("Path list (sys.path) for python codes loading: {}".format(sys.path))
49
50 # main project file
51 project_file = args.project_file
52 current_project_yml = os.path.join(current_path, "project.yml")
53 if len(sys.argv) == 1 and not os.path.exists(current_project_yml):
54 answer = input(
55 f"No project.yml found in current folder. Is it OK to generate one at {current_project_yml} for you? (y/N) "
56 )
57 if answer.strip().upper() == "Y":
58 shutil.copyfile(os.path.join(file_path, "project.yml"), current_project_yml)
59 print(f"{current_project_yml} was created. Please edit it to fit your FL configuration.")
60 exit(0)
61
62 if args.ui_tool:
63 ui_helper_path = os.path.join(file_path, "provision_helper.html")
64 ui_helper_url = f"file://{ui_helper_path}"
65 webbrowser.open_new_tab(ui_helper_url)
66 print(
67 "\n******\n"
68 "Now launching provisioning UI tool.\n"
69 "After generating project.yml in the browser and saving it to your local folder,\n"
70 "please re-run provision with -p option, pointing to the generated project.yml, to generate all packages.\n******\n"
71 )
72 exit(0)
73
74 workspace = args.workspace
75 workspace_full_path = os.path.join(current_path, workspace)
76
77 project_full_path = os.path.join(current_path, project_file)
78 print(f"Project yaml file: {project_full_path}.")
79
80 project_dict = yaml.load(open(project_full_path, "r"), Loader=yaml.Loader)
81 api_version = project_dict.get("api_version")
82 if api_version not in [3]:
83 raise ValueError(f"API version expected 3 but found {api_version}")
84
85 project_name = project_dict.get("name")
86 project_description = project_dict.get("description", "")
87 participants = list()
88 for p in project_dict.get("participants"):
89 participants.append(Participant(**p))
90 project = Project(name=project_name, description=project_description, participants=participants)
91
92 builders = list()
93 for b in project_dict.get("builders"):
94 path = b.get("path")
95 args = b.get("args")
96 builders.append(instantiate_class(path, args))
97
98 provisioner = Provisioner(workspace_full_path, builders)
99
100 provisioner.provision(project)
101
102
103 if __name__ == "__main__":
104 main()
105
```
Path: `nvflare/lighter/utils.py`
Content:
```
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import random
17 from base64 import b64encode
18
19 import yaml
20 from cryptography.hazmat.primitives import hashes
21 from cryptography.hazmat.primitives.asymmetric import padding
22
23
24 def generate_password():
25 s = "abcdefghijklmnopqrstuvwxyz01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ"
26 passlen = 16
27 p = "".join(random.sample(s, passlen))
28 return p
29
30
31 def sign_all(content_folder, signing_pri_key):
32 signatures = dict()
33 for f in os.listdir(content_folder):
34 path = os.path.join(content_folder, f)
35 if os.path.isfile(path):
36 signature = signing_pri_key.sign(
37 data=open(path, "rb").read(),
38 padding=padding.PSS(
39 mgf=padding.MGF1(hashes.SHA256()),
40 salt_length=padding.PSS.MAX_LENGTH,
41 ),
42 algorithm=hashes.SHA256(),
43 )
44 signatures[f] = b64encode(signature).decode("utf-8")
45 return signatures
46
47
48 def load_yaml(file_name):
49 return yaml.load(open(file_name, "r"), Loader=yaml.Loader)
50
51
52 def sh_replace(src, mapping_dict):
53 result = src
54 for k, v in mapping_dict.items():
55 result = result.replace("{~~" + k + "~~}", str(v))
56 return result
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nvflare/lighter/provision.py b/nvflare/lighter/provision.py
--- a/nvflare/lighter/provision.py
+++ b/nvflare/lighter/provision.py
@@ -21,10 +21,9 @@
import sys
import webbrowser
-import yaml
-
from nvflare.fuel.utils.class_utils import instantiate_class
from nvflare.lighter.spec import Participant, Project, Provisioner
+from nvflare.lighter.utils import load_yaml
def main():
@@ -77,7 +76,7 @@
project_full_path = os.path.join(current_path, project_file)
print(f"Project yaml file: {project_full_path}.")
- project_dict = yaml.load(open(project_full_path, "r"), Loader=yaml.Loader)
+ project_dict = load_yaml(project_full_path)
api_version = project_dict.get("api_version")
if api_version not in [3]:
raise ValueError(f"API version expected 3 but found {api_version}")
diff --git a/nvflare/lighter/utils.py b/nvflare/lighter/utils.py
--- a/nvflare/lighter/utils.py
+++ b/nvflare/lighter/utils.py
@@ -46,7 +46,7 @@
def load_yaml(file_name):
- return yaml.load(open(file_name, "r"), Loader=yaml.Loader)
+ return yaml.safe_load(open(file_name, "r"))
def sh_replace(src, mapping_dict):
| {"golden_diff": "diff --git a/nvflare/lighter/provision.py b/nvflare/lighter/provision.py\n--- a/nvflare/lighter/provision.py\n+++ b/nvflare/lighter/provision.py\n@@ -21,10 +21,9 @@\n import sys\n import webbrowser\n \n-import yaml\n-\n from nvflare.fuel.utils.class_utils import instantiate_class\n from nvflare.lighter.spec import Participant, Project, Provisioner\n+from nvflare.lighter.utils import load_yaml\n \n \n def main():\n@@ -77,7 +76,7 @@\n project_full_path = os.path.join(current_path, project_file)\n print(f\"Project yaml file: {project_full_path}.\")\n \n- project_dict = yaml.load(open(project_full_path, \"r\"), Loader=yaml.Loader)\n+ project_dict = load_yaml(project_full_path)\n api_version = project_dict.get(\"api_version\")\n if api_version not in [3]:\n raise ValueError(f\"API version expected 3 but found {api_version}\")\ndiff --git a/nvflare/lighter/utils.py b/nvflare/lighter/utils.py\n--- a/nvflare/lighter/utils.py\n+++ b/nvflare/lighter/utils.py\n@@ -46,7 +46,7 @@\n \n \n def load_yaml(file_name):\n- return yaml.load(open(file_name, \"r\"), Loader=yaml.Loader)\n+ return yaml.safe_load(open(file_name, \"r\"))\n \n \n def sh_replace(src, mapping_dict):\n", "issue": "Yaml loader should be replaced by safe_loader or other more secure loader\nTo load yaml files from unknown source, we should avoid using yaml's loader. A better way is to use either safe_loader or other mechanism.\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport argparse\nimport os\nimport pathlib\nimport shutil\nimport sys\nimport webbrowser\n\nimport yaml\n\nfrom nvflare.fuel.utils.class_utils import instantiate_class\nfrom nvflare.lighter.spec import Participant, Project, Provisioner\n\n\ndef main():\n parser = argparse.ArgumentParser()\n parser.add_argument(\"-p\", \"--project_file\", type=str, default=\"project.yml\", help=\"file to describe FL project\")\n parser.add_argument(\"-w\", \"--workspace\", type=str, default=\"workspace\", help=\"directory used by provision\")\n parser.add_argument(\"-c\", \"--custom_folder\", type=str, default=\".\", help=\"additional folder to load python codes\")\n parser.add_argument(\n \"-u\",\n \"--ui_tool\",\n action=\"store_true\",\n help=\"Run provisioning UI tool to generate project.yml file\",\n )\n\n args = parser.parse_args()\n\n file_path = pathlib.Path(__file__).parent.absolute()\n current_path = os.getcwd()\n custom_folder_path = os.path.join(current_path, args.custom_folder)\n sys.path.append(custom_folder_path)\n print(\"Path list (sys.path) for python codes loading: {}\".format(sys.path))\n\n # main project file\n project_file = args.project_file\n current_project_yml = os.path.join(current_path, \"project.yml\")\n if len(sys.argv) == 1 and not os.path.exists(current_project_yml):\n answer = input(\n f\"No project.yml found in current folder. Is it OK to generate one at {current_project_yml} for you? 
(y/N) \"\n )\n if answer.strip().upper() == \"Y\":\n shutil.copyfile(os.path.join(file_path, \"project.yml\"), current_project_yml)\n print(f\"{current_project_yml} was created. Please edit it to fit your FL configuration.\")\n exit(0)\n\n if args.ui_tool:\n ui_helper_path = os.path.join(file_path, \"provision_helper.html\")\n ui_helper_url = f\"file://{ui_helper_path}\"\n webbrowser.open_new_tab(ui_helper_url)\n print(\n \"\\n******\\n\"\n \"Now launching provisioning UI tool.\\n\"\n \"After generating project.yml in the browser and saving it to your local folder,\\n\"\n \"please re-run provision with -p option, pointing to the generated project.yml, to generate all packages.\\n******\\n\"\n )\n exit(0)\n\n workspace = args.workspace\n workspace_full_path = os.path.join(current_path, workspace)\n\n project_full_path = os.path.join(current_path, project_file)\n print(f\"Project yaml file: {project_full_path}.\")\n\n project_dict = yaml.load(open(project_full_path, \"r\"), Loader=yaml.Loader)\n api_version = project_dict.get(\"api_version\")\n if api_version not in [3]:\n raise ValueError(f\"API version expected 3 but found {api_version}\")\n\n project_name = project_dict.get(\"name\")\n project_description = project_dict.get(\"description\", \"\")\n participants = list()\n for p in project_dict.get(\"participants\"):\n participants.append(Participant(**p))\n project = Project(name=project_name, description=project_description, participants=participants)\n\n builders = list()\n for b in project_dict.get(\"builders\"):\n path = b.get(\"path\")\n args = b.get(\"args\")\n builders.append(instantiate_class(path, args))\n\n provisioner = Provisioner(workspace_full_path, builders)\n\n provisioner.provision(project)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "nvflare/lighter/provision.py"}, {"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport random\nfrom base64 import b64encode\n\nimport yaml\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives.asymmetric import padding\n\n\ndef generate_password():\n s = \"abcdefghijklmnopqrstuvwxyz01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n passlen = 16\n p = \"\".join(random.sample(s, passlen))\n return p\n\n\ndef sign_all(content_folder, signing_pri_key):\n signatures = dict()\n for f in os.listdir(content_folder):\n path = os.path.join(content_folder, f)\n if os.path.isfile(path):\n signature = signing_pri_key.sign(\n data=open(path, \"rb\").read(),\n padding=padding.PSS(\n mgf=padding.MGF1(hashes.SHA256()),\n salt_length=padding.PSS.MAX_LENGTH,\n ),\n algorithm=hashes.SHA256(),\n )\n signatures[f] = b64encode(signature).decode(\"utf-8\")\n return signatures\n\n\ndef load_yaml(file_name):\n return yaml.load(open(file_name, \"r\"), Loader=yaml.Loader)\n\n\ndef sh_replace(src, mapping_dict):\n result = src\n for k, v in mapping_dict.items():\n result = result.replace(\"{~~\" + k + \"~~}\", str(v))\n return result\n", "path": "nvflare/lighter/utils.py"}], "after_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport argparse\nimport os\nimport pathlib\nimport shutil\nimport sys\nimport webbrowser\n\nfrom nvflare.fuel.utils.class_utils import instantiate_class\nfrom nvflare.lighter.spec import Participant, Project, Provisioner\nfrom nvflare.lighter.utils import load_yaml\n\n\ndef main():\n parser = argparse.ArgumentParser()\n parser.add_argument(\"-p\", \"--project_file\", type=str, default=\"project.yml\", help=\"file to describe FL project\")\n parser.add_argument(\"-w\", \"--workspace\", type=str, default=\"workspace\", help=\"directory used by provision\")\n parser.add_argument(\"-c\", \"--custom_folder\", type=str, default=\".\", help=\"additional folder to load python codes\")\n parser.add_argument(\n \"-u\",\n \"--ui_tool\",\n action=\"store_true\",\n help=\"Run provisioning UI tool to generate project.yml file\",\n )\n\n args = parser.parse_args()\n\n file_path = pathlib.Path(__file__).parent.absolute()\n current_path = os.getcwd()\n custom_folder_path = os.path.join(current_path, args.custom_folder)\n sys.path.append(custom_folder_path)\n print(\"Path list (sys.path) for python codes loading: {}\".format(sys.path))\n\n # main project file\n project_file = args.project_file\n current_project_yml = os.path.join(current_path, \"project.yml\")\n if len(sys.argv) == 1 and not os.path.exists(current_project_yml):\n answer = input(\n f\"No project.yml found in current folder. Is it OK to generate one at {current_project_yml} for you? (y/N) \"\n )\n if answer.strip().upper() == \"Y\":\n shutil.copyfile(os.path.join(file_path, \"project.yml\"), current_project_yml)\n print(f\"{current_project_yml} was created. 
Please edit it to fit your FL configuration.\")\n exit(0)\n\n if args.ui_tool:\n ui_helper_path = os.path.join(file_path, \"provision_helper.html\")\n ui_helper_url = f\"file://{ui_helper_path}\"\n webbrowser.open_new_tab(ui_helper_url)\n print(\n \"\\n******\\n\"\n \"Now launching provisioning UI tool.\\n\"\n \"After generating project.yml in the browser and saving it to your local folder,\\n\"\n \"please re-run provision with -p option, pointing to the generated project.yml, to generate all packages.\\n******\\n\"\n )\n exit(0)\n\n workspace = args.workspace\n workspace_full_path = os.path.join(current_path, workspace)\n\n project_full_path = os.path.join(current_path, project_file)\n print(f\"Project yaml file: {project_full_path}.\")\n\n project_dict = load_yaml(project_full_path)\n api_version = project_dict.get(\"api_version\")\n if api_version not in [3]:\n raise ValueError(f\"API version expected 3 but found {api_version}\")\n\n project_name = project_dict.get(\"name\")\n project_description = project_dict.get(\"description\", \"\")\n participants = list()\n for p in project_dict.get(\"participants\"):\n participants.append(Participant(**p))\n project = Project(name=project_name, description=project_description, participants=participants)\n\n builders = list()\n for b in project_dict.get(\"builders\"):\n path = b.get(\"path\")\n args = b.get(\"args\")\n builders.append(instantiate_class(path, args))\n\n provisioner = Provisioner(workspace_full_path, builders)\n\n provisioner.provision(project)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "nvflare/lighter/provision.py"}, {"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport random\nfrom base64 import b64encode\n\nimport yaml\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives.asymmetric import padding\n\n\ndef generate_password():\n s = \"abcdefghijklmnopqrstuvwxyz01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n passlen = 16\n p = \"\".join(random.sample(s, passlen))\n return p\n\n\ndef sign_all(content_folder, signing_pri_key):\n signatures = dict()\n for f in os.listdir(content_folder):\n path = os.path.join(content_folder, f)\n if os.path.isfile(path):\n signature = signing_pri_key.sign(\n data=open(path, \"rb\").read(),\n padding=padding.PSS(\n mgf=padding.MGF1(hashes.SHA256()),\n salt_length=padding.PSS.MAX_LENGTH,\n ),\n algorithm=hashes.SHA256(),\n )\n signatures[f] = b64encode(signature).decode(\"utf-8\")\n return signatures\n\n\ndef load_yaml(file_name):\n return yaml.safe_load(open(file_name, \"r\"))\n\n\ndef sh_replace(src, mapping_dict):\n result = src\n for k, v in mapping_dict.items():\n result = result.replace(\"{~~\" + k + \"~~}\", str(v))\n return result\n", "path": "nvflare/lighter/utils.py"}]} | 1,980 | 315 |
gh_patches_debug_16051 | rasdani/github-patches | git_diff | docker__docker-py-812 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
requests 2.8.0 re-introduces bug #647
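For context, a hypothetical reproduction sketch (docker-py of this era exposes `docker.Client`; the socket path is the usual default):

```python
import docker

# With requests 2.8.0 installed, proxy selection inspects the URL's
# hostname; http+unix:// URLs have none, so the request errors out.
client = docker.Client(base_url='unix://var/run/docker.sock')
print(client.version())
```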
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/unixconn/unixconn.py`
Content:
```
1 # Copyright 2013 dotCloud inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import six
15 import requests.adapters
16 import socket
17
18 if six.PY3:
19 import http.client as httplib
20 else:
21 import httplib
22
23 try:
24 import requests.packages.urllib3 as urllib3
25 except ImportError:
26 import urllib3
27
28 RecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer
29
30
31 class UnixHTTPConnection(httplib.HTTPConnection, object):
32 def __init__(self, base_url, unix_socket, timeout=60):
33 httplib.HTTPConnection.__init__(self, 'localhost', timeout=timeout)
34 self.base_url = base_url
35 self.unix_socket = unix_socket
36 self.timeout = timeout
37
38 def connect(self):
39 sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
40 sock.settimeout(self.timeout)
41 sock.connect(self.unix_socket)
42 self.sock = sock
43
44
45 class UnixHTTPConnectionPool(urllib3.connectionpool.HTTPConnectionPool):
46 def __init__(self, base_url, socket_path, timeout=60):
47 urllib3.connectionpool.HTTPConnectionPool.__init__(
48 self, 'localhost', timeout=timeout
49 )
50 self.base_url = base_url
51 self.socket_path = socket_path
52 self.timeout = timeout
53
54 def _new_conn(self):
55 return UnixHTTPConnection(self.base_url, self.socket_path,
56 self.timeout)
57
58
59 class UnixAdapter(requests.adapters.HTTPAdapter):
60 def __init__(self, socket_url, timeout=60):
61 socket_path = socket_url.replace('http+unix://', '')
62 if not socket_path.startswith('/'):
63 socket_path = '/' + socket_path
64 self.socket_path = socket_path
65 self.timeout = timeout
66 self.pools = RecentlyUsedContainer(10,
67 dispose_func=lambda p: p.close())
68 super(UnixAdapter, self).__init__()
69
70 def get_connection(self, url, proxies=None):
71 with self.pools.lock:
72 pool = self.pools.get(url)
73 if pool:
74 return pool
75
76 pool = UnixHTTPConnectionPool(url,
77 self.socket_path,
78 self.timeout)
79 self.pools[url] = pool
80
81 return pool
82
83 def close(self):
84 self.pools.clear()
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/unixconn/unixconn.py b/docker/unixconn/unixconn.py
--- a/docker/unixconn/unixconn.py
+++ b/docker/unixconn/unixconn.py
@@ -73,12 +73,20 @@
if pool:
return pool
- pool = UnixHTTPConnectionPool(url,
- self.socket_path,
- self.timeout)
+ pool = UnixHTTPConnectionPool(
+ url, self.socket_path, self.timeout
+ )
self.pools[url] = pool
return pool
+ def request_url(self, request, proxies):
+ # The select_proxy utility in requests errors out when the provided URL
+ # doesn't have a hostname, like is the case when using a UNIX socket.
+ # Since proxies are an irrelevant notion in the case of UNIX sockets
+ # anyway, we simply return the path URL directly.
+ # See also: https://github.com/docker/docker-py/issues/811
+ return request.path_url
+
def close(self):
self.pools.clear()
| {"golden_diff": "diff --git a/docker/unixconn/unixconn.py b/docker/unixconn/unixconn.py\n--- a/docker/unixconn/unixconn.py\n+++ b/docker/unixconn/unixconn.py\n@@ -73,12 +73,20 @@\n if pool:\n return pool\n \n- pool = UnixHTTPConnectionPool(url,\n- self.socket_path,\n- self.timeout)\n+ pool = UnixHTTPConnectionPool(\n+ url, self.socket_path, self.timeout\n+ )\n self.pools[url] = pool\n \n return pool\n \n+ def request_url(self, request, proxies):\n+ # The select_proxy utility in requests errors out when the provided URL\n+ # doesn't have a hostname, like is the case when using a UNIX socket.\n+ # Since proxies are an irrelevant notion in the case of UNIX sockets\n+ # anyway, we simply return the path URL directly.\n+ # See also: https://github.com/docker/docker-py/issues/811\n+ return request.path_url\n+\n def close(self):\n self.pools.clear()\n", "issue": "requests 2.8.0 re-introduces bug #647\n\n", "before_files": [{"content": "# Copyright 2013 dotCloud inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport six\nimport requests.adapters\nimport socket\n\nif six.PY3:\n import http.client as httplib\nelse:\n import httplib\n\ntry:\n import requests.packages.urllib3 as urllib3\nexcept ImportError:\n import urllib3\n\nRecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer\n\n\nclass UnixHTTPConnection(httplib.HTTPConnection, object):\n def __init__(self, base_url, unix_socket, timeout=60):\n httplib.HTTPConnection.__init__(self, 'localhost', timeout=timeout)\n self.base_url = base_url\n self.unix_socket = unix_socket\n self.timeout = timeout\n\n def connect(self):\n sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n sock.settimeout(self.timeout)\n sock.connect(self.unix_socket)\n self.sock = sock\n\n\nclass UnixHTTPConnectionPool(urllib3.connectionpool.HTTPConnectionPool):\n def __init__(self, base_url, socket_path, timeout=60):\n urllib3.connectionpool.HTTPConnectionPool.__init__(\n self, 'localhost', timeout=timeout\n )\n self.base_url = base_url\n self.socket_path = socket_path\n self.timeout = timeout\n\n def _new_conn(self):\n return UnixHTTPConnection(self.base_url, self.socket_path,\n self.timeout)\n\n\nclass UnixAdapter(requests.adapters.HTTPAdapter):\n def __init__(self, socket_url, timeout=60):\n socket_path = socket_url.replace('http+unix://', '')\n if not socket_path.startswith('/'):\n socket_path = '/' + socket_path\n self.socket_path = socket_path\n self.timeout = timeout\n self.pools = RecentlyUsedContainer(10,\n dispose_func=lambda p: p.close())\n super(UnixAdapter, self).__init__()\n\n def get_connection(self, url, proxies=None):\n with self.pools.lock:\n pool = self.pools.get(url)\n if pool:\n return pool\n\n pool = UnixHTTPConnectionPool(url,\n self.socket_path,\n self.timeout)\n self.pools[url] = pool\n\n return pool\n\n def close(self):\n self.pools.clear()\n", "path": "docker/unixconn/unixconn.py"}], "after_files": [{"content": "# Copyright 2013 dotCloud inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not 
use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport six\nimport requests.adapters\nimport socket\n\nif six.PY3:\n import http.client as httplib\nelse:\n import httplib\n\ntry:\n import requests.packages.urllib3 as urllib3\nexcept ImportError:\n import urllib3\n\nRecentlyUsedContainer = urllib3._collections.RecentlyUsedContainer\n\n\nclass UnixHTTPConnection(httplib.HTTPConnection, object):\n def __init__(self, base_url, unix_socket, timeout=60):\n httplib.HTTPConnection.__init__(self, 'localhost', timeout=timeout)\n self.base_url = base_url\n self.unix_socket = unix_socket\n self.timeout = timeout\n\n def connect(self):\n sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n sock.settimeout(self.timeout)\n sock.connect(self.unix_socket)\n self.sock = sock\n\n\nclass UnixHTTPConnectionPool(urllib3.connectionpool.HTTPConnectionPool):\n def __init__(self, base_url, socket_path, timeout=60):\n urllib3.connectionpool.HTTPConnectionPool.__init__(\n self, 'localhost', timeout=timeout\n )\n self.base_url = base_url\n self.socket_path = socket_path\n self.timeout = timeout\n\n def _new_conn(self):\n return UnixHTTPConnection(self.base_url, self.socket_path,\n self.timeout)\n\n\nclass UnixAdapter(requests.adapters.HTTPAdapter):\n def __init__(self, socket_url, timeout=60):\n socket_path = socket_url.replace('http+unix://', '')\n if not socket_path.startswith('/'):\n socket_path = '/' + socket_path\n self.socket_path = socket_path\n self.timeout = timeout\n self.pools = RecentlyUsedContainer(10,\n dispose_func=lambda p: p.close())\n super(UnixAdapter, self).__init__()\n\n def get_connection(self, url, proxies=None):\n with self.pools.lock:\n pool = self.pools.get(url)\n if pool:\n return pool\n\n pool = UnixHTTPConnectionPool(\n url, self.socket_path, self.timeout\n )\n self.pools[url] = pool\n\n return pool\n\n def request_url(self, request, proxies):\n # The select_proxy utility in requests errors out when the provided URL\n # doesn't have a hostname, like is the case when using a UNIX socket.\n # Since proxies are an irrelevant notion in the case of UNIX sockets\n # anyway, we simply return the path URL directly.\n # See also: https://github.com/docker/docker-py/issues/811\n return request.path_url\n\n def close(self):\n self.pools.clear()\n", "path": "docker/unixconn/unixconn.py"}]} | 1,047 | 237 |
gh_patches_debug_13689 | rasdani/github-patches | git_diff | conan-io__conan-2592 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
conan search gives AttributeError: 'UserIO' object has no attribute 'warn'
Version: 1.1.1
OS: Linux Ubuntu 14.04
conda: v4.2.7
Repro steps:
* `conda create -n conan python=2.7`
* `source activate conan`
* `pip install conan`
* `conan search zlib/1.2.11@conan/stable -r=conan-center`
This gives the following Python stack trace:
```
(conan) ~ $ conan search zlib/1.2.11@conan/stable -r=conan-center
Traceback (most recent call last):
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/command.py", line 1131, in run
method(args[0][1:])
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/command.py", line 814, in search
outdated=args.outdated)
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/conan_api.py", line 64, in wrapper
return f(*args, **kwargs)
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/conan_api.py", line 595, in search_packages
outdated=outdated)
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/cmd/search.py", line 44, in search_packages
remote = RemoteRegistry(self._client_cache.registry, self._user_io).remote(remote)
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/remote_registry.py", line 95, in remote
remotes, _ = self._load()
File "/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/remote_registry.py", line 65, in _load
self._output.warn("Remotes registry file missing, creating default one in %s"
AttributeError: 'UserIO' object has no attribute 'warn'
ERROR: 'UserIO' object has no attribute 'warn'
```
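The traceback suggests a `UserIO` instance is being passed where an output object is expected — the warn method lives on the wrapped output, not on `UserIO` itself. A hypothetical illustration (names taken from the traceback):

```python
# UserIO wraps an output object that implements warn();
# calling warn() on the wrapper itself blows up.
user_io.out.warn("Remotes registry file missing")  # works
user_io.warn("Remotes registry file missing")      # AttributeError
```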
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/cmd/search.py`
Content:
```
1 from conans.search.search import DiskSearchManager, filter_outdated
2 from collections import OrderedDict
3 from conans.client.remote_registry import RemoteRegistry
4
5
6 class Search(object):
7 def __init__(self, client_cache, remote_manager, user_io):
8 self._client_cache = client_cache
9 self._remote_manager = remote_manager
10 self._user_io = user_io
11
12 def search_recipes(self, pattern, remote=None, case_sensitive=False):
13 ignorecase = not case_sensitive
14 if not remote:
15 return DiskSearchManager(self._client_cache).search_recipes(pattern, ignorecase)
16
17 registry = RemoteRegistry(self._client_cache.registry, self._user_io.out)
18 if remote == 'all':
19 remotes = registry.remotes
20 # We have to check if there is a remote called "all"
21 # Deprecate: 2.0 can remove this check
22 if 'all' not in (r.name for r in remotes):
23 references = {}
24 for remote in remotes:
25 result = self._remote_manager.search_recipes(remote, pattern, ignorecase)
26 if result:
27 references[remote.name] = result
28 return references
29 # single remote
30 remote = registry.remote(remote)
31 return self._remote_manager.search_recipes(remote, pattern, ignorecase)
32
33 def search_packages(self, reference=None, remote=None, query=None, outdated=False):
34 """ Return the single information saved in conan.vars about all the packages
35 or the packages which match with a pattern
36
37 Attributes:
38 pattern = string to match packages
39 remote = search on another origin to get packages info
40 packages_pattern = String query with binary
41 packages properties: "arch=x86 AND os=Windows"
42 """
43 if remote:
44 remote = RemoteRegistry(self._client_cache.registry, self._user_io).remote(remote)
45 packages_props = self._remote_manager.search_packages(remote, reference, query)
46 ordered_packages = OrderedDict(sorted(packages_props.items()))
47 manifest = self._remote_manager.get_conan_digest(reference, remote)
48 recipe_hash = manifest.summary_hash
49 else:
50 searcher = DiskSearchManager(self._client_cache)
51 packages_props = searcher.search_packages(reference, query)
52 ordered_packages = OrderedDict(sorted(packages_props.items()))
53 try:
54 recipe_hash = self._client_cache.load_manifest(reference).summary_hash
55 except IOError: # It could not exist in local
56 recipe_hash = None
57 if outdated and recipe_hash:
58 ordered_packages = filter_outdated(ordered_packages, recipe_hash)
59 return ordered_packages, reference, recipe_hash, query
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conans/client/cmd/search.py b/conans/client/cmd/search.py
--- a/conans/client/cmd/search.py
+++ b/conans/client/cmd/search.py
@@ -41,7 +41,7 @@
packages properties: "arch=x86 AND os=Windows"
"""
if remote:
- remote = RemoteRegistry(self._client_cache.registry, self._user_io).remote(remote)
+ remote = RemoteRegistry(self._client_cache.registry, self._user_io.out).remote(remote)
packages_props = self._remote_manager.search_packages(remote, reference, query)
ordered_packages = OrderedDict(sorted(packages_props.items()))
manifest = self._remote_manager.get_conan_digest(reference, remote)
| {"golden_diff": "diff --git a/conans/client/cmd/search.py b/conans/client/cmd/search.py\n--- a/conans/client/cmd/search.py\n+++ b/conans/client/cmd/search.py\n@@ -41,7 +41,7 @@\n packages properties: \"arch=x86 AND os=Windows\"\n \"\"\"\n if remote:\n- remote = RemoteRegistry(self._client_cache.registry, self._user_io).remote(remote)\n+ remote = RemoteRegistry(self._client_cache.registry, self._user_io.out).remote(remote)\n packages_props = self._remote_manager.search_packages(remote, reference, query)\n ordered_packages = OrderedDict(sorted(packages_props.items()))\n manifest = self._remote_manager.get_conan_digest(reference, remote)\n", "issue": "conan search gives AttributeError: 'UserIO' object has no attribute 'warn'\nVersion: 1.1.1\r\nOS: Linux Ubuntu 14.04 \r\nconda: v4.2.7\r\n\r\nRepro steps:\r\n* `conda create -n conan python=2.7`\r\n* `source activate conan`\r\n* `pip install conan`\r\n* `conan search zlib/1.2.11@conan/stable -r=conan-center`\r\n\r\nGives the following python stack:\r\n\r\n```\r\n(conan) ~ $ conan search zlib/1.2.11@conan/stable -r=conan-center\r\nTraceback (most recent call last):\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/command.py\", line 1131, in run\r\n method(args[0][1:])\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/command.py\", line 814, in search\r\n outdated=args.outdated)\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/conan_api.py\", line 64, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/conan_api.py\", line 595, in search_packages\r\n outdated=outdated)\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/cmd/search.py\", line 44, in search_packages\r\n remote = RemoteRegistry(self._client_cache.registry, self._user_io).remote(remote)\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/remote_registry.py\", line 95, in remote\r\n remotes, _ = self._load()\r\n File \"/home/mgodbolt/apps/miniconda/envs/conan/lib/python2.7/site-packages/conans/client/remote_registry.py\", line 65, in _load\r\n self._output.warn(\"Remotes registry file missing, creating default one in %s\"\r\nAttributeError: 'UserIO' object has no attribute 'warn'\r\n\r\nERROR: 'UserIO' object has no attribute 'warn'\r\n```\r\n\n", "before_files": [{"content": "from conans.search.search import DiskSearchManager, filter_outdated\nfrom collections import OrderedDict\nfrom conans.client.remote_registry import RemoteRegistry\n\n\nclass Search(object):\n def __init__(self, client_cache, remote_manager, user_io):\n self._client_cache = client_cache\n self._remote_manager = remote_manager\n self._user_io = user_io\n\n def search_recipes(self, pattern, remote=None, case_sensitive=False):\n ignorecase = not case_sensitive\n if not remote:\n return DiskSearchManager(self._client_cache).search_recipes(pattern, ignorecase)\n\n registry = RemoteRegistry(self._client_cache.registry, self._user_io.out)\n if remote == 'all':\n remotes = registry.remotes\n # We have to check if there is a remote called \"all\"\n # Deprecate: 2.0 can remove this check\n if 'all' not in (r.name for r in remotes):\n references = {}\n for remote in remotes:\n result = self._remote_manager.search_recipes(remote, pattern, ignorecase)\n if result:\n references[remote.name] = result\n return references\n # single 
remote\n remote = registry.remote(remote)\n return self._remote_manager.search_recipes(remote, pattern, ignorecase)\n\n def search_packages(self, reference=None, remote=None, query=None, outdated=False):\n \"\"\" Return the single information saved in conan.vars about all the packages\n or the packages which match with a pattern\n\n Attributes:\n pattern = string to match packages\n remote = search on another origin to get packages info\n packages_pattern = String query with binary\n packages properties: \"arch=x86 AND os=Windows\"\n \"\"\"\n if remote:\n remote = RemoteRegistry(self._client_cache.registry, self._user_io).remote(remote)\n packages_props = self._remote_manager.search_packages(remote, reference, query)\n ordered_packages = OrderedDict(sorted(packages_props.items()))\n manifest = self._remote_manager.get_conan_digest(reference, remote)\n recipe_hash = manifest.summary_hash\n else:\n searcher = DiskSearchManager(self._client_cache)\n packages_props = searcher.search_packages(reference, query)\n ordered_packages = OrderedDict(sorted(packages_props.items()))\n try:\n recipe_hash = self._client_cache.load_manifest(reference).summary_hash\n except IOError: # It could not exist in local\n recipe_hash = None\n if outdated and recipe_hash:\n ordered_packages = filter_outdated(ordered_packages, recipe_hash)\n return ordered_packages, reference, recipe_hash, query\n", "path": "conans/client/cmd/search.py"}], "after_files": [{"content": "from conans.search.search import DiskSearchManager, filter_outdated\nfrom collections import OrderedDict\nfrom conans.client.remote_registry import RemoteRegistry\n\n\nclass Search(object):\n def __init__(self, client_cache, remote_manager, user_io):\n self._client_cache = client_cache\n self._remote_manager = remote_manager\n self._user_io = user_io\n\n def search_recipes(self, pattern, remote=None, case_sensitive=False):\n ignorecase = not case_sensitive\n if not remote:\n return DiskSearchManager(self._client_cache).search_recipes(pattern, ignorecase)\n\n registry = RemoteRegistry(self._client_cache.registry, self._user_io.out)\n if remote == 'all':\n remotes = registry.remotes\n # We have to check if there is a remote called \"all\"\n # Deprecate: 2.0 can remove this check\n if 'all' not in (r.name for r in remotes):\n references = {}\n for remote in remotes:\n result = self._remote_manager.search_recipes(remote, pattern, ignorecase)\n if result:\n references[remote.name] = result\n return references\n # single remote\n remote = registry.remote(remote)\n return self._remote_manager.search_recipes(remote, pattern, ignorecase)\n\n def search_packages(self, reference=None, remote=None, query=None, outdated=False):\n \"\"\" Return the single information saved in conan.vars about all the packages\n or the packages which match with a pattern\n\n Attributes:\n pattern = string to match packages\n remote = search on another origin to get packages info\n packages_pattern = String query with binary\n packages properties: \"arch=x86 AND os=Windows\"\n \"\"\"\n if remote:\n remote = RemoteRegistry(self._client_cache.registry, self._user_io.out).remote(remote)\n packages_props = self._remote_manager.search_packages(remote, reference, query)\n ordered_packages = OrderedDict(sorted(packages_props.items()))\n manifest = self._remote_manager.get_conan_digest(reference, remote)\n recipe_hash = manifest.summary_hash\n else:\n searcher = DiskSearchManager(self._client_cache)\n packages_props = searcher.search_packages(reference, query)\n ordered_packages = 
OrderedDict(sorted(packages_props.items()))\n try:\n recipe_hash = self._client_cache.load_manifest(reference).summary_hash\n except IOError: # It could not exist in local\n recipe_hash = None\n if outdated and recipe_hash:\n ordered_packages = filter_outdated(ordered_packages, recipe_hash)\n return ordered_packages, reference, recipe_hash, query\n", "path": "conans/client/cmd/search.py"}]} | 1,453 | 151 |
gh_patches_debug_38054 | rasdani/github-patches | git_diff | translate__pootle-4613 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TP creation email should use BCC
Let's BCC by default when Pootle is communicating with lists of people. Currently, when you create a TP, all relevant people are listed in the To field.
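A minimal sketch of the intended behaviour with Django's mail API (assuming `recipients` is already computed):

```python
from django.core.mail import EmailMultiAlternatives

# Putting everyone in bcc keeps the recipient list private;
# the visible To field can stay empty.
mail = EmailMultiAlternatives(subject, message, from_email=None,
                              to=[], bcc=recipients)
mail.send(fail_silently=False)
```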
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/core/mail.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django.core.mail import EmailMultiAlternatives, get_connection
10
11
12 def send_mail(subject, message, from_email, recipient_list,
13 fail_silently=False, auth_user=None, auth_password=None,
14 connection=None, html_message=None, headers=None):
15 """Override django send_mail function to allow use of custom email headers.
16 """
17
18 connection = connection or get_connection(username=auth_user,
19 password=auth_password,
20 fail_silently=fail_silently)
21
22 mail = EmailMultiAlternatives(subject, message,
23 from_email, recipient_list,
24 connection=connection, headers=headers)
25
26 if html_message:
27 mail.attach_alternative(html_message, 'text/html')
28
29 return mail.send()
30
```
Path: `pootle/apps/pootle_translationproject/receivers.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 from django.contrib.auth import get_user_model
11 from django.core.mail import send_mail
12 from django.db.models import Q
13 from django.dispatch import receiver
14 from django.template.loader import render_to_string
15 from django.utils.translation import ugettext_lazy as _
16
17 from pootle.core.url_helpers import urljoin
18
19 from .models import TranslationProject
20 from .signals import tp_init_failed_async, tp_inited_async
21
22
23 def get_recipients(project):
24 User = get_user_model()
25 return list(set(User.objects.filter(
26 Q(permissionset__positive_permissions__codename="administrate",
27 permissionset__directory__pootle_path=project.pootle_path) |
28 Q(is_superuser=True)).values_list("email", flat=True)))
29
30
31 @receiver(tp_inited_async, sender=TranslationProject)
32 def tp_inited_async(instance, response_url, **kwargs):
33 ctx = {"tp": instance,
34 "url": urljoin(response_url, instance.get_absolute_url())}
35 message = render_to_string(
36 'projects/admin/email/translation_project_created.txt', ctx)
37 subject = _(u"Translation project (%s) created" % instance)
38 recipients = get_recipients(instance.project)
39 send_mail(subject, message, from_email=None,
40 recipient_list=recipients, fail_silently=True)
41
42
43 @receiver(tp_init_failed_async, sender=TranslationProject)
44 def tp_init_failed_async(instance, **kwargs):
45 ctx = {"tp": instance}
46 message = render_to_string(
47 'projects/admin/email/translation_project_creation_failed.txt', ctx)
48 subject = _(u"Translation project (%s) creation failed" % instance)
49 recipients = get_recipients(instance.project)
50 send_mail(subject, message, from_email=None,
51 recipient_list=recipients, fail_silently=True)
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/pootle_translationproject/receivers.py b/pootle/apps/pootle_translationproject/receivers.py
--- a/pootle/apps/pootle_translationproject/receivers.py
+++ b/pootle/apps/pootle_translationproject/receivers.py
@@ -8,12 +8,12 @@
# AUTHORS file for copyright and authorship information.
from django.contrib.auth import get_user_model
-from django.core.mail import send_mail
from django.db.models import Q
from django.dispatch import receiver
from django.template.loader import render_to_string
from django.utils.translation import ugettext_lazy as _
+from pootle.core.mail import send_mail
from pootle.core.url_helpers import urljoin
from .models import TranslationProject
@@ -37,7 +37,7 @@
subject = _(u"Translation project (%s) created" % instance)
recipients = get_recipients(instance.project)
send_mail(subject, message, from_email=None,
- recipient_list=recipients, fail_silently=True)
+ recipient_list=[], fail_silently=True, bcc=recipients)
@receiver(tp_init_failed_async, sender=TranslationProject)
@@ -48,4 +48,4 @@
subject = _(u"Translation project (%s) creation failed" % instance)
recipients = get_recipients(instance.project)
send_mail(subject, message, from_email=None,
- recipient_list=recipients, fail_silently=True)
+ recipient_list=[], fail_silently=True, bcc=recipients)
diff --git a/pootle/core/mail.py b/pootle/core/mail.py
--- a/pootle/core/mail.py
+++ b/pootle/core/mail.py
@@ -11,7 +11,8 @@
def send_mail(subject, message, from_email, recipient_list,
fail_silently=False, auth_user=None, auth_password=None,
- connection=None, html_message=None, headers=None):
+ connection=None, html_message=None, headers=None,
+ cc=None, bcc=None):
"""Override django send_mail function to allow use of custom email headers.
"""
@@ -21,7 +22,8 @@
mail = EmailMultiAlternatives(subject, message,
from_email, recipient_list,
- connection=connection, headers=headers)
+ connection=connection, headers=headers,
+ cc=cc, bcc=bcc)
if html_message:
mail.attach_alternative(html_message, 'text/html')
| {"golden_diff": "diff --git a/pootle/apps/pootle_translationproject/receivers.py b/pootle/apps/pootle_translationproject/receivers.py\n--- a/pootle/apps/pootle_translationproject/receivers.py\n+++ b/pootle/apps/pootle_translationproject/receivers.py\n@@ -8,12 +8,12 @@\n # AUTHORS file for copyright and authorship information.\n \n from django.contrib.auth import get_user_model\n-from django.core.mail import send_mail\n from django.db.models import Q\n from django.dispatch import receiver\n from django.template.loader import render_to_string\n from django.utils.translation import ugettext_lazy as _\n \n+from pootle.core.mail import send_mail\n from pootle.core.url_helpers import urljoin\n \n from .models import TranslationProject\n@@ -37,7 +37,7 @@\n subject = _(u\"Translation project (%s) created\" % instance)\n recipients = get_recipients(instance.project)\n send_mail(subject, message, from_email=None,\n- recipient_list=recipients, fail_silently=True)\n+ recipient_list=[], fail_silently=True, bcc=recipients)\n \n \n @receiver(tp_init_failed_async, sender=TranslationProject)\n@@ -48,4 +48,4 @@\n subject = _(u\"Translation project (%s) creation failed\" % instance)\n recipients = get_recipients(instance.project)\n send_mail(subject, message, from_email=None,\n- recipient_list=recipients, fail_silently=True)\n+ recipient_list=[], fail_silently=True, bcc=recipients)\ndiff --git a/pootle/core/mail.py b/pootle/core/mail.py\n--- a/pootle/core/mail.py\n+++ b/pootle/core/mail.py\n@@ -11,7 +11,8 @@\n \n def send_mail(subject, message, from_email, recipient_list,\n fail_silently=False, auth_user=None, auth_password=None,\n- connection=None, html_message=None, headers=None):\n+ connection=None, html_message=None, headers=None,\n+ cc=None, bcc=None):\n \"\"\"Override django send_mail function to allow use of custom email headers.\n \"\"\"\n \n@@ -21,7 +22,8 @@\n \n mail = EmailMultiAlternatives(subject, message,\n from_email, recipient_list,\n- connection=connection, headers=headers)\n+ connection=connection, headers=headers,\n+ cc=cc, bcc=bcc)\n \n if html_message:\n mail.attach_alternative(html_message, 'text/html')\n", "issue": "TP creation email should use BCC\nLet's BCC by default when Pootle is communicating to lists of people. When you create a TP all relevant people are listed in the To field.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.core.mail import EmailMultiAlternatives, get_connection\n\n\ndef send_mail(subject, message, from_email, recipient_list,\n fail_silently=False, auth_user=None, auth_password=None,\n connection=None, html_message=None, headers=None):\n \"\"\"Override django send_mail function to allow use of custom email headers.\n \"\"\"\n\n connection = connection or get_connection(username=auth_user,\n password=auth_password,\n fail_silently=fail_silently)\n\n mail = EmailMultiAlternatives(subject, message,\n from_email, recipient_list,\n connection=connection, headers=headers)\n\n if html_message:\n mail.attach_alternative(html_message, 'text/html')\n\n return mail.send()\n", "path": "pootle/core/mail.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. 
It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.contrib.auth import get_user_model\nfrom django.core.mail import send_mail\nfrom django.db.models import Q\nfrom django.dispatch import receiver\nfrom django.template.loader import render_to_string\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom pootle.core.url_helpers import urljoin\n\nfrom .models import TranslationProject\nfrom .signals import tp_init_failed_async, tp_inited_async\n\n\ndef get_recipients(project):\n User = get_user_model()\n return list(set(User.objects.filter(\n Q(permissionset__positive_permissions__codename=\"administrate\",\n permissionset__directory__pootle_path=project.pootle_path) |\n Q(is_superuser=True)).values_list(\"email\", flat=True)))\n\n\n@receiver(tp_inited_async, sender=TranslationProject)\ndef tp_inited_async(instance, response_url, **kwargs):\n ctx = {\"tp\": instance,\n \"url\": urljoin(response_url, instance.get_absolute_url())}\n message = render_to_string(\n 'projects/admin/email/translation_project_created.txt', ctx)\n subject = _(u\"Translation project (%s) created\" % instance)\n recipients = get_recipients(instance.project)\n send_mail(subject, message, from_email=None,\n recipient_list=recipients, fail_silently=True)\n\n\n@receiver(tp_init_failed_async, sender=TranslationProject)\ndef tp_init_failed_async(instance, **kwargs):\n ctx = {\"tp\": instance}\n message = render_to_string(\n 'projects/admin/email/translation_project_creation_failed.txt', ctx)\n subject = _(u\"Translation project (%s) creation failed\" % instance)\n recipients = get_recipients(instance.project)\n send_mail(subject, message, from_email=None,\n recipient_list=recipients, fail_silently=True)\n", "path": "pootle/apps/pootle_translationproject/receivers.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.core.mail import EmailMultiAlternatives, get_connection\n\n\ndef send_mail(subject, message, from_email, recipient_list,\n fail_silently=False, auth_user=None, auth_password=None,\n connection=None, html_message=None, headers=None,\n cc=None, bcc=None):\n \"\"\"Override django send_mail function to allow use of custom email headers.\n \"\"\"\n\n connection = connection or get_connection(username=auth_user,\n password=auth_password,\n fail_silently=fail_silently)\n\n mail = EmailMultiAlternatives(subject, message,\n from_email, recipient_list,\n connection=connection, headers=headers,\n cc=cc, bcc=bcc)\n\n if html_message:\n mail.attach_alternative(html_message, 'text/html')\n\n return mail.send()\n", "path": "pootle/core/mail.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.contrib.auth import get_user_model\nfrom django.db.models import Q\nfrom django.dispatch import receiver\nfrom django.template.loader import render_to_string\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom pootle.core.mail import send_mail\nfrom pootle.core.url_helpers import urljoin\n\nfrom .models import TranslationProject\nfrom .signals import tp_init_failed_async, tp_inited_async\n\n\ndef get_recipients(project):\n User = get_user_model()\n return list(set(User.objects.filter(\n Q(permissionset__positive_permissions__codename=\"administrate\",\n permissionset__directory__pootle_path=project.pootle_path) |\n Q(is_superuser=True)).values_list(\"email\", flat=True)))\n\n\n@receiver(tp_inited_async, sender=TranslationProject)\ndef tp_inited_async(instance, response_url, **kwargs):\n ctx = {\"tp\": instance,\n \"url\": urljoin(response_url, instance.get_absolute_url())}\n message = render_to_string(\n 'projects/admin/email/translation_project_created.txt', ctx)\n subject = _(u\"Translation project (%s) created\" % instance)\n recipients = get_recipients(instance.project)\n send_mail(subject, message, from_email=None,\n recipient_list=[], fail_silently=True, bcc=recipients)\n\n\n@receiver(tp_init_failed_async, sender=TranslationProject)\ndef tp_init_failed_async(instance, **kwargs):\n ctx = {\"tp\": instance}\n message = render_to_string(\n 'projects/admin/email/translation_project_creation_failed.txt', ctx)\n subject = _(u\"Translation project (%s) creation failed\" % instance)\n recipients = get_recipients(instance.project)\n send_mail(subject, message, from_email=None,\n recipient_list=[], fail_silently=True, bcc=recipients)\n", "path": "pootle/apps/pootle_translationproject/receivers.py"}]} | 1,144 | 544 |
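With the patch applied, a caller passes the audience through the new `bcc` parameter and leaves `recipient_list` empty. A hypothetical call for illustration (the address is made up):
```python
from pootle.core.mail import send_mail

send_mail(
    subject="Translation project (de/myproject) created",
    message="...",
    from_email=None,
    recipient_list=[],          # keep the To header empty
    fail_silently=True,
    bcc=["admin@example.com"],  # real recipients travel in Bcc
)
```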
gh_patches_debug_8480 | rasdani/github-patches | git_diff | elastic__apm-agent-python-580 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DB interactions not traced when using context manager with psycopg2 connections or cursors
When using a context manager with psycopg2 connections or cursors, DB interactions are not captured in spans.
The code below generates a span for `psycopg2.connect`, but not the query:
```
with psycopg2.connect(DSN) as conn:
with conn.cursor() as curs:
curs.execute("SELECT * FROM data.portfolio;")
portfolios = curs.fetchall()
```
whereas the following captures both spans as expected:
```
conn = psycopg2.connect(DSN)
curs = conn.cursor()
curs.execute("SELECT * FROM data.portfolio;")
portfolios = curs.fetchall()
curs.close()
conn.close()
```
--- END ISSUE ---
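The difference between the two snippets comes down to the context-manager protocol: if a wrapping proxy's `__enter__` simply delegates, the `with` statement hands back the raw psycopg2 object, and everything inside the block bypasses the instrumented proxy. A stripped-down illustration of the pattern (these are stand-in classes, not the actual elastic-apm ones):
```python
class TracingProxy:
    def __init__(self, wrapped):
        self.__wrapped__ = wrapped

    def execute(self, sql):
        print("traced:", sql)                 # stand-in for span creation
        return self.__wrapped__.execute(sql)

    def __enter__(self):
        return self.__wrapped__.__enter__()   # leaks the unwrapped object

    def __exit__(self, *exc_info):
        return self.__wrapped__.__exit__(*exc_info)


class RawCursor:
    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        return False

    def execute(self, sql):
        print("untraced:", sql)


with TracingProxy(RawCursor()) as cur:
    cur.execute("SELECT 1")                   # prints "untraced: SELECT 1"
```
The fix, visible in the patch further down, is to re-wrap the returned object, e.g. `return TracingProxy(self.__wrapped__.__enter__())`.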
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/instrumentation/packages/psycopg2.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from elasticapm.instrumentation.packages.dbapi2 import (
32 ConnectionProxy,
33 CursorProxy,
34 DbApi2Instrumentation,
35 extract_signature,
36 )
37 from elasticapm.traces import capture_span
38 from elasticapm.utils import default_ports
39
40
41 class PGCursorProxy(CursorProxy):
42 provider_name = "postgresql"
43
44 def _bake_sql(self, sql):
45 # if this is a Composable object, use its `as_string` method
46 # see http://initd.org/psycopg/docs/sql.html
47 if hasattr(sql, "as_string"):
48 return sql.as_string(self.__wrapped__)
49 return sql
50
51 def extract_signature(self, sql):
52 return extract_signature(sql)
53
54
55 class PGConnectionProxy(ConnectionProxy):
56 cursor_proxy = PGCursorProxy
57
58
59 class Psycopg2Instrumentation(DbApi2Instrumentation):
60 name = "psycopg2"
61
62 instrument_list = [("psycopg2", "connect")]
63
64 def call(self, module, method, wrapped, instance, args, kwargs):
65 signature = "psycopg2.connect"
66
67 host = kwargs.get("host")
68 if host:
69 signature += " " + str(host)
70
71 port = kwargs.get("port")
72 if port:
73 port = str(port)
74 if int(port) != default_ports.get("postgresql"):
75 signature += ":" + port
76 else:
77 # Parse connection string and extract host/port
78 pass
79
80 with capture_span(signature, span_type="db", span_subtype="postgresql", span_action="connect"):
81 return PGConnectionProxy(wrapped(*args, **kwargs))
82
83
84 class Psycopg2RegisterTypeInstrumentation(DbApi2Instrumentation):
85 name = "psycopg2-register-type"
86
87 instrument_list = [
88 ("psycopg2.extensions", "register_type"),
89 # specifically instrument `register_json` as it bypasses `register_type`
90 ("psycopg2._json", "register_json"),
91 ]
92
93 def call(self, module, method, wrapped, instance, args, kwargs):
94 if "conn_or_curs" in kwargs and hasattr(kwargs["conn_or_curs"], "__wrapped__"):
95 kwargs["conn_or_curs"] = kwargs["conn_or_curs"].__wrapped__
96 # register_type takes the connection as second argument
97 elif len(args) == 2 and hasattr(args[1], "__wrapped__"):
98 args = (args[0], args[1].__wrapped__)
99 # register_json takes the connection as first argument, and can have
100 # several more arguments
101 elif method == "register_json":
102 if args and hasattr(args[0], "__wrapped__"):
103 args = (args[0].__wrapped__,) + args[1:]
104
105 return wrapped(*args, **kwargs)
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticapm/instrumentation/packages/psycopg2.py b/elasticapm/instrumentation/packages/psycopg2.py
--- a/elasticapm/instrumentation/packages/psycopg2.py
+++ b/elasticapm/instrumentation/packages/psycopg2.py
@@ -51,10 +51,16 @@
def extract_signature(self, sql):
return extract_signature(sql)
+ def __enter__(self):
+ return PGCursorProxy(self.__wrapped__.__enter__())
+
class PGConnectionProxy(ConnectionProxy):
cursor_proxy = PGCursorProxy
+ def __enter__(self):
+ return PGConnectionProxy(self.__wrapped__.__enter__())
+
class Psycopg2Instrumentation(DbApi2Instrumentation):
name = "psycopg2"
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/psycopg2.py b/elasticapm/instrumentation/packages/psycopg2.py\n--- a/elasticapm/instrumentation/packages/psycopg2.py\n+++ b/elasticapm/instrumentation/packages/psycopg2.py\n@@ -51,10 +51,16 @@\n def extract_signature(self, sql):\n return extract_signature(sql)\n \n+ def __enter__(self):\n+ return PGCursorProxy(self.__wrapped__.__enter__())\n+\n \n class PGConnectionProxy(ConnectionProxy):\n cursor_proxy = PGCursorProxy\n \n+ def __enter__(self):\n+ return PGConnectionProxy(self.__wrapped__.__enter__())\n+\n \n class Psycopg2Instrumentation(DbApi2Instrumentation):\n name = \"psycopg2\"\n", "issue": "DB interactions not traced when using context manager with psycopg2 connections or cursors\nWhen using a context manager with psycopg2 connections or cursors, db interactions are not captured in spans.\r\n\r\nThe code below generates a span for `psycopg2.connect`, but not the query:\r\n```\r\nwith psycopg2.connect(DSN) as conn:\r\n with conn.cursor() as curs:\r\n curs.execute(\"SELECT * FROM data.portfolio;\")\r\n portfolios = curs.fetchall()\r\n```\r\n\r\nwhereas the following captures both spans as expected:\r\n```\r\nconn = psycopg2.connect(DSN)\r\ncurs = conn.cursor()\r\ncurs.execute(\"SELECT * FROM data.portfolio;\")\r\nportfolios = curs.fetchall()\r\ncurs.close()\r\nconn.close()\r\n```\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom elasticapm.instrumentation.packages.dbapi2 import (\n ConnectionProxy,\n CursorProxy,\n DbApi2Instrumentation,\n extract_signature,\n)\nfrom elasticapm.traces import capture_span\nfrom elasticapm.utils import default_ports\n\n\nclass PGCursorProxy(CursorProxy):\n provider_name = \"postgresql\"\n\n def _bake_sql(self, sql):\n # if this is a Composable object, use its `as_string` method\n # see http://initd.org/psycopg/docs/sql.html\n if hasattr(sql, \"as_string\"):\n return sql.as_string(self.__wrapped__)\n return sql\n\n def extract_signature(self, sql):\n return extract_signature(sql)\n\n\nclass PGConnectionProxy(ConnectionProxy):\n cursor_proxy = PGCursorProxy\n\n\nclass Psycopg2Instrumentation(DbApi2Instrumentation):\n name = \"psycopg2\"\n\n instrument_list = [(\"psycopg2\", \"connect\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n signature = \"psycopg2.connect\"\n\n host = kwargs.get(\"host\")\n if host:\n signature += \" \" + str(host)\n\n port = kwargs.get(\"port\")\n if port:\n port = str(port)\n if int(port) != default_ports.get(\"postgresql\"):\n signature += \":\" + port\n else:\n # Parse connection string and extract host/port\n pass\n\n with capture_span(signature, span_type=\"db\", span_subtype=\"postgresql\", span_action=\"connect\"):\n return PGConnectionProxy(wrapped(*args, **kwargs))\n\n\nclass Psycopg2RegisterTypeInstrumentation(DbApi2Instrumentation):\n name = \"psycopg2-register-type\"\n\n instrument_list = [\n (\"psycopg2.extensions\", \"register_type\"),\n # specifically instrument `register_json` as it bypasses `register_type`\n (\"psycopg2._json\", \"register_json\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if \"conn_or_curs\" in kwargs and hasattr(kwargs[\"conn_or_curs\"], \"__wrapped__\"):\n kwargs[\"conn_or_curs\"] = kwargs[\"conn_or_curs\"].__wrapped__\n # register_type takes the connection as second argument\n elif len(args) == 2 and hasattr(args[1], \"__wrapped__\"):\n args = (args[0], args[1].__wrapped__)\n # register_json takes the connection as first argument, and can have\n # several more arguments\n elif method == \"register_json\":\n if args and hasattr(args[0], \"__wrapped__\"):\n args = (args[0].__wrapped__,) + args[1:]\n\n return wrapped(*args, **kwargs)\n", "path": "elasticapm/instrumentation/packages/psycopg2.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of 
the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom elasticapm.instrumentation.packages.dbapi2 import (\n ConnectionProxy,\n CursorProxy,\n DbApi2Instrumentation,\n extract_signature,\n)\nfrom elasticapm.traces import capture_span\nfrom elasticapm.utils import default_ports\n\n\nclass PGCursorProxy(CursorProxy):\n provider_name = \"postgresql\"\n\n def _bake_sql(self, sql):\n # if this is a Composable object, use its `as_string` method\n # see http://initd.org/psycopg/docs/sql.html\n if hasattr(sql, \"as_string\"):\n return sql.as_string(self.__wrapped__)\n return sql\n\n def extract_signature(self, sql):\n return extract_signature(sql)\n\n def __enter__(self):\n return PGCursorProxy(self.__wrapped__.__enter__())\n\n\nclass PGConnectionProxy(ConnectionProxy):\n cursor_proxy = PGCursorProxy\n\n def __enter__(self):\n return PGConnectionProxy(self.__wrapped__.__enter__())\n\n\nclass Psycopg2Instrumentation(DbApi2Instrumentation):\n name = \"psycopg2\"\n\n instrument_list = [(\"psycopg2\", \"connect\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n signature = \"psycopg2.connect\"\n\n host = kwargs.get(\"host\")\n if host:\n signature += \" \" + str(host)\n\n port = kwargs.get(\"port\")\n if port:\n port = str(port)\n if int(port) != default_ports.get(\"postgresql\"):\n signature += \":\" + port\n else:\n # Parse connection string and extract host/port\n pass\n\n with capture_span(signature, span_type=\"db\", span_subtype=\"postgresql\", span_action=\"connect\"):\n return PGConnectionProxy(wrapped(*args, **kwargs))\n\n\nclass Psycopg2RegisterTypeInstrumentation(DbApi2Instrumentation):\n name = \"psycopg2-register-type\"\n\n instrument_list = [\n (\"psycopg2.extensions\", \"register_type\"),\n # specifically instrument `register_json` as it bypasses `register_type`\n (\"psycopg2._json\", \"register_json\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if \"conn_or_curs\" in kwargs and hasattr(kwargs[\"conn_or_curs\"], \"__wrapped__\"):\n kwargs[\"conn_or_curs\"] = kwargs[\"conn_or_curs\"].__wrapped__\n # register_type takes the connection as second argument\n elif len(args) == 2 and hasattr(args[1], \"__wrapped__\"):\n args = (args[0], args[1].__wrapped__)\n # register_json takes the connection as first argument, and can have\n # several more arguments\n elif method == \"register_json\":\n if args and hasattr(args[0], \"__wrapped__\"):\n args = (args[0].__wrapped__,) + args[1:]\n\n return wrapped(*args, **kwargs)\n", "path": "elasticapm/instrumentation/packages/psycopg2.py"}]} | 1,549 | 174 |
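Assuming a reachable database and active instrumentation, the reporter's original failing shape should now be traced, because both `__enter__` overrides re-wrap their results. A hedged smoke test (the DSN is a placeholder):
```python
import psycopg2

DSN = "dbname=test user=postgres"        # placeholder connection string

with psycopg2.connect(DSN) as conn:      # conn is a PGConnectionProxy again
    with conn.cursor() as curs:          # curs is a PGCursorProxy again
        curs.execute("SELECT 1")         # should now be captured as a span
```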
gh_patches_debug_143 | rasdani/github-patches | git_diff | ManimCommunity__manim-126 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove argparse from setup.py
https://github.com/ManimCommunity/manim/blob/cf8c5b9938abafba9f6c2c1aeff9e15c8edbfdd1/setup.py#L17
Remove `argparse` from setup.py: it is part of the Python standard library and does not need to be listed in `requirements.txt` or `setup.py`.
--- END ISSUE ---
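For context, `argparse` has been part of the CPython standard library since 2.7 and 3.2, so listing it in `install_requires` can only pull in the long-unmaintained PyPI backport. A quick check, assuming a stock CPython install:
```python
# Confirms argparse resolves to the standard library, not a pip package.
import argparse
import sysconfig

stdlib = sysconfig.get_paths()["stdlib"]
print(argparse.__file__.startswith(stdlib))  # True on a stock install
```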
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_namespace_packages
2
3 setup(
4 name="manimlib",
5 version="0.2.0",
6 description="Animation engine for explanatory math videos",
7 license="MIT",
8 packages=find_namespace_packages(),
9 package_data={ "manim": ["*.tex"] },
10 entry_points={
11 "console_scripts": [
12 "manim=manim.__main__:main",
13 "manimcm=manim.__main__:main",
14 ]
15 },
16 install_requires=[
17 "argparse",
18 "colour",
19 "numpy",
20 "Pillow",
21 "progressbar",
22 "scipy",
23 "tqdm",
24 "pycairo",
25 "pydub",
26 "pygments",
27 "pyreadline; sys_platform == 'win32'",
28 "rich",
29 ],
30 )
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -14,7 +14,6 @@
]
},
install_requires=[
- "argparse",
"colour",
"numpy",
"Pillow",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -14,7 +14,6 @@\n ]\n },\n install_requires=[\n- \"argparse\",\n \"colour\",\n \"numpy\",\n \"Pillow\",\n", "issue": "Remove argparse from setup.py\nhttps://github.com/ManimCommunity/manim/blob/cf8c5b9938abafba9f6c2c1aeff9e15c8edbfdd1/setup.py#L17\r\nRemove `argparse` from setup.py as it is a default library and need not be mentioned in `requirements.txt` and `setup.py`.\n", "before_files": [{"content": "from setuptools import setup, find_namespace_packages\n\nsetup(\n name=\"manimlib\",\n version=\"0.2.0\",\n description=\"Animation engine for explanatory math videos\",\n license=\"MIT\",\n packages=find_namespace_packages(),\n package_data={ \"manim\": [\"*.tex\"] },\n entry_points={\n \"console_scripts\": [\n \"manim=manim.__main__:main\",\n \"manimcm=manim.__main__:main\",\n ]\n },\n install_requires=[\n \"argparse\",\n \"colour\",\n \"numpy\",\n \"Pillow\",\n \"progressbar\",\n \"scipy\",\n \"tqdm\",\n \"pycairo\",\n \"pydub\",\n \"pygments\",\n \"pyreadline; sys_platform == 'win32'\",\n \"rich\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_namespace_packages\n\nsetup(\n name=\"manimlib\",\n version=\"0.2.0\",\n description=\"Animation engine for explanatory math videos\",\n license=\"MIT\",\n packages=find_namespace_packages(),\n package_data={ \"manim\": [\"*.tex\"] },\n entry_points={\n \"console_scripts\": [\n \"manim=manim.__main__:main\",\n \"manimcm=manim.__main__:main\",\n ]\n },\n install_requires=[\n \"colour\",\n \"numpy\",\n \"Pillow\",\n \"progressbar\",\n \"scipy\",\n \"tqdm\",\n \"pycairo\",\n \"pydub\",\n \"pygments\",\n \"pyreadline; sys_platform == 'win32'\",\n \"rich\",\n ],\n)\n", "path": "setup.py"}]} | 569 | 59 |
gh_patches_debug_9582 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-596 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error with Service Provider Stadtreinigung Leipzig / stadtreinigung-leipzig.de
Hi everyone,
Since 03.01.2023 (when I first noticed it), WCS can no longer retrieve data from Stadtreinigung Leipzig.
The following error is displayed:
```
fetch failed for source Stadtreinigung Leipzig: Traceback (most recent call last):
  File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py", line 134, in fetch
    entries = self._source.fetch()
  File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py", line 34, in fetch
    raise Exception(f"street not found: {self._street}")
Exception: street not found: Pflugkstraße
```
My configuration.yaml:
```yaml
waste_collection_schedule:
  sources:
    - name: stadtreinigung_leipzig_de
      args:
        street: Pflugkstraße
        house_number: 1
      calendar_title: Abfallkalender
```
I've been experimenting with this for a few days, but I can't find a solution to the problem. Is it possible that the API has changed or is broken?
Thanks for your help...
--- END ISSUE ---
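The traceback points at the street lookup, which suggests the provider changed its API rather than the street disappearing. A diagnostic sketch comparing the old endpoint (taken from the source file below) with the replacement endpoint used by the patch shown later in this entry; live responses obviously depend on the provider:
```python
import requests

street = "Pflugkstraße"

old = requests.get(
    "https://stadtreinigung-leipzig.de/rest/wastecalendarstreets",
    params={"name": street},
)
new = requests.get(
    "https://stadtreinigung-leipzig.de/rest/Navision/Streets",
    params={"old_format": 1, "search": street},
)
print(old.status_code, new.status_code)
```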
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py`
Content:
```
1 import json
2 import logging
3
4 import requests
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6 from waste_collection_schedule.service.ICS import ICS
7
8 _LOGGER = logging.getLogger(__name__)
9
10 TITLE = "Stadtreinigung Leipzig"
11 DESCRIPTION = "Source for Stadtreinigung Leipzig."
12 URL = "https://stadtreinigung-leipzig.de"
13 TEST_CASES = {"Bahnhofsallee": {"street": "Bahnhofsallee", "house_number": 7}}
14
15
16 class Source:
17 def __init__(self, street, house_number):
18 self._street = street
19 self._house_number = house_number
20 self._ics = ICS()
21
22 def fetch(self):
23 params = {
24 "name": self._street,
25 }
26
27 # get list of streets and house numbers
28 r = requests.get(
29 "https://stadtreinigung-leipzig.de/rest/wastecalendarstreets", params=params
30 )
31
32 data = json.loads(r.text)
33 if len(data["results"]) == 0:
34 raise Exception(f"street not found: {self._street}")
35 street_entry = data["results"].get(self._street)
36 if street_entry is None:
37 raise Exception(f"street not found: {self._street}")
38
39 id = street_entry.get(str(self._house_number))
40 if id is None:
41 raise Exception(f"house_number not found: {self._house_number}")
42
43 # get ics file
44 params = {
45 "position_nos": id,
46 }
47 r = requests.get(
48 "https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics",
49 params=params,
50 )
51 dates = self._ics.convert(r.text)
52
53 entries = []
54 for d in dates:
55 entries.append(Collection(d[0], d[1].removesuffix(", ")))
56 return entries
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py
@@ -21,12 +21,13 @@
def fetch(self):
params = {
- "name": self._street,
+ "old_format": 1,
+ "search": self._street,
}
# get list of streets and house numbers
r = requests.get(
- "https://stadtreinigung-leipzig.de/rest/wastecalendarstreets", params=params
+ "https://stadtreinigung-leipzig.de/rest/Navision/Streets", params=params
)
data = json.loads(r.text)
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py\n@@ -21,12 +21,13 @@\n \n def fetch(self):\n params = {\n- \"name\": self._street,\n+ \"old_format\": 1,\n+ \"search\": self._street,\n }\n \n # get list of streets and house numbers\n r = requests.get(\n- \"https://stadtreinigung-leipzig.de/rest/wastecalendarstreets\", params=params\n+ \"https://stadtreinigung-leipzig.de/rest/Navision/Streets\", params=params\n )\n \n data = json.loads(r.text)\n", "issue": "Error with Service Provider Stadtreinigung Leipzig / stadtreinigung-leipzig.de\nHi everyone,\r\nSince 03.01.2023 (this is where I noticed it), WCS can no longer retrieve data from Stadtwerke Leipzig.\r\nThe following error is displayed:\r\n\r\nfetch failed for source Stadtreinigung Leipzig: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py\", line 134, in fetch entries = self._source.fetch() File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py\", line 34, in fetch raise Exception(f\"street not found: {self._street}\") Exception: street not found: Pflugkstra\u00dfe\r\n\r\nMy configuration.yaml:\r\nwaste_collection_schedule:\r\n sources:\r\n - name: stadtreinigung_leipzig_de\r\n args:\r\n street: Pflugkstra\u00dfe\r\n house_number: 1\r\n calendar_title: Abfallkalender\r\n\r\nI've been trying around here for a few days, but I can't find a solution to the problem. 
Is it possible that the API has been changed/defective?\r\nThanks for your help...\n", "before_files": [{"content": "import json\nimport logging\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\n_LOGGER = logging.getLogger(__name__)\n\nTITLE = \"Stadtreinigung Leipzig\"\nDESCRIPTION = \"Source for Stadtreinigung Leipzig.\"\nURL = \"https://stadtreinigung-leipzig.de\"\nTEST_CASES = {\"Bahnhofsallee\": {\"street\": \"Bahnhofsallee\", \"house_number\": 7}}\n\n\nclass Source:\n def __init__(self, street, house_number):\n self._street = street\n self._house_number = house_number\n self._ics = ICS()\n\n def fetch(self):\n params = {\n \"name\": self._street,\n }\n\n # get list of streets and house numbers\n r = requests.get(\n \"https://stadtreinigung-leipzig.de/rest/wastecalendarstreets\", params=params\n )\n\n data = json.loads(r.text)\n if len(data[\"results\"]) == 0:\n raise Exception(f\"street not found: {self._street}\")\n street_entry = data[\"results\"].get(self._street)\n if street_entry is None:\n raise Exception(f\"street not found: {self._street}\")\n\n id = street_entry.get(str(self._house_number))\n if id is None:\n raise Exception(f\"house_number not found: {self._house_number}\")\n\n # get ics file\n params = {\n \"position_nos\": id,\n }\n r = requests.get(\n \"https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics\",\n params=params,\n )\n dates = self._ics.convert(r.text)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1].removesuffix(\", \")))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py"}], "after_files": [{"content": "import json\nimport logging\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\n_LOGGER = logging.getLogger(__name__)\n\nTITLE = \"Stadtreinigung Leipzig\"\nDESCRIPTION = \"Source for Stadtreinigung Leipzig.\"\nURL = \"https://stadtreinigung-leipzig.de\"\nTEST_CASES = {\"Bahnhofsallee\": {\"street\": \"Bahnhofsallee\", \"house_number\": 7}}\n\n\nclass Source:\n def __init__(self, street, house_number):\n self._street = street\n self._house_number = house_number\n self._ics = ICS()\n\n def fetch(self):\n params = {\n \"old_format\": 1,\n \"search\": self._street,\n }\n\n # get list of streets and house numbers\n r = requests.get(\n \"https://stadtreinigung-leipzig.de/rest/Navision/Streets\", params=params\n )\n\n data = json.loads(r.text)\n if len(data[\"results\"]) == 0:\n raise Exception(f\"street not found: {self._street}\")\n street_entry = data[\"results\"].get(self._street)\n if street_entry is None:\n raise Exception(f\"street not found: {self._street}\")\n\n id = street_entry.get(str(self._house_number))\n if id is None:\n raise Exception(f\"house_number not found: {self._house_number}\")\n\n # get ics file\n params = {\n \"position_nos\": id,\n }\n r = requests.get(\n \"https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics\",\n params=params,\n )\n dates = self._ics.convert(r.text)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1].removesuffix(\", \")))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/stadtreinigung_leipzig_de.py"}]} | 1,056 | 219 |
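With the endpoint swap applied, the reporter's configuration should resolve the street again. Exercised directly, the source looks roughly like this; the import path assumes the inner `waste_collection_schedule` package is importable, and network access is required:
```python
from waste_collection_schedule.source.stadtreinigung_leipzig_de import Source

source = Source(street="Pflugkstraße", house_number=1)
for entry in source.fetch():
    print(entry)   # one Collection per pickup date
```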
gh_patches_debug_37670 | rasdani/github-patches | git_diff | biolab__orange3-3842 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Replicability in Neural networks and Random forests
Follow-up from #3715: Neural networks and Random forests should have a checkbox such as `Replicable training`, which decides whether the random seed is fixed (to 0) or random.
In Neural networks: add the checkbox.
In Random forest: remove the spin box.
--- END ISSUE ---
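Background on what the checkbox toggles: with a fixed `random_state`, repeated training runs produce identical models, while `None` reseeds each run. Orange's `RandomForestLearner` wraps scikit-learn, so plain scikit-learn shows the semantics:
```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
a = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
b = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print((a.predict(X) == b.predict(X)).all())  # True: identical forests
```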
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Orange/widgets/model/owrandomforest.py`
Content:
```
1 from AnyQt.QtCore import Qt
2
3 from Orange.data import Table
4 from Orange.modelling import RandomForestLearner
5 from Orange.widgets import settings, gui
6 from Orange.widgets.utils.owlearnerwidget import OWBaseLearner
7 from Orange.widgets.utils.widgetpreview import WidgetPreview
8 from Orange.widgets.widget import Msg
9
10
11 class OWRandomForest(OWBaseLearner):
12 name = "Random Forest"
13 description = "Predict using an ensemble of decision trees."
14 icon = "icons/RandomForest.svg"
15 replaces = [
16 "Orange.widgets.classify.owrandomforest.OWRandomForest",
17 "Orange.widgets.regression.owrandomforestregression.OWRandomForestRegression",
18 ]
19 priority = 40
20 keywords = []
21
22 LEARNER = RandomForestLearner
23
24 n_estimators = settings.Setting(10)
25 max_features = settings.Setting(5)
26 use_max_features = settings.Setting(False)
27 random_state = settings.Setting(0)
28 use_random_state = settings.Setting(False)
29 max_depth = settings.Setting(3)
30 use_max_depth = settings.Setting(False)
31 min_samples_split = settings.Setting(5)
32 use_min_samples_split = settings.Setting(True)
33 index_output = settings.Setting(0)
34
35 class Error(OWBaseLearner.Error):
36 not_enough_features = Msg("Insufficient number of attributes ({})")
37
38 def add_main_layout(self):
39 box = gui.vBox(self.controlArea, 'Basic Properties')
40 self.n_estimators_spin = gui.spin(
41 box, self, "n_estimators", minv=1, maxv=10000, controlWidth=80,
42 alignment=Qt.AlignRight, label="Number of trees: ",
43 callback=self.settings_changed)
44 self.max_features_spin = gui.spin(
45 box, self, "max_features", 2, 50, controlWidth=80,
46 label="Number of attributes considered at each split: ",
47 callback=self.settings_changed, checked="use_max_features",
48 checkCallback=self.settings_changed, alignment=Qt.AlignRight,)
49 self.random_state_spin = gui.spin(
50 box, self, "random_state", 0, 2 ** 31 - 1, controlWidth=80,
51 label="Fixed seed for random generator: ", alignment=Qt.AlignRight,
52 callback=self.settings_changed, checked="use_random_state",
53 checkCallback=self.settings_changed)
54
55 box = gui.vBox(self.controlArea, "Growth Control")
56 self.max_depth_spin = gui.spin(
57 box, self, "max_depth", 1, 50, controlWidth=80,
58 label="Limit depth of individual trees: ", alignment=Qt.AlignRight,
59 callback=self.settings_changed, checked="use_max_depth",
60 checkCallback=self.settings_changed)
61 self.min_samples_split_spin = gui.spin(
62 box, self, "min_samples_split", 2, 1000, controlWidth=80,
63 label="Do not split subsets smaller than: ",
64 callback=self.settings_changed, checked="use_min_samples_split",
65 checkCallback=self.settings_changed, alignment=Qt.AlignRight)
66
67 def create_learner(self):
68 common_args = {"n_estimators": self.n_estimators}
69 if self.use_max_features:
70 common_args["max_features"] = self.max_features
71 if self.use_random_state:
72 common_args["random_state"] = self.random_state
73 if self.use_max_depth:
74 common_args["max_depth"] = self.max_depth
75 if self.use_min_samples_split:
76 common_args["min_samples_split"] = self.min_samples_split
77
78 return self.LEARNER(preprocessors=self.preprocessors, **common_args)
79
80 def check_data(self):
81 self.Error.not_enough_features.clear()
82 if super().check_data():
83 n_features = len(self.data.domain.attributes)
84 if self.use_max_features and self.max_features > n_features:
85 self.Error.not_enough_features(n_features)
86 self.valid_data = False
87 return self.valid_data
88
89 def get_learner_parameters(self):
90 """Called by send report to list the parameters of the learner."""
91 return (
92 ("Number of trees", self.n_estimators),
93 ("Maximal number of considered features",
94 self.max_features if self.use_max_features else "unlimited"),
95 ("Fixed random seed", self.use_random_state and self.random_state),
96 ("Maximal tree depth",
97 self.max_depth if self.use_max_depth else "unlimited"),
98 ("Stop splitting nodes with maximum instances",
99 self.min_samples_split if self.use_min_samples_split else "unlimited")
100 )
101
102
103 if __name__ == "__main__": # pragma: no cover
104 WidgetPreview(OWRandomForest).run(Table("iris"))
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Orange/widgets/model/owrandomforest.py b/Orange/widgets/model/owrandomforest.py
--- a/Orange/widgets/model/owrandomforest.py
+++ b/Orange/widgets/model/owrandomforest.py
@@ -24,7 +24,6 @@
n_estimators = settings.Setting(10)
max_features = settings.Setting(5)
use_max_features = settings.Setting(False)
- random_state = settings.Setting(0)
use_random_state = settings.Setting(False)
max_depth = settings.Setting(3)
use_max_depth = settings.Setting(False)
@@ -46,11 +45,9 @@
label="Number of attributes considered at each split: ",
callback=self.settings_changed, checked="use_max_features",
checkCallback=self.settings_changed, alignment=Qt.AlignRight,)
- self.random_state_spin = gui.spin(
- box, self, "random_state", 0, 2 ** 31 - 1, controlWidth=80,
- label="Fixed seed for random generator: ", alignment=Qt.AlignRight,
- callback=self.settings_changed, checked="use_random_state",
- checkCallback=self.settings_changed)
+ self.random_state = gui.checkBox(
+ box, self, "use_random_state", label="Replicable training",
+ callback=self.settings_changed)
box = gui.vBox(self.controlArea, "Growth Control")
self.max_depth_spin = gui.spin(
@@ -69,7 +66,7 @@
if self.use_max_features:
common_args["max_features"] = self.max_features
if self.use_random_state:
- common_args["random_state"] = self.random_state
+ common_args["random_state"] = 0
if self.use_max_depth:
common_args["max_depth"] = self.max_depth
if self.use_min_samples_split:
@@ -92,7 +89,7 @@
("Number of trees", self.n_estimators),
("Maximal number of considered features",
self.max_features if self.use_max_features else "unlimited"),
- ("Fixed random seed", self.use_random_state and self.random_state),
+ ("Replicable training", ["No", "Yes"][self.use_random_state]),
("Maximal tree depth",
self.max_depth if self.use_max_depth else "unlimited"),
("Stop splitting nodes with maximum instances",
| {"golden_diff": "diff --git a/Orange/widgets/model/owrandomforest.py b/Orange/widgets/model/owrandomforest.py\n--- a/Orange/widgets/model/owrandomforest.py\n+++ b/Orange/widgets/model/owrandomforest.py\n@@ -24,7 +24,6 @@\n n_estimators = settings.Setting(10)\n max_features = settings.Setting(5)\n use_max_features = settings.Setting(False)\n- random_state = settings.Setting(0)\n use_random_state = settings.Setting(False)\n max_depth = settings.Setting(3)\n use_max_depth = settings.Setting(False)\n@@ -46,11 +45,9 @@\n label=\"Number of attributes considered at each split: \",\n callback=self.settings_changed, checked=\"use_max_features\",\n checkCallback=self.settings_changed, alignment=Qt.AlignRight,)\n- self.random_state_spin = gui.spin(\n- box, self, \"random_state\", 0, 2 ** 31 - 1, controlWidth=80,\n- label=\"Fixed seed for random generator: \", alignment=Qt.AlignRight,\n- callback=self.settings_changed, checked=\"use_random_state\",\n- checkCallback=self.settings_changed)\n+ self.random_state = gui.checkBox(\n+ box, self, \"use_random_state\", label=\"Replicable training\",\n+ callback=self.settings_changed)\n \n box = gui.vBox(self.controlArea, \"Growth Control\")\n self.max_depth_spin = gui.spin(\n@@ -69,7 +66,7 @@\n if self.use_max_features:\n common_args[\"max_features\"] = self.max_features\n if self.use_random_state:\n- common_args[\"random_state\"] = self.random_state\n+ common_args[\"random_state\"] = 0\n if self.use_max_depth:\n common_args[\"max_depth\"] = self.max_depth\n if self.use_min_samples_split:\n@@ -92,7 +89,7 @@\n (\"Number of trees\", self.n_estimators),\n (\"Maximal number of considered features\",\n self.max_features if self.use_max_features else \"unlimited\"),\n- (\"Fixed random seed\", self.use_random_state and self.random_state),\n+ (\"Replicable training\", [\"No\", \"Yes\"][self.use_random_state]),\n (\"Maximal tree depth\",\n self.max_depth if self.use_max_depth else \"unlimited\"),\n (\"Stop splitting nodes with maximum instances\",\n", "issue": "Replicability in Neural networks and Random forests\nFollow up from #3715: Neural networks and Random forests should have a checkbox `Replicable training` or something like this, which would decide whether random seed is fixed (to 0) or \"random\".\r\n\r\nIn Neural networks: add the check box.\r\n\r\nIn Random forest: remove the spin box.\n", "before_files": [{"content": "from AnyQt.QtCore import Qt\n\nfrom Orange.data import Table\nfrom Orange.modelling import RandomForestLearner\nfrom Orange.widgets import settings, gui\nfrom Orange.widgets.utils.owlearnerwidget import OWBaseLearner\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\nfrom Orange.widgets.widget import Msg\n\n\nclass OWRandomForest(OWBaseLearner):\n name = \"Random Forest\"\n description = \"Predict using an ensemble of decision trees.\"\n icon = \"icons/RandomForest.svg\"\n replaces = [\n \"Orange.widgets.classify.owrandomforest.OWRandomForest\",\n \"Orange.widgets.regression.owrandomforestregression.OWRandomForestRegression\",\n ]\n priority = 40\n keywords = []\n\n LEARNER = RandomForestLearner\n\n n_estimators = settings.Setting(10)\n max_features = settings.Setting(5)\n use_max_features = settings.Setting(False)\n random_state = settings.Setting(0)\n use_random_state = settings.Setting(False)\n max_depth = settings.Setting(3)\n use_max_depth = settings.Setting(False)\n min_samples_split = settings.Setting(5)\n use_min_samples_split = settings.Setting(True)\n index_output = settings.Setting(0)\n\n class 
Error(OWBaseLearner.Error):\n not_enough_features = Msg(\"Insufficient number of attributes ({})\")\n\n def add_main_layout(self):\n box = gui.vBox(self.controlArea, 'Basic Properties')\n self.n_estimators_spin = gui.spin(\n box, self, \"n_estimators\", minv=1, maxv=10000, controlWidth=80,\n alignment=Qt.AlignRight, label=\"Number of trees: \",\n callback=self.settings_changed)\n self.max_features_spin = gui.spin(\n box, self, \"max_features\", 2, 50, controlWidth=80,\n label=\"Number of attributes considered at each split: \",\n callback=self.settings_changed, checked=\"use_max_features\",\n checkCallback=self.settings_changed, alignment=Qt.AlignRight,)\n self.random_state_spin = gui.spin(\n box, self, \"random_state\", 0, 2 ** 31 - 1, controlWidth=80,\n label=\"Fixed seed for random generator: \", alignment=Qt.AlignRight,\n callback=self.settings_changed, checked=\"use_random_state\",\n checkCallback=self.settings_changed)\n\n box = gui.vBox(self.controlArea, \"Growth Control\")\n self.max_depth_spin = gui.spin(\n box, self, \"max_depth\", 1, 50, controlWidth=80,\n label=\"Limit depth of individual trees: \", alignment=Qt.AlignRight,\n callback=self.settings_changed, checked=\"use_max_depth\",\n checkCallback=self.settings_changed)\n self.min_samples_split_spin = gui.spin(\n box, self, \"min_samples_split\", 2, 1000, controlWidth=80,\n label=\"Do not split subsets smaller than: \",\n callback=self.settings_changed, checked=\"use_min_samples_split\",\n checkCallback=self.settings_changed, alignment=Qt.AlignRight)\n\n def create_learner(self):\n common_args = {\"n_estimators\": self.n_estimators}\n if self.use_max_features:\n common_args[\"max_features\"] = self.max_features\n if self.use_random_state:\n common_args[\"random_state\"] = self.random_state\n if self.use_max_depth:\n common_args[\"max_depth\"] = self.max_depth\n if self.use_min_samples_split:\n common_args[\"min_samples_split\"] = self.min_samples_split\n\n return self.LEARNER(preprocessors=self.preprocessors, **common_args)\n\n def check_data(self):\n self.Error.not_enough_features.clear()\n if super().check_data():\n n_features = len(self.data.domain.attributes)\n if self.use_max_features and self.max_features > n_features:\n self.Error.not_enough_features(n_features)\n self.valid_data = False\n return self.valid_data\n\n def get_learner_parameters(self):\n \"\"\"Called by send report to list the parameters of the learner.\"\"\"\n return (\n (\"Number of trees\", self.n_estimators),\n (\"Maximal number of considered features\",\n self.max_features if self.use_max_features else \"unlimited\"),\n (\"Fixed random seed\", self.use_random_state and self.random_state),\n (\"Maximal tree depth\",\n self.max_depth if self.use_max_depth else \"unlimited\"),\n (\"Stop splitting nodes with maximum instances\",\n self.min_samples_split if self.use_min_samples_split else \"unlimited\")\n )\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWRandomForest).run(Table(\"iris\"))\n", "path": "Orange/widgets/model/owrandomforest.py"}], "after_files": [{"content": "from AnyQt.QtCore import Qt\n\nfrom Orange.data import Table\nfrom Orange.modelling import RandomForestLearner\nfrom Orange.widgets import settings, gui\nfrom Orange.widgets.utils.owlearnerwidget import OWBaseLearner\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\nfrom Orange.widgets.widget import Msg\n\n\nclass OWRandomForest(OWBaseLearner):\n name = \"Random Forest\"\n description = \"Predict using an ensemble of decision trees.\"\n icon = 
\"icons/RandomForest.svg\"\n replaces = [\n \"Orange.widgets.classify.owrandomforest.OWRandomForest\",\n \"Orange.widgets.regression.owrandomforestregression.OWRandomForestRegression\",\n ]\n priority = 40\n keywords = []\n\n LEARNER = RandomForestLearner\n\n n_estimators = settings.Setting(10)\n max_features = settings.Setting(5)\n use_max_features = settings.Setting(False)\n use_random_state = settings.Setting(False)\n max_depth = settings.Setting(3)\n use_max_depth = settings.Setting(False)\n min_samples_split = settings.Setting(5)\n use_min_samples_split = settings.Setting(True)\n index_output = settings.Setting(0)\n\n class Error(OWBaseLearner.Error):\n not_enough_features = Msg(\"Insufficient number of attributes ({})\")\n\n def add_main_layout(self):\n box = gui.vBox(self.controlArea, 'Basic Properties')\n self.n_estimators_spin = gui.spin(\n box, self, \"n_estimators\", minv=1, maxv=10000, controlWidth=80,\n alignment=Qt.AlignRight, label=\"Number of trees: \",\n callback=self.settings_changed)\n self.max_features_spin = gui.spin(\n box, self, \"max_features\", 2, 50, controlWidth=80,\n label=\"Number of attributes considered at each split: \",\n callback=self.settings_changed, checked=\"use_max_features\",\n checkCallback=self.settings_changed, alignment=Qt.AlignRight,)\n self.random_state = gui.checkBox(\n box, self, \"use_random_state\", label=\"Replicable training\",\n callback=self.settings_changed)\n\n box = gui.vBox(self.controlArea, \"Growth Control\")\n self.max_depth_spin = gui.spin(\n box, self, \"max_depth\", 1, 50, controlWidth=80,\n label=\"Limit depth of individual trees: \", alignment=Qt.AlignRight,\n callback=self.settings_changed, checked=\"use_max_depth\",\n checkCallback=self.settings_changed)\n self.min_samples_split_spin = gui.spin(\n box, self, \"min_samples_split\", 2, 1000, controlWidth=80,\n label=\"Do not split subsets smaller than: \",\n callback=self.settings_changed, checked=\"use_min_samples_split\",\n checkCallback=self.settings_changed, alignment=Qt.AlignRight)\n\n def create_learner(self):\n common_args = {\"n_estimators\": self.n_estimators}\n if self.use_max_features:\n common_args[\"max_features\"] = self.max_features\n if self.use_random_state:\n common_args[\"random_state\"] = 0\n if self.use_max_depth:\n common_args[\"max_depth\"] = self.max_depth\n if self.use_min_samples_split:\n common_args[\"min_samples_split\"] = self.min_samples_split\n\n return self.LEARNER(preprocessors=self.preprocessors, **common_args)\n\n def check_data(self):\n self.Error.not_enough_features.clear()\n if super().check_data():\n n_features = len(self.data.domain.attributes)\n if self.use_max_features and self.max_features > n_features:\n self.Error.not_enough_features(n_features)\n self.valid_data = False\n return self.valid_data\n\n def get_learner_parameters(self):\n \"\"\"Called by send report to list the parameters of the learner.\"\"\"\n return (\n (\"Number of trees\", self.n_estimators),\n (\"Maximal number of considered features\",\n self.max_features if self.use_max_features else \"unlimited\"),\n (\"Replicable training\", [\"No\", \"Yes\"][self.use_random_state]),\n (\"Maximal tree depth\",\n self.max_depth if self.use_max_depth else \"unlimited\"),\n (\"Stop splitting nodes with maximum instances\",\n self.min_samples_split if self.use_min_samples_split else \"unlimited\")\n )\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWRandomForest).run(Table(\"iris\"))\n", "path": "Orange/widgets/model/owrandomforest.py"}]} | 1,556 | 516 |
gh_patches_debug_15202 | rasdani/github-patches | git_diff | vega__altair-1265 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
line_percent.py does not work offline
We need all examples to work offline. Currently ``line_percent.py`` uses ``pd.read_json`` to load its data from a URL.
The example should probably use a URL plus a filter.
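Something like this might do it — a hedged sketch, untested, that keeps the URL and moves the row filtering into the spec via `transform_filter`:

```python
import altair as alt
from vega_datasets import data

source = data.jobs.url  # pass the URL straight through; no pandas round-trip

alt.Chart(source).mark_line().encode(
    alt.X('year:O'),
    alt.Y('perc:Q', axis=alt.Axis(format='%')),
    color='sex:N'
).transform_filter(
    alt.datum.job == 'Welder'  # filter inside the Vega-Lite spec instead
)
```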
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `altair/examples/boxplot_max_min.py`
Content:
```
1 """
2 Box Plot with Min/Max Whiskers
3 ------------------------------
4 This example shows how to make a basic box plot using US Population data from 2000.
5 """
6 # category: other charts
7 import altair as alt
8 from vega_datasets import data
9
10 source = data.population()
11
12 base = alt.Chart(source)
13
14 # Define aggregate fields
15 lower_box = 'q1(people):Q'
16 lower_whisker = 'min(people):Q'
17 upper_box = 'q3(people):Q'
18 upper_whisker = 'max(people):Q'
19
20 # Compose each layer individually
21 lower_plot = base.mark_rule().encode(
22 y=alt.Y(lower_whisker, title="population"),
23 y2=lower_box,
24 x='age:O'
25 )
26
27 middle_plot = base.mark_bar(size=5.0).encode(
28 y=lower_box,
29 y2=upper_box,
30 x='age:O'
31 )
32
33 upper_plot = base.mark_rule().encode(
34 y=upper_whisker,
35 y2=upper_box,
36 x='age:O'
37 )
38
39 middle_tick = base.mark_tick(
40 color='white',
41 size=5.0
42 ).encode(
43 y='median(people):Q',
44 x='age:O',
45 )
46
47 lower_plot + middle_plot + upper_plot + middle_tick
48
```
Path: `altair/examples/line_percent.py`
Content:
```
1 """
2 Line Chart with Percent axis
3 ----------------------------
4 This example shows how to format the tick labels of the y-axis of a chart as percentages.
5 """
6 # category: line charts
7 import altair as alt
8 import pandas as pd
9 from vega_datasets import data
10
11 source = pd.read_json(data.jobs.url)
12 welders = source[source.job == 'Welder']
13
14 alt.Chart(welders).mark_line().encode(
15 alt.X('year:O'),
16 alt.Y('perc:Q', axis=alt.Axis(format='%')),
17 color='sex:N'
18 )
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/altair/examples/boxplot_max_min.py b/altair/examples/boxplot_max_min.py
--- a/altair/examples/boxplot_max_min.py
+++ b/altair/examples/boxplot_max_min.py
@@ -7,7 +7,7 @@
import altair as alt
from vega_datasets import data
-source = data.population()
+source = data.population.url
base = alt.Chart(source)
diff --git a/altair/examples/line_percent.py b/altair/examples/line_percent.py
--- a/altair/examples/line_percent.py
+++ b/altair/examples/line_percent.py
@@ -5,14 +5,14 @@
"""
# category: line charts
import altair as alt
-import pandas as pd
from vega_datasets import data
-source = pd.read_json(data.jobs.url)
-welders = source[source.job == 'Welder']
+source = data.jobs.url
-alt.Chart(welders).mark_line().encode(
+alt.Chart(source).mark_line().encode(
alt.X('year:O'),
alt.Y('perc:Q', axis=alt.Axis(format='%')),
color='sex:N'
+).transform_filter(
+ alt.datum.job == 'Welder'
)
| {"golden_diff": "diff --git a/altair/examples/boxplot_max_min.py b/altair/examples/boxplot_max_min.py\n--- a/altair/examples/boxplot_max_min.py\n+++ b/altair/examples/boxplot_max_min.py\n@@ -7,7 +7,7 @@\n import altair as alt\n from vega_datasets import data\n \n-source = data.population()\n+source = data.population.url\n \n base = alt.Chart(source)\n \ndiff --git a/altair/examples/line_percent.py b/altair/examples/line_percent.py\n--- a/altair/examples/line_percent.py\n+++ b/altair/examples/line_percent.py\n@@ -5,14 +5,14 @@\n \"\"\"\n # category: line charts\n import altair as alt\n-import pandas as pd\n from vega_datasets import data\n \n-source = pd.read_json(data.jobs.url)\n-welders = source[source.job == 'Welder']\n+source = data.jobs.url\n \n-alt.Chart(welders).mark_line().encode(\n+alt.Chart(source).mark_line().encode(\n alt.X('year:O'),\n alt.Y('perc:Q', axis=alt.Axis(format='%')),\n color='sex:N'\n+).transform_filter(\n+ alt.datum.job == 'Welder'\n )\n", "issue": "line_percent.py does not work offline\nWe need all examples to work offline. Currently ``line_percent.py`` uses ``pd.read_json`` from a URL.\r\n\r\nThe example should probably use a URL plus a filter.\n", "before_files": [{"content": "\"\"\"\nBox Plot with Min/Max Whiskers\n------------------------------\nThis example shows how to make a basic box plot using US Population data from 2000.\n\"\"\"\n# category: other charts\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.population()\n\nbase = alt.Chart(source)\n\n# Define aggregate fields\nlower_box = 'q1(people):Q'\nlower_whisker = 'min(people):Q'\nupper_box = 'q3(people):Q'\nupper_whisker = 'max(people):Q'\n\n# Compose each layer individually\nlower_plot = base.mark_rule().encode(\n y=alt.Y(lower_whisker, title=\"population\"),\n y2=lower_box,\n x='age:O'\n)\n\nmiddle_plot = base.mark_bar(size=5.0).encode(\n y=lower_box,\n y2=upper_box,\n x='age:O'\n)\n\nupper_plot = base.mark_rule().encode(\n y=upper_whisker,\n y2=upper_box,\n x='age:O'\n)\n\nmiddle_tick = base.mark_tick(\n color='white',\n size=5.0\n).encode(\n y='median(people):Q',\n x='age:O',\n)\n\nlower_plot + middle_plot + upper_plot + middle_tick\n", "path": "altair/examples/boxplot_max_min.py"}, {"content": "\"\"\"\nLine Chart with Percent axis\n----------------------------\nThis example shows how to format the tick labels of the y-axis of a chart as percentages.\n\"\"\"\n# category: line charts\nimport altair as alt\nimport pandas as pd\nfrom vega_datasets import data\n\nsource = pd.read_json(data.jobs.url)\nwelders = source[source.job == 'Welder']\n\nalt.Chart(welders).mark_line().encode(\n alt.X('year:O'),\n alt.Y('perc:Q', axis=alt.Axis(format='%')),\n color='sex:N'\n)\n", "path": "altair/examples/line_percent.py"}], "after_files": [{"content": "\"\"\"\nBox Plot with Min/Max Whiskers\n------------------------------\nThis example shows how to make a basic box plot using US Population data from 2000.\n\"\"\"\n# category: other charts\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.population.url\n\nbase = alt.Chart(source)\n\n# Define aggregate fields\nlower_box = 'q1(people):Q'\nlower_whisker = 'min(people):Q'\nupper_box = 'q3(people):Q'\nupper_whisker = 'max(people):Q'\n\n# Compose each layer individually\nlower_plot = base.mark_rule().encode(\n y=alt.Y(lower_whisker, title=\"population\"),\n y2=lower_box,\n x='age:O'\n)\n\nmiddle_plot = base.mark_bar(size=5.0).encode(\n y=lower_box,\n y2=upper_box,\n x='age:O'\n)\n\nupper_plot = base.mark_rule().encode(\n 
y=upper_whisker,\n y2=upper_box,\n x='age:O'\n)\n\nmiddle_tick = base.mark_tick(\n color='white',\n size=5.0\n).encode(\n y='median(people):Q',\n x='age:O',\n)\n\nlower_plot + middle_plot + upper_plot + middle_tick\n", "path": "altair/examples/boxplot_max_min.py"}, {"content": "\"\"\"\nLine Chart with Percent axis\n----------------------------\nThis example shows how to format the tick labels of the y-axis of a chart as percentages.\n\"\"\"\n# category: line charts\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.jobs.url\n\nalt.Chart(source).mark_line().encode(\n alt.X('year:O'),\n alt.Y('perc:Q', axis=alt.Axis(format='%')),\n color='sex:N'\n).transform_filter(\n alt.datum.job == 'Welder'\n)\n", "path": "altair/examples/line_percent.py"}]} | 843 | 270 |
gh_patches_debug_12654 | rasdani/github-patches | git_diff | ocadotechnology__aimmo-499 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Game Creator RC initialised with wrong game API URL
The `REPLACE_ME` change in one of the latest PRs has broken the game at the minikube level in `minikube.py`. The URL is incorrect, so minikube does not work, which prevents testing.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from setuptools import find_packages, setup
3
4 import versioneer
5
6 setup(
7 name='aimmo',
8 cmdclass=versioneer.get_cmdclass(),
9 packages=find_packages(),
10 include_package_data=True,
11 install_requires=[
12 'django >= 1.8.3, < 1.9.0',
13 'django-autoconfig >= 0.3.6, < 1.0.0',
14 'django-forms-bootstrap',
15 'django-js-reverse',
16 'eventlet',
17 'flask',
18 'flask-socketio',
19 'requests',
20 'six',
21 'pykube',
22 'hypothesis',
23 'flask-cors >= 3.0, < 3.1',
24 'psutil >= 5.4, < 5.5',
25 ],
26 tests_require=[
27 'django-setuptest',
28 'httmock',
29 ],
30 test_suite='setuptest.setuptest.SetupTestSuite',
31 version=versioneer.get_version(),
32 zip_safe=False,
33 )
34
```
Path: `aimmo_runner/shell_api.py`
Content:
```
1 import subprocess
2 import sys
3 import os
4 import stat
5 import errno
6 import platform
7 from subprocess import CalledProcessError
8 from urllib import urlretrieve, urlopen
9
10 BASE_DIR = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
11 TEST_BIN = os.path.join(BASE_DIR, 'test-bin')
12 OS = platform.system().lower()
13 FILE_SUFFIX = '.exe' if OS == 'windows' else ''
14 KUBECTL = os.path.join(TEST_BIN, 'kubectl%s' % FILE_SUFFIX)
15 MINIKUBE = os.path.join(TEST_BIN, 'minikube%s' % FILE_SUFFIX)
16 FNULL = open(os.devnull, 'w')
17
18 def log(message):
19 sys.stderr.write(message + "\n")
20
21
22 def run_command(args, capture_output=False):
23 try:
24 if capture_output:
25 return subprocess.check_output(args)
26 else:
27 subprocess.check_call(args)
28 except CalledProcessError as e:
29 log('Command failed with exit status %d: %s' % (e.returncode, ' '.join(args)))
30 raise
31
32
33 def run_command_async(args, capture_output=False):
34 if capture_output is True:
35 p = subprocess.Popen(args, stdout=FNULL, stderr=subprocess.STDOUT)
36 else:
37 p = subprocess.Popen(args)
38 return p
39
40
41 def create_test_bin():
42 try:
43 os.makedirs(TEST_BIN)
44 except OSError as err:
45 if err.errno != errno.EEXIST:
46 raise
47
48
49 def binary_exists(filename):
50 # Check if binary is callable on our path
51 try:
52 run_command([filename], True)
53 return True
54 except OSError:
55 return False
56
57
58 def download_exec(url, dest):
59 dest = urlretrieve(url, dest)[0]
60 make_exec(dest)
61
62
63 def make_exec(file):
64 current_stat = os.stat(file)
65 os.chmod(file, current_stat.st_mode | stat.S_IEXEC)
66
67
68 def get_latest_github_version(repo):
69 result = urlopen('https://github.com/%s/releases/latest' % repo)
70 return result.geturl().split('/')[-1]
71
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/aimmo_runner/shell_api.py b/aimmo_runner/shell_api.py
--- a/aimmo_runner/shell_api.py
+++ b/aimmo_runner/shell_api.py
@@ -15,6 +15,7 @@
MINIKUBE = os.path.join(TEST_BIN, 'minikube%s' % FILE_SUFFIX)
FNULL = open(os.devnull, 'w')
+
def log(message):
sys.stderr.write(message + "\n")
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,6 +26,10 @@
tests_require=[
'django-setuptest',
'httmock',
+ 'mock == 2.0.0',
+ 'docker == 2.7.0',
+ 'kubernetes == 4.0.0',
+ 'PyYAML == 3.12',
],
test_suite='setuptest.setuptest.SetupTestSuite',
version=versioneer.get_version(),
| {"golden_diff": "diff --git a/aimmo_runner/shell_api.py b/aimmo_runner/shell_api.py\n--- a/aimmo_runner/shell_api.py\n+++ b/aimmo_runner/shell_api.py\n@@ -15,6 +15,7 @@\n MINIKUBE = os.path.join(TEST_BIN, 'minikube%s' % FILE_SUFFIX)\n FNULL = open(os.devnull, 'w')\n \n+\n def log(message):\n sys.stderr.write(message + \"\\n\")\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,6 +26,10 @@\n tests_require=[\n 'django-setuptest',\n 'httmock',\n+ 'mock == 2.0.0',\n+ 'docker == 2.7.0',\n+ 'kubernetes == 4.0.0',\n+ 'PyYAML == 3.12',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n version=versioneer.get_version(),\n", "issue": "Game Creator RC initialised with wrong game API URL\nThe `REPLACE_ME` change in one of the latest PR's has broken the game on minikube level in `minikube.py`. The URL is incorrect so minikube does not work and prohibits testing. \r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nsetup(\n name='aimmo',\n cmdclass=versioneer.get_cmdclass(),\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n 'django >= 1.8.3, < 1.9.0',\n 'django-autoconfig >= 0.3.6, < 1.0.0',\n 'django-forms-bootstrap',\n 'django-js-reverse',\n 'eventlet',\n 'flask',\n 'flask-socketio',\n 'requests',\n 'six',\n 'pykube',\n 'hypothesis',\n 'flask-cors >= 3.0, < 3.1',\n 'psutil >= 5.4, < 5.5',\n ],\n tests_require=[\n 'django-setuptest',\n 'httmock',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n version=versioneer.get_version(),\n zip_safe=False,\n)\n", "path": "setup.py"}, {"content": "import subprocess\nimport sys\nimport os\nimport stat\nimport errno\nimport platform\nfrom subprocess import CalledProcessError\nfrom urllib import urlretrieve, urlopen\n\nBASE_DIR = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))\nTEST_BIN = os.path.join(BASE_DIR, 'test-bin')\nOS = platform.system().lower()\nFILE_SUFFIX = '.exe' if OS == 'windows' else ''\nKUBECTL = os.path.join(TEST_BIN, 'kubectl%s' % FILE_SUFFIX)\nMINIKUBE = os.path.join(TEST_BIN, 'minikube%s' % FILE_SUFFIX)\nFNULL = open(os.devnull, 'w')\n\ndef log(message):\n sys.stderr.write(message + \"\\n\")\n\n\ndef run_command(args, capture_output=False):\n try:\n if capture_output:\n return subprocess.check_output(args)\n else:\n subprocess.check_call(args)\n except CalledProcessError as e:\n log('Command failed with exit status %d: %s' % (e.returncode, ' '.join(args)))\n raise\n\n\ndef run_command_async(args, capture_output=False):\n if capture_output is True:\n p = subprocess.Popen(args, stdout=FNULL, stderr=subprocess.STDOUT)\n else:\n p = subprocess.Popen(args)\n return p\n\n\ndef create_test_bin():\n try:\n os.makedirs(TEST_BIN)\n except OSError as err:\n if err.errno != errno.EEXIST:\n raise\n\n\ndef binary_exists(filename):\n # Check if binary is callable on our path\n try:\n run_command([filename], True)\n return True\n except OSError:\n return False\n\n\ndef download_exec(url, dest):\n dest = urlretrieve(url, dest)[0]\n make_exec(dest)\n\n\ndef make_exec(file):\n current_stat = os.stat(file)\n os.chmod(file, current_stat.st_mode | stat.S_IEXEC)\n\n\ndef get_latest_github_version(repo):\n result = urlopen('https://github.com/%s/releases/latest' % repo)\n return result.geturl().split('/')[-1]\n\n", "path": "aimmo_runner/shell_api.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\nsetup(\n name='aimmo',\n 
cmdclass=versioneer.get_cmdclass(),\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n 'django >= 1.8.3, < 1.9.0',\n 'django-autoconfig >= 0.3.6, < 1.0.0',\n 'django-forms-bootstrap',\n 'django-js-reverse',\n 'eventlet',\n 'flask',\n 'flask-socketio',\n 'requests',\n 'six',\n 'pykube',\n 'hypothesis',\n 'flask-cors >= 3.0, < 3.1',\n 'psutil >= 5.4, < 5.5',\n ],\n tests_require=[\n 'django-setuptest',\n 'httmock',\n 'mock == 2.0.0',\n 'docker == 2.7.0',\n 'kubernetes == 4.0.0',\n 'PyYAML == 3.12',\n ],\n test_suite='setuptest.setuptest.SetupTestSuite',\n version=versioneer.get_version(),\n zip_safe=False,\n)\n", "path": "setup.py"}, {"content": "import subprocess\nimport sys\nimport os\nimport stat\nimport errno\nimport platform\nfrom subprocess import CalledProcessError\nfrom urllib import urlretrieve, urlopen\n\nBASE_DIR = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))\nTEST_BIN = os.path.join(BASE_DIR, 'test-bin')\nOS = platform.system().lower()\nFILE_SUFFIX = '.exe' if OS == 'windows' else ''\nKUBECTL = os.path.join(TEST_BIN, 'kubectl%s' % FILE_SUFFIX)\nMINIKUBE = os.path.join(TEST_BIN, 'minikube%s' % FILE_SUFFIX)\nFNULL = open(os.devnull, 'w')\n\n\ndef log(message):\n sys.stderr.write(message + \"\\n\")\n\n\ndef run_command(args, capture_output=False):\n try:\n if capture_output:\n return subprocess.check_output(args)\n else:\n subprocess.check_call(args)\n except CalledProcessError as e:\n log('Command failed with exit status %d: %s' % (e.returncode, ' '.join(args)))\n raise\n\n\ndef run_command_async(args, capture_output=False):\n if capture_output is True:\n p = subprocess.Popen(args, stdout=FNULL, stderr=subprocess.STDOUT)\n else:\n p = subprocess.Popen(args)\n return p\n\n\ndef create_test_bin():\n try:\n os.makedirs(TEST_BIN)\n except OSError as err:\n if err.errno != errno.EEXIST:\n raise\n\n\ndef binary_exists(filename):\n # Check if binary is callable on our path\n try:\n run_command([filename], True)\n return True\n except OSError:\n return False\n\n\ndef download_exec(url, dest):\n dest = urlretrieve(url, dest)[0]\n make_exec(dest)\n\n\ndef make_exec(file):\n current_stat = os.stat(file)\n os.chmod(file, current_stat.st_mode | stat.S_IEXEC)\n\n\ndef get_latest_github_version(repo):\n result = urlopen('https://github.com/%s/releases/latest' % repo)\n return result.geturl().split('/')[-1]\n\n", "path": "aimmo_runner/shell_api.py"}]} | 1,197 | 227 |
gh_patches_debug_2784 | rasdani/github-patches | git_diff | archlinux__archinstall-1954 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[master] awesome (desktops in general?) don't install packages?
It appears that when choosing awesome, `install()` is called: https://github.com/archlinux/archinstall/blob/7326d51161bf6fd7f1c683cf1d7ce09338efe4b7/archinstall/default_profiles/desktops/awesome.py#L23-L24
And its superclass is `XorgProfile`: https://github.com/archlinux/archinstall/blob/7326d51161bf6fd7f1c683cf1d7ce09338efe4b7/archinstall/default_profiles/xorg.py#L1-L21
That class does not define an `install()` method, so the call falls through to `Profile.install()`, which contains: https://github.com/archlinux/archinstall/blob/7326d51161bf6fd7f1c683cf1d7ce09338efe4b7/archinstall/default_profiles/profile.py#L101-L104
Which is just a placeholder?

I haven't run through all the profiles yet, but have we overlooked something here?
What happened to all the packages per profile when we moved them to the dataclass structure? :)
I obviously missed something in a PR somewhere, hehe.
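If that's the problem, the smallest fix I can see is adding the window manager itself to the profile's package list — a sketch of the relevant part of `awesome.py` with only that change, untested:

```python
from typing import List

from archinstall.default_profiles.profile import ProfileType
from archinstall.default_profiles.xorg import XorgProfile


class AwesomeProfile(XorgProfile):
    def __init__(self):
        super().__init__('Awesome', ProfileType.WindowMgr, description='')

    @property
    def packages(self) -> List[str]:
        return [
            'awesome',    # the window manager itself was missing here
            'alacritty',
        ]
```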
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/default_profiles/desktops/awesome.py`
Content:
```
1 from typing import List, Optional, Any, TYPE_CHECKING
2
3 from archinstall.default_profiles.profile import ProfileType
4 from archinstall.default_profiles.xorg import XorgProfile
5
6 if TYPE_CHECKING:
7 from archinstall.lib.installer import Installer
8 _: Any
9
10
11 class AwesomeProfile(XorgProfile):
12 def __init__(self):
13 super().__init__('Awesome', ProfileType.WindowMgr, description='')
14
15 @property
16 def packages(self) -> List[str]:
17 return ['alacritty']
18
19 def preview_text(self) -> Optional[str]:
20 text = str(_('Environment type: {}')).format(self.profile_type.value)
21 return text + '\n' + self.packages_text()
22
23 def install(self, install_session: 'Installer'):
24 super().install(install_session)
25
26 # TODO: Copy a full configuration to ~/.config/awesome/rc.lua instead.
27 with open(f"{install_session.target}/etc/xdg/awesome/rc.lua", 'r') as fh:
28 awesome_lua = fh.read()
29
30 # Replace xterm with alacritty for a smoother experience.
31 awesome_lua = awesome_lua.replace('"xterm"', '"alacritty"')
32
33 with open(f"{install_session.target}/etc/xdg/awesome/rc.lua", 'w') as fh:
34 fh.write(awesome_lua)
35
36 # TODO: Configure the right-click-menu to contain the above packages that were installed. (as a user config)
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/archinstall/default_profiles/desktops/awesome.py b/archinstall/default_profiles/desktops/awesome.py
--- a/archinstall/default_profiles/desktops/awesome.py
+++ b/archinstall/default_profiles/desktops/awesome.py
@@ -14,7 +14,10 @@
@property
def packages(self) -> List[str]:
- return ['alacritty']
+ return [
+ 'awesome',
+ 'alacritty'
+ ]
def preview_text(self) -> Optional[str]:
text = str(_('Environment type: {}')).format(self.profile_type.value)
| {"golden_diff": "diff --git a/archinstall/default_profiles/desktops/awesome.py b/archinstall/default_profiles/desktops/awesome.py\n--- a/archinstall/default_profiles/desktops/awesome.py\n+++ b/archinstall/default_profiles/desktops/awesome.py\n@@ -14,7 +14,10 @@\n \n \t@property\n \tdef packages(self) -> List[str]:\n-\t\treturn ['alacritty']\n+\t\treturn [\n+\t\t\t'awesome',\n+\t\t\t'alacritty'\n+\t\t]\n \n \tdef preview_text(self) -> Optional[str]:\n \t\ttext = str(_('Environment type: {}')).format(self.profile_type.value)\n", "issue": "[master] awesome (desktops in general?) don't install packages?\nIt appears when choosing awesome, install is called: https://github.com/archlinux/archinstall/blob/7326d51161bf6fd7f1c683cf1d7ce09338efe4b7/archinstall/default_profiles/desktops/awesome.py#L23-L24\r\n\r\nAnd super being `XorgProfile`: https://github.com/archlinux/archinstall/blob/7326d51161bf6fd7f1c683cf1d7ce09338efe4b7/archinstall/default_profiles/xorg.py#L1-L21\r\n\r\nThat class does not have an install so it calls `Profile.install()` which contains: https://github.com/archlinux/archinstall/blob/7326d51161bf6fd7f1c683cf1d7ce09338efe4b7/archinstall/default_profiles/profile.py#L101-L104\r\nWhich is just a placeholder?\r\n\r\n\r\n\r\nI haven't ran through all the profiles yet, but have we overlooked something here?\r\nWhat happened to all the packages per profile when we moved them to the dataclass structure? :)\r\n\r\nI obviously missed something in a PR some where hehe\n", "before_files": [{"content": "from typing import List, Optional, Any, TYPE_CHECKING\n\nfrom archinstall.default_profiles.profile import ProfileType\nfrom archinstall.default_profiles.xorg import XorgProfile\n\nif TYPE_CHECKING:\n\tfrom archinstall.lib.installer import Installer\n\t_: Any\n\n\nclass AwesomeProfile(XorgProfile):\n\tdef __init__(self):\n\t\tsuper().__init__('Awesome', ProfileType.WindowMgr, description='')\n\n\t@property\n\tdef packages(self) -> List[str]:\n\t\treturn ['alacritty']\n\n\tdef preview_text(self) -> Optional[str]:\n\t\ttext = str(_('Environment type: {}')).format(self.profile_type.value)\n\t\treturn text + '\\n' + self.packages_text()\n\n\tdef install(self, install_session: 'Installer'):\n\t\tsuper().install(install_session)\n\n\t\t# TODO: Copy a full configuration to ~/.config/awesome/rc.lua instead.\n\t\twith open(f\"{install_session.target}/etc/xdg/awesome/rc.lua\", 'r') as fh:\n\t\t\tawesome_lua = fh.read()\n\n\t\t# Replace xterm with alacritty for a smoother experience.\n\t\tawesome_lua = awesome_lua.replace('\"xterm\"', '\"alacritty\"')\n\n\t\twith open(f\"{install_session.target}/etc/xdg/awesome/rc.lua\", 'w') as fh:\n\t\t\tfh.write(awesome_lua)\n\n\t\t# TODO: Configure the right-click-menu to contain the above packages that were installed. 
(as a user config)\n", "path": "archinstall/default_profiles/desktops/awesome.py"}], "after_files": [{"content": "from typing import List, Optional, Any, TYPE_CHECKING\n\nfrom archinstall.default_profiles.profile import ProfileType\nfrom archinstall.default_profiles.xorg import XorgProfile\n\nif TYPE_CHECKING:\n\tfrom archinstall.lib.installer import Installer\n\t_: Any\n\n\nclass AwesomeProfile(XorgProfile):\n\tdef __init__(self):\n\t\tsuper().__init__('Awesome', ProfileType.WindowMgr, description='')\n\n\t@property\n\tdef packages(self) -> List[str]:\n\t\treturn [\n\t\t\t'awesome',\n\t\t\t'alacritty'\n\t\t]\n\n\tdef preview_text(self) -> Optional[str]:\n\t\ttext = str(_('Environment type: {}')).format(self.profile_type.value)\n\t\treturn text + '\\n' + self.packages_text()\n\n\tdef install(self, install_session: 'Installer'):\n\t\tsuper().install(install_session)\n\n\t\t# TODO: Copy a full configuration to ~/.config/awesome/rc.lua instead.\n\t\twith open(f\"{install_session.target}/etc/xdg/awesome/rc.lua\", 'r') as fh:\n\t\t\tawesome_lua = fh.read()\n\n\t\t# Replace xterm with alacritty for a smoother experience.\n\t\tawesome_lua = awesome_lua.replace('\"xterm\"', '\"alacritty\"')\n\n\t\twith open(f\"{install_session.target}/etc/xdg/awesome/rc.lua\", 'w') as fh:\n\t\t\tfh.write(awesome_lua)\n\n\t\t# TODO: Configure the right-click-menu to contain the above packages that were installed. (as a user config)\n", "path": "archinstall/default_profiles/desktops/awesome.py"}]} | 984 | 133 |
gh_patches_debug_24635 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-1438 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plone-upgrade to 5.0.3 shows plain text as result
This is caused by https://github.com/plone/plone.app.upgrade/pull/67 by @vangheem, though this change looks fine to me.
With the above change, when running `@@plone-upgrade`, even with dry-run selected, the result page is shown as text: you see plain HTML. Very strange. Reported here: https://community.plone.org/t/plone-5-0-3-soft-released/1699/4
When I empty the registry.xml, keeping only the main `registry` tags for safety, it all works fine. Keeping one of the two changed records, it again shows as text.
To check it:
- Use current coredev 5.0
- Create a Plone Site.
- Simulate a Plone 5.0.2 site: in portal_setup, Upgrades, select Products.CMFPlone:plone, and run the to502 upgrade profile.
- Go to @@plone-upgrade, optionally select dry-run, and run the upgrade.
Result: it will show as plain text.
BTW, afterwards everything is fine: the migration has succeeded, and all pages appear to render correctly.
Any idea?
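Update: one guess is that `combine_bundles` sets the response's Content-Type to `application/javascript` as a side effect when the import step runs, and nothing resets it. A minimal sketch of a guard in the bundles import step that restores the original header (untested; `getRequest` assumed available from `zope.globalrequest`):

```python
from plone.registry.interfaces import IRegistry
from zope.component import queryUtility
from zope.globalrequest import getRequest

from ..browser.combine import combine_bundles


def combine(context):
    registry = queryUtility(IRegistry)
    if registry is None:
        return

    body = context.readDataFile('registry.xml')
    if body and "IBundleRegistry" in body:
        site = context.getSite()
        request = getattr(site, 'REQUEST', getRequest())
        orig_header = None
        if request is not None:  # the request can be missing in tests
            orig_header = request.response.getHeader('Content-Type')
        combine_bundles(site)
        if request is not None:
            new_header = request.response.getHeader('Content-Type')
            if new_header != orig_header:
                # Setting None would write the string 'None'; fall back instead.
                request.response.setHeader('Content-Type', orig_header or 'text/html')
```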
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Products/CMFPlone/resources/exportimport/bundles.py`
Content:
```
1 from plone.registry.interfaces import IRegistry
2 from zope.component import queryUtility
3
4 from ..browser.combine import combine_bundles
5
6
7 def combine(context):
8
9 logger = context.getLogger('bundles')
10 registry = queryUtility(IRegistry)
11
12 if registry is None:
13 logger.info("Cannot find registry")
14 return
15
16 body = context.readDataFile('registry.xml')
17 if body and "IBundleRegistry" in body:
18 site = context.getSite()
19 combine_bundles(site)
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Products/CMFPlone/resources/exportimport/bundles.py b/Products/CMFPlone/resources/exportimport/bundles.py
--- a/Products/CMFPlone/resources/exportimport/bundles.py
+++ b/Products/CMFPlone/resources/exportimport/bundles.py
@@ -1,5 +1,6 @@
from plone.registry.interfaces import IRegistry
from zope.component import queryUtility
+from zope.globalrequest import getRequest
from ..browser.combine import combine_bundles
@@ -16,4 +17,20 @@
body = context.readDataFile('registry.xml')
if body and "IBundleRegistry" in body:
site = context.getSite()
+ # Calling combine_bundles will have as side effect that the
+ # Content-Type header of the response is set to application/javascript,
+ # which we do not want. So we reset it to the original at the end.
+ site = context.getSite()
+ request = getattr(site, 'REQUEST', getRequest())
+ if request is not None:
+ # Easily happens in tests.
+ orig_header = request.response.getHeader('Content-Type')
combine_bundles(site)
+ if request is not None:
+ new_header = request.response.getHeader('Content-Type')
+ if new_header != orig_header:
+ if orig_header is None:
+ # Setting it to None would result in the string 'None'.
+ # So pick a saner one.
+ orig_header = 'text/html'
+ request.response.setHeader('Content-Type', orig_header)
| {"golden_diff": "diff --git a/Products/CMFPlone/resources/exportimport/bundles.py b/Products/CMFPlone/resources/exportimport/bundles.py\n--- a/Products/CMFPlone/resources/exportimport/bundles.py\n+++ b/Products/CMFPlone/resources/exportimport/bundles.py\n@@ -1,5 +1,6 @@\n from plone.registry.interfaces import IRegistry\n from zope.component import queryUtility\n+from zope.globalrequest import getRequest\n \n from ..browser.combine import combine_bundles\n \n@@ -16,4 +17,20 @@\n body = context.readDataFile('registry.xml')\n if body and \"IBundleRegistry\" in body:\n site = context.getSite()\n+ # Calling combine_bundles will have as side effect that the\n+ # Content-Type header of the response is set to application/javascript,\n+ # which we do not want. So we reset it to the original at the end.\n+ site = context.getSite()\n+ request = getattr(site, 'REQUEST', getRequest())\n+ if request is not None:\n+ # Easily happens in tests.\n+ orig_header = request.response.getHeader('Content-Type')\n combine_bundles(site)\n+ if request is not None:\n+ new_header = request.response.getHeader('Content-Type')\n+ if new_header != orig_header:\n+ if orig_header is None:\n+ # Setting it to None would result in the string 'None'.\n+ # So pick a saner one.\n+ orig_header = 'text/html'\n+ request.response.setHeader('Content-Type', orig_header)\n", "issue": "plone-upgrade to 5.0.3 shows plain text as result\nThis is caused by https://github.com/plone/plone.app.upgrade/pull/67 by @vangheem, though this change looks fine to me.\n\nWith the above change, when running `@@plone-upgrade`, even with dry-run selected, the result page is shown as text: you see plain html. Very strange. Reported here: https://community.plone.org/t/plone-5-0-3-soft-released/1699/4\nWhen I empty the registry.xml, keeping only the main `registry` tags for safety, it all works fine. 
Keeping one of the two changed records, it again shows as text.\n\nTo check it:\n- Use current coredev 5.0\n- Create a Plone Site.\n- Simulate a Plone 5.0.2 site: in portal_setup, Upgrades, select Products.CMFPlone:plone, and run the to502 upgrade profile.\n- Go to @@plone-upgrade, optionally select dry-run, and run the upgrade.\n\nResult: it will show as plain text.\nBTW, afterwards, all is fine: the migration has succeeded and it looks like all pages show up fine.\n\nAny idea?\n\n", "before_files": [{"content": "from plone.registry.interfaces import IRegistry\nfrom zope.component import queryUtility\n\nfrom ..browser.combine import combine_bundles\n\n\ndef combine(context):\n\n logger = context.getLogger('bundles')\n registry = queryUtility(IRegistry)\n\n if registry is None:\n logger.info(\"Cannot find registry\")\n return\n\n body = context.readDataFile('registry.xml')\n if body and \"IBundleRegistry\" in body:\n site = context.getSite()\n combine_bundles(site)\n", "path": "Products/CMFPlone/resources/exportimport/bundles.py"}], "after_files": [{"content": "from plone.registry.interfaces import IRegistry\nfrom zope.component import queryUtility\nfrom zope.globalrequest import getRequest\n\nfrom ..browser.combine import combine_bundles\n\n\ndef combine(context):\n\n logger = context.getLogger('bundles')\n registry = queryUtility(IRegistry)\n\n if registry is None:\n logger.info(\"Cannot find registry\")\n return\n\n body = context.readDataFile('registry.xml')\n if body and \"IBundleRegistry\" in body:\n site = context.getSite()\n # Calling combine_bundles will have as side effect that the\n # Content-Type header of the response is set to application/javascript,\n # which we do not want. So we reset it to the original at the end.\n site = context.getSite()\n request = getattr(site, 'REQUEST', getRequest())\n if request is not None:\n # Easily happens in tests.\n orig_header = request.response.getHeader('Content-Type')\n combine_bundles(site)\n if request is not None:\n new_header = request.response.getHeader('Content-Type')\n if new_header != orig_header:\n if orig_header is None:\n # Setting it to None would result in the string 'None'.\n # So pick a saner one.\n orig_header = 'text/html'\n request.response.setHeader('Content-Type', orig_header)\n", "path": "Products/CMFPlone/resources/exportimport/bundles.py"}]} | 672 | 340 |
gh_patches_debug_9715 | rasdani/github-patches | git_diff | OCA__server-tools-74 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[7.0] [base_optional_quick_create] AttributeError: 'NoneType' object has no attribute 'name_create'
The error occurs at startup, before a migration runs, if a model has been removed from the code base.
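Presumably `self.pool.get(model_name)` returns `None` once the model's module is removed, and the hook then dereferences it. A hedged sketch of the guard in `_register_hook` (changed method only, untested):

```python
def _register_hook(self, cr, ids=None):
    if ids is None:
        ids = self.search(cr, SUPERUSER_ID, [])
    for model in self.browse(cr, SUPERUSER_ID, ids):
        if model.avoid_quick_create:
            model_name = model.model
            model_obj = self.pool.get(model_name)
            # model_obj is None when the model no longer exists in the code base
            if model_obj and not hasattr(model_obj, 'check_quick_create'):
                model_obj.name_create = self._wrap_name_create(
                    model_obj.name_create, model_name)
                model_obj.check_quick_create = True
    return True
```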
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `base_optional_quick_create/model.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 ##############################################################################
3 #
4 # Copyright (C) 2013 Agile Business Group sagl (<http://www.agilebg.com>)
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU Affero General Public License as published
8 # by the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU Affero General Public License for more details.
15 #
16 # You should have received a copy of the GNU Affero General Public License
17 # along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19 ##############################################################################
20
21 from openerp.osv import orm, fields
22 from openerp import SUPERUSER_ID
23 from openerp.tools.translate import _
24
25
26 class ir_model(orm.Model):
27
28 _inherit = 'ir.model'
29
30 _columns = {
31 'avoid_quick_create': fields.boolean('Avoid quick create'),
32 }
33
34 def _wrap_name_create(self, old_create, model):
35 def wrapper(cr, uid, name, context=None):
36 raise orm.except_orm(_('Error'),
37 _("Can't create quickly. "
38 "Opening create form"))
39 return wrapper
40
41 def _register_hook(self, cr, ids=None):
42 if ids is None:
43 ids = self.search(cr, SUPERUSER_ID, [])
44 for model in self.browse(cr, SUPERUSER_ID, ids):
45 if model.avoid_quick_create:
46 model_name = model.model
47 model_obj = self.pool.get(model_name)
48 if not hasattr(model_obj, 'check_quick_create'):
49 model_obj.name_create = self._wrap_name_create(
50 model_obj.name_create,
51 model_name)
52 model_obj.check_quick_create = True
53 return True
54
55 def create(self, cr, uid, vals, context=None):
56 res_id = super(ir_model, self).create(cr, uid, vals, context=context)
57 self._register_hook(cr, [res_id])
58 return res_id
59
60 def write(self, cr, uid, ids, vals, context=None):
61 if isinstance(ids, (int, long)):
62 ids = [ids]
63 super(ir_model, self).write(cr, uid, ids, vals, context=context)
64 self._register_hook(cr, ids)
65 return True
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/base_optional_quick_create/model.py b/base_optional_quick_create/model.py
--- a/base_optional_quick_create/model.py
+++ b/base_optional_quick_create/model.py
@@ -45,7 +45,7 @@
if model.avoid_quick_create:
model_name = model.model
model_obj = self.pool.get(model_name)
- if not hasattr(model_obj, 'check_quick_create'):
+ if model_obj and not hasattr(model_obj, 'check_quick_create'):
model_obj.name_create = self._wrap_name_create(
model_obj.name_create,
model_name)
| {"golden_diff": "diff --git a/base_optional_quick_create/model.py b/base_optional_quick_create/model.py\n--- a/base_optional_quick_create/model.py\n+++ b/base_optional_quick_create/model.py\n@@ -45,7 +45,7 @@\n if model.avoid_quick_create:\n model_name = model.model\n model_obj = self.pool.get(model_name)\n- if not hasattr(model_obj, 'check_quick_create'):\n+ if model_obj and not hasattr(model_obj, 'check_quick_create'):\n model_obj.name_create = self._wrap_name_create(\n model_obj.name_create,\n model_name)\n", "issue": "[7.0] [base_optional_quick_create] AttributeError: 'NoneType' object has no attribute 'name_create'\nError at starting before a migration if a model has been removed\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n##############################################################################\n#\n# Copyright (C) 2013 Agile Business Group sagl (<http://www.agilebg.com>)\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published\n# by the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n##############################################################################\n\nfrom openerp.osv import orm, fields\nfrom openerp import SUPERUSER_ID\nfrom openerp.tools.translate import _\n\n\nclass ir_model(orm.Model):\n\n _inherit = 'ir.model'\n\n _columns = {\n 'avoid_quick_create': fields.boolean('Avoid quick create'),\n }\n\n def _wrap_name_create(self, old_create, model):\n def wrapper(cr, uid, name, context=None):\n raise orm.except_orm(_('Error'),\n _(\"Can't create quickly. 
\"\n \"Opening create form\"))\n return wrapper\n\n def _register_hook(self, cr, ids=None):\n if ids is None:\n ids = self.search(cr, SUPERUSER_ID, [])\n for model in self.browse(cr, SUPERUSER_ID, ids):\n if model.avoid_quick_create:\n model_name = model.model\n model_obj = self.pool.get(model_name)\n if not hasattr(model_obj, 'check_quick_create'):\n model_obj.name_create = self._wrap_name_create(\n model_obj.name_create,\n model_name)\n model_obj.check_quick_create = True\n return True\n\n def create(self, cr, uid, vals, context=None):\n res_id = super(ir_model, self).create(cr, uid, vals, context=context)\n self._register_hook(cr, [res_id])\n return res_id\n\n def write(self, cr, uid, ids, vals, context=None):\n if isinstance(ids, (int, long)):\n ids = [ids]\n super(ir_model, self).write(cr, uid, ids, vals, context=context)\n self._register_hook(cr, ids)\n return True\n", "path": "base_optional_quick_create/model.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n##############################################################################\n#\n# Copyright (C) 2013 Agile Business Group sagl (<http://www.agilebg.com>)\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published\n# by the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n##############################################################################\n\nfrom openerp.osv import orm, fields\nfrom openerp import SUPERUSER_ID\nfrom openerp.tools.translate import _\n\n\nclass ir_model(orm.Model):\n\n _inherit = 'ir.model'\n\n _columns = {\n 'avoid_quick_create': fields.boolean('Avoid quick create'),\n }\n\n def _wrap_name_create(self, old_create, model):\n def wrapper(cr, uid, name, context=None):\n raise orm.except_orm(_('Error'),\n _(\"Can't create quickly. \"\n \"Opening create form\"))\n return wrapper\n\n def _register_hook(self, cr, ids=None):\n if ids is None:\n ids = self.search(cr, SUPERUSER_ID, [])\n for model in self.browse(cr, SUPERUSER_ID, ids):\n if model.avoid_quick_create:\n model_name = model.model\n model_obj = self.pool.get(model_name)\n if model_obj and not hasattr(model_obj, 'check_quick_create'):\n model_obj.name_create = self._wrap_name_create(\n model_obj.name_create,\n model_name)\n model_obj.check_quick_create = True\n return True\n\n def create(self, cr, uid, vals, context=None):\n res_id = super(ir_model, self).create(cr, uid, vals, context=context)\n self._register_hook(cr, [res_id])\n return res_id\n\n def write(self, cr, uid, ids, vals, context=None):\n if isinstance(ids, (int, long)):\n ids = [ids]\n super(ir_model, self).write(cr, uid, ids, vals, context=context)\n self._register_hook(cr, ids)\n return True\n", "path": "base_optional_quick_create/model.py"}]} | 967 | 124 |
gh_patches_debug_27735 | rasdani/github-patches | git_diff | e-valuation__EvaP-1263 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove ViewTest where possible
Right now we have a class `ViewTest`, of which there is one subclass for each view that we have.
For views that we have tested properly, it provides no additional value, and I propose to replace it with the original `WebTest`.
Originally I proposed to remove it altogether and copy-paste its test into all the test cases that wouldn't have any valuable test otherwise. @janno42 convinced me to leave it there and rename it to `WebTestWith200Check` instead.
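For reference, a hypothetical shape of the renamed class — the attribute names (`url`, `test_users`) and the `django_webtest` base are assumptions for illustration, not necessarily EvaP's actual code:

```python
from django_webtest import WebTest


class WebTestWith200Check(WebTest):
    """Only shared value: assert the view answers with HTTP 200 (assumed shape)."""
    url = '/'        # overridden by subclasses
    test_users = []  # users to try the URL with

    def test_check_response_code_200(self):
        for user in self.test_users:
            self.app.get(self.url, user=user, status=200)
```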
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/evaluation/migrations/0002_initial_data.py`
Content:
```
1 from django.db import migrations
2 from django.contrib.auth.models import Group
3
4
5 def insert_emailtemplates(apps, _schema_editor):
6 emailtemplates = [
7 ("Lecturer Review Notice", "[EvaP] New Course ready for approval"),
8 ("Student Reminder", "[EvaP] Evaluation period is ending"),
9 ("Publishing Notice", "[EvaP] A course has been published"),
10 ("Login Key Created", "[EvaP] A login key was created"),
11 ]
12
13 EmailTemplate = apps.get_model("evaluation", "EmailTemplate")
14
15 for name, subject in emailtemplates:
16 if not EmailTemplate.objects.filter(name=name).exists():
17 EmailTemplate.objects.create(name=name, subject=subject, body="")
18
19 Group.objects.create(name="Staff")
20
21
22 class Migration(migrations.Migration):
23
24 dependencies = [
25 ('evaluation', '0001_initial'),
26 ]
27
28 operations = [
29 migrations.RunPython(insert_emailtemplates),
30 ]
31
```
Path: `evap/grades/migrations/0002_initial_data.py`
Content:
```
1 from django.db import migrations
2 from django.contrib.auth.models import Group
3
4
5 def add_group(_apps, _schema_editor):
6 Group.objects.create(name="Grade publisher")
7
8
9 class Migration(migrations.Migration):
10
11 dependencies = [
12 ('grades', '0001_initial'),
13 ]
14
15 operations = [
16 migrations.RunPython(add_group),
17 ]
18
```
Path: `evap/evaluation/migrations/0055_reviewer_group.py`
Content:
```
1 from django.contrib.auth.models import Group
2 from django.db import migrations
3
4
5 def add_group(_apps, _schema_editor):
6 Group.objects.create(name="Reviewer")
7
8
9 def delete_group(_apps, _schema_editor):
10 Group.objects.get(name="Reviewer").delete()
11
12
13 class Migration(migrations.Migration):
14
15 dependencies = [
16 ('evaluation', '0054_userprofile_language'),
17 ]
18
19 operations = [
20 migrations.RunPython(add_group, reverse_code=delete_group),
21 ]
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/evap/evaluation/migrations/0002_initial_data.py b/evap/evaluation/migrations/0002_initial_data.py
--- a/evap/evaluation/migrations/0002_initial_data.py
+++ b/evap/evaluation/migrations/0002_initial_data.py
@@ -1,5 +1,4 @@
from django.db import migrations
-from django.contrib.auth.models import Group
def insert_emailtemplates(apps, _schema_editor):
@@ -16,6 +15,7 @@
if not EmailTemplate.objects.filter(name=name).exists():
EmailTemplate.objects.create(name=name, subject=subject, body="")
+ Group = apps.get_model("auth", "Group")
Group.objects.create(name="Staff")
diff --git a/evap/evaluation/migrations/0055_reviewer_group.py b/evap/evaluation/migrations/0055_reviewer_group.py
--- a/evap/evaluation/migrations/0055_reviewer_group.py
+++ b/evap/evaluation/migrations/0055_reviewer_group.py
@@ -1,12 +1,13 @@
-from django.contrib.auth.models import Group
from django.db import migrations
-def add_group(_apps, _schema_editor):
+def add_group(apps, _schema_editor):
+ Group = apps.get_model("auth", "Group")
Group.objects.create(name="Reviewer")
-def delete_group(_apps, _schema_editor):
+def delete_group(apps, _schema_editor):
+ Group = apps.get_model("auth", "Group")
Group.objects.get(name="Reviewer").delete()
diff --git a/evap/grades/migrations/0002_initial_data.py b/evap/grades/migrations/0002_initial_data.py
--- a/evap/grades/migrations/0002_initial_data.py
+++ b/evap/grades/migrations/0002_initial_data.py
@@ -1,8 +1,8 @@
from django.db import migrations
-from django.contrib.auth.models import Group
-def add_group(_apps, _schema_editor):
+def add_group(apps, _schema_editor):
+ Group = apps.get_model("auth", "Group")
Group.objects.create(name="Grade publisher")
| {"golden_diff": "diff --git a/evap/evaluation/migrations/0002_initial_data.py b/evap/evaluation/migrations/0002_initial_data.py\n--- a/evap/evaluation/migrations/0002_initial_data.py\n+++ b/evap/evaluation/migrations/0002_initial_data.py\n@@ -1,5 +1,4 @@\n from django.db import migrations\n-from django.contrib.auth.models import Group\n \n \n def insert_emailtemplates(apps, _schema_editor):\n@@ -16,6 +15,7 @@\n if not EmailTemplate.objects.filter(name=name).exists():\n EmailTemplate.objects.create(name=name, subject=subject, body=\"\")\n \n+ Group = apps.get_model(\"auth\", \"Group\")\n Group.objects.create(name=\"Staff\")\n \n \ndiff --git a/evap/evaluation/migrations/0055_reviewer_group.py b/evap/evaluation/migrations/0055_reviewer_group.py\n--- a/evap/evaluation/migrations/0055_reviewer_group.py\n+++ b/evap/evaluation/migrations/0055_reviewer_group.py\n@@ -1,12 +1,13 @@\n-from django.contrib.auth.models import Group\n from django.db import migrations\n \n \n-def add_group(_apps, _schema_editor):\n+def add_group(apps, _schema_editor):\n+ Group = apps.get_model(\"auth\", \"Group\")\n Group.objects.create(name=\"Reviewer\")\n \n \n-def delete_group(_apps, _schema_editor):\n+def delete_group(apps, _schema_editor):\n+ Group = apps.get_model(\"auth\", \"Group\")\n Group.objects.get(name=\"Reviewer\").delete()\n \n \ndiff --git a/evap/grades/migrations/0002_initial_data.py b/evap/grades/migrations/0002_initial_data.py\n--- a/evap/grades/migrations/0002_initial_data.py\n+++ b/evap/grades/migrations/0002_initial_data.py\n@@ -1,8 +1,8 @@\n from django.db import migrations\n-from django.contrib.auth.models import Group\n \n \n-def add_group(_apps, _schema_editor):\n+def add_group(apps, _schema_editor):\n+ Group = apps.get_model(\"auth\", \"Group\")\n Group.objects.create(name=\"Grade publisher\")\n", "issue": "Remove ViewTest where possible\nRight now we have a `class ViewTest`, for which there is one subclass for each view that we have.\r\n\r\nFor views that we have tested properly, it provides no additional value and I I propose to replace it with the original `WebTest`. \r\n\r\nOriginally I proposed to remove it altogether and copypaste its test to all the test cases that wouldn't have any valuable test otherwise. 
@janno42 convinced me to leave it there and rename it to `WebTestWith200Check` instead.\n", "before_files": [{"content": "from django.db import migrations\nfrom django.contrib.auth.models import Group\n\n\ndef insert_emailtemplates(apps, _schema_editor):\n emailtemplates = [\n (\"Lecturer Review Notice\", \"[EvaP] New Course ready for approval\"),\n (\"Student Reminder\", \"[EvaP] Evaluation period is ending\"),\n (\"Publishing Notice\", \"[EvaP] A course has been published\"),\n (\"Login Key Created\", \"[EvaP] A login key was created\"),\n ]\n\n EmailTemplate = apps.get_model(\"evaluation\", \"EmailTemplate\")\n\n for name, subject in emailtemplates:\n if not EmailTemplate.objects.filter(name=name).exists():\n EmailTemplate.objects.create(name=name, subject=subject, body=\"\")\n\n Group.objects.create(name=\"Staff\")\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('evaluation', '0001_initial'),\n ]\n\n operations = [\n migrations.RunPython(insert_emailtemplates),\n ]\n", "path": "evap/evaluation/migrations/0002_initial_data.py"}, {"content": "from django.db import migrations\nfrom django.contrib.auth.models import Group\n\n\ndef add_group(_apps, _schema_editor):\n Group.objects.create(name=\"Grade publisher\")\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('grades', '0001_initial'),\n ]\n\n operations = [\n migrations.RunPython(add_group),\n ]\n", "path": "evap/grades/migrations/0002_initial_data.py"}, {"content": "from django.contrib.auth.models import Group\nfrom django.db import migrations\n\n\ndef add_group(_apps, _schema_editor):\n Group.objects.create(name=\"Reviewer\")\n\n\ndef delete_group(_apps, _schema_editor):\n Group.objects.get(name=\"Reviewer\").delete()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('evaluation', '0054_userprofile_language'),\n ]\n\n operations = [\n migrations.RunPython(add_group, reverse_code=delete_group),\n ]\n", "path": "evap/evaluation/migrations/0055_reviewer_group.py"}], "after_files": [{"content": "from django.db import migrations\n\n\ndef insert_emailtemplates(apps, _schema_editor):\n emailtemplates = [\n (\"Lecturer Review Notice\", \"[EvaP] New Course ready for approval\"),\n (\"Student Reminder\", \"[EvaP] Evaluation period is ending\"),\n (\"Publishing Notice\", \"[EvaP] A course has been published\"),\n (\"Login Key Created\", \"[EvaP] A login key was created\"),\n ]\n\n EmailTemplate = apps.get_model(\"evaluation\", \"EmailTemplate\")\n\n for name, subject in emailtemplates:\n if not EmailTemplate.objects.filter(name=name).exists():\n EmailTemplate.objects.create(name=name, subject=subject, body=\"\")\n\n Group = apps.get_model(\"auth\", \"Group\")\n Group.objects.create(name=\"Staff\")\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('evaluation', '0001_initial'),\n ]\n\n operations = [\n migrations.RunPython(insert_emailtemplates),\n ]\n", "path": "evap/evaluation/migrations/0002_initial_data.py"}, {"content": "from django.db import migrations\n\n\ndef add_group(apps, _schema_editor):\n Group = apps.get_model(\"auth\", \"Group\")\n Group.objects.create(name=\"Grade publisher\")\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('grades', '0001_initial'),\n ]\n\n operations = [\n migrations.RunPython(add_group),\n ]\n", "path": "evap/grades/migrations/0002_initial_data.py"}, {"content": "from django.db import migrations\n\n\ndef add_group(apps, _schema_editor):\n Group = apps.get_model(\"auth\", \"Group\")\n 
Group.objects.create(name=\"Reviewer\")\n\n\ndef delete_group(apps, _schema_editor):\n Group = apps.get_model(\"auth\", \"Group\")\n Group.objects.get(name=\"Reviewer\").delete()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('evaluation', '0054_userprofile_language'),\n ]\n\n operations = [\n migrations.RunPython(add_group, reverse_code=delete_group),\n ]\n", "path": "evap/evaluation/migrations/0055_reviewer_group.py"}]} | 929 | 500 |
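The evap patch above applies the standard Django data-migration rule: resolve models through `apps.get_model(...)` inside the migration functions instead of importing `django.contrib.auth.models.Group` at module level, so each migration operates on the historical model state. A minimal sketch of the pattern, lifted from the record's `after_files` (app label and group name as in the record):

```python
from django.db import migrations


def add_group(apps, _schema_editor):
    # Resolve the model through the migration state registry rather than
    # importing it directly, so the migration sees the historical schema.
    Group = apps.get_model("auth", "Group")
    Group.objects.create(name="Reviewer")


def delete_group(apps, _schema_editor):
    Group = apps.get_model("auth", "Group")
    Group.objects.get(name="Reviewer").delete()


class Migration(migrations.Migration):
    dependencies = [("evaluation", "0054_userprofile_language")]
    operations = [migrations.RunPython(add_group, reverse_code=delete_group)]
```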
gh_patches_debug_8092 | rasdani/github-patches | git_diff | vega__altair-1907 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Interval Selection Example Bug
I am having trouble with the [Interval Selection Example](https://altair-viz.github.io/gallery/interval_selection.html).

```python
import altair as alt
from vega_datasets import data
source = data.sp500.url
brush = alt.selection(type='interval', encodings=['x'])
upper = alt.Chart(source).mark_area().encode(
alt.X('date:T', scale=alt.Scale(domain=brush)),
y='price:Q'
).properties(
width=600,
height=200
)
lower = upper.properties(
height=60
).add_selection(brush)
upper & lower
```
The example looks correct, so I am unsure what is causing this behavior.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `altair/examples/interval_selection.py`
Content:
```
1 """
2 Interval Selection Example
3 ==========================
4
5 This is an example of creating a stacked chart for which the domain of the
6 top chart can be selected by interacting with the bottom chart.
7 """
8 # category: area charts
9 import altair as alt
10 from vega_datasets import data
11
12 source = data.sp500.url
13
14 brush = alt.selection(type='interval', encodings=['x'])
15
16 upper = alt.Chart(source).mark_area().encode(
17 alt.X('date:T', scale=alt.Scale(domain=brush)),
18 y='price:Q'
19 ).properties(
20 width=600,
21 height=200
22 )
23
24 lower = upper.properties(
25 height=60
26 ).add_selection(brush)
27
28 upper & lower
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/altair/examples/interval_selection.py b/altair/examples/interval_selection.py
--- a/altair/examples/interval_selection.py
+++ b/altair/examples/interval_selection.py
@@ -13,15 +13,19 @@
brush = alt.selection(type='interval', encodings=['x'])
-upper = alt.Chart(source).mark_area().encode(
- alt.X('date:T', scale=alt.Scale(domain=brush)),
- y='price:Q'
+base = alt.Chart(source).mark_area().encode(
+ x = 'date:T',
+ y = 'price:Q'
).properties(
width=600,
height=200
)
-lower = upper.properties(
+upper = base.encode(
+ alt.X('date:T', scale=alt.Scale(domain=brush))
+)
+
+lower = base.properties(
height=60
).add_selection(brush)
| {"golden_diff": "diff --git a/altair/examples/interval_selection.py b/altair/examples/interval_selection.py\n--- a/altair/examples/interval_selection.py\n+++ b/altair/examples/interval_selection.py\n@@ -13,15 +13,19 @@\n \n brush = alt.selection(type='interval', encodings=['x'])\n \n-upper = alt.Chart(source).mark_area().encode(\n- alt.X('date:T', scale=alt.Scale(domain=brush)),\n- y='price:Q'\n+base = alt.Chart(source).mark_area().encode(\n+ x = 'date:T',\n+ y = 'price:Q'\n ).properties(\n width=600,\n height=200\n )\n \n-lower = upper.properties(\n+upper = base.encode(\n+ alt.X('date:T', scale=alt.Scale(domain=brush))\n+)\n+\n+lower = base.properties(\n height=60\n ).add_selection(brush)\n", "issue": "Interval Selection Example Bug\nI am having trouble with the the [Interval Selection Example](https://altair-viz.github.io/gallery/interval_selection.html). \r\n\r\n\r\n```python\r\nimport altair as alt\r\nfrom vega_datasets import data\r\n\r\nsource = data.sp500.url\r\n\r\nbrush = alt.selection(type='interval', encodings=['x'])\r\n\r\nupper = alt.Chart(source).mark_area().encode(\r\n alt.X('date:T', scale=alt.Scale(domain=brush)),\r\n y='price:Q'\r\n).properties(\r\n width=600,\r\n height=200\r\n)\r\n\r\nlower = upper.properties(\r\n height=60\r\n).add_selection(brush)\r\n\r\nupper & lower\r\n```\r\n\r\nThe example looks correct so I am unsure what is causing this behavior. \n", "before_files": [{"content": "\"\"\"\nInterval Selection Example\n==========================\n\nThis is an example of creating a stacked chart for which the domain of the\ntop chart can be selected by interacting with the bottom chart.\n\"\"\"\n# category: area charts\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.sp500.url\n\nbrush = alt.selection(type='interval', encodings=['x'])\n\nupper = alt.Chart(source).mark_area().encode(\n alt.X('date:T', scale=alt.Scale(domain=brush)),\n y='price:Q'\n).properties(\n width=600,\n height=200\n)\n\nlower = upper.properties(\n height=60\n).add_selection(brush)\n\nupper & lower\n", "path": "altair/examples/interval_selection.py"}], "after_files": [{"content": "\"\"\"\nInterval Selection Example\n==========================\n\nThis is an example of creating a stacked chart for which the domain of the\ntop chart can be selected by interacting with the bottom chart.\n\"\"\"\n# category: area charts\nimport altair as alt\nfrom vega_datasets import data\n\nsource = data.sp500.url\n\nbrush = alt.selection(type='interval', encodings=['x'])\n\nbase = alt.Chart(source).mark_area().encode(\n x = 'date:T',\n y = 'price:Q'\n).properties(\n width=600,\n height=200\n)\n\nupper = base.encode(\n alt.X('date:T', scale=alt.Scale(domain=brush))\n)\n\nlower = base.properties(\n height=60\n).add_selection(brush)\n\nupper & lower\n", "path": "altair/examples/interval_selection.py"}]} | 687 | 205 |
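The fix in the altair record factors the shared encoding and size into a `base` chart, then derives both views from it: only `upper` rebinds its x-scale to the brush, while `lower` carries the interval selection. A minimal runnable sketch of the corrected example, assembled from the record's `after_files` (Altair 3/4-era API, as used in the record):

```python
import altair as alt
from vega_datasets import data

source = data.sp500.url

brush = alt.selection(type='interval', encodings=['x'])

# Shared base chart: mark, encodings, and size defined once.
base = alt.Chart(source).mark_area().encode(
    x='date:T',
    y='price:Q',
).properties(width=600, height=200)

# Overview-plus-detail: the upper chart zooms to the brushed interval,
# the lower chart carries the interval selection itself.
upper = base.encode(alt.X('date:T', scale=alt.Scale(domain=brush)))
lower = base.properties(height=60).add_selection(brush)

# In a notebook this renders the stacked pair directly.
upper & lower
```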
gh_patches_debug_21929 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-210 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NLTK being loaded on image classifcation
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To Reproduce
```python
from flash.data import labels_from_csv
from flash.vision import ImageClassificationData
from flash.vision import ImageClassifier
from flash import Trainer
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
[nltk_data] Error loading punkt: <urlopen error [Errno -3] Temporary
[nltk_data] failure in name resolution>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flash/text/seq2seq/summarization/metric.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Dict, List, Tuple
15
16 import numpy as np
17 from rouge_score import rouge_scorer, scoring
18 from rouge_score.scoring import AggregateScore, Score
19 from torch import tensor
20 from torchmetrics import Metric
21
22 from flash.text.seq2seq.summarization.utils import add_newline_to_end_of_each_sentence
23
24
25 class RougeMetric(Metric):
26 """
27 Metric used for automatic summarization. https://www.aclweb.org/anthology/W04-1013/
28
29 Example:
30
31 >>> target = "Is your name John".split()
32 >>> preds = "My name is John".split()
33 >>> rouge = RougeMetric()
34 >>> from pprint import pprint
35 >>> pprint(rouge(preds, target)) # doctest: +NORMALIZE_WHITESPACE
36 {'rouge1_fmeasure': 0.25,
37 'rouge1_precision': 0.25,
38 'rouge1_recall': 0.25,
39 'rouge2_fmeasure': 0.0,
40 'rouge2_precision': 0.0,
41 'rouge2_recall': 0.0,
42 'rougeL_fmeasure': 0.25,
43 'rougeL_precision': 0.25,
44 'rougeL_recall': 0.25,
45 'rougeLsum_fmeasure': 0.25,
46 'rougeLsum_precision': 0.25,
47 'rougeLsum_recall': 0.25}
48 """
49
50 def __init__(
51 self,
52 rouge_newline_sep: bool = False,
53 use_stemmer: bool = False,
54 rouge_keys: Tuple[str] = ("rouge1", "rouge2", "rougeL", "rougeLsum"),
55 ):
56 super().__init__()
57 self.rouge_newline_sep = rouge_newline_sep
58 self.rouge_keys = rouge_keys
59 self.use_stemmer = use_stemmer
60 self.aggregator = RougeBatchAggregator()
61 self.scorer = rouge_scorer.RougeScorer(rouge_keys, use_stemmer=self.use_stemmer)
62
63 for key in rouge_keys:
64 self.add_state(key, [])
65
66 def update(self, pred_lns: List[str], tgt_lns: List[str]):
67 for pred, tgt in zip(pred_lns, tgt_lns):
68 # rougeLsum expects "\n" separated sentences within a summary
69 if self.rouge_newline_sep:
70 pred = add_newline_to_end_of_each_sentence(pred)
71 tgt = add_newline_to_end_of_each_sentence(tgt)
72 results = self.scorer.score(pred, tgt)
73 for key, score in results.items():
74 score = tensor([score.precision, score.recall, score.fmeasure])
75 getattr(self, key).append(score)
76
77 def compute(self) -> Dict[str, float]:
78 scores = {key: getattr(self, key) for key in self.rouge_keys}
79 self.aggregator.add_scores(scores)
80 result = self.aggregator.aggregate()
81 return format_rouge_results(result)
82
83 def __hash__(self):
84 # override to hash list objects.
85 # this is a bug in the upstream pytorch release.
86 hash_vals = [self.__class__.__name__]
87
88 for key in self._defaults.keys():
89 value = getattr(self, key)
90 if isinstance(value, list):
91 value = tuple(value)
92 hash_vals.append(value)
93
94 return hash(tuple(hash_vals))
95
96
97 class RougeBatchAggregator(scoring.BootstrapAggregator):
98 """
99 Aggregates rouge scores and provides confidence intervals.
100 """
101
102 def aggregate(self):
103 """
104 Override function to wrap the final results in `Score` objects.
105 This is due to the scores being replaced with a list of torch tensors.
106 """
107 result = {}
108 for score_type, scores in self._scores.items():
109 # Stack scores into a 2-d matrix of (sample, measure).
110 score_matrix = np.vstack(tuple(scores))
111 # Percentiles are returned as (interval, measure).
112 percentiles = self._bootstrap_resample(score_matrix)
113 # Extract the three intervals (low, mid, high).
114 intervals = tuple((Score(*percentiles[j, :]) for j in range(3)))
115 result[score_type] = AggregateScore(low=intervals[0], mid=intervals[1], high=intervals[2])
116 return result
117
118 def add_scores(self, scores):
119 self._scores = scores
120
121
122 def format_rouge_results(result: Dict[str, AggregateScore], decimal_places: int = 4) -> Dict[str, float]:
123 flattened_result = {}
124 for rouge_key, rouge_aggregate_score in result.items():
125 for stat in ["precision", "recall", "fmeasure"]:
126 mid = rouge_aggregate_score.mid
127 score = round(getattr(mid, stat), decimal_places)
128 flattened_result[f"{rouge_key}_{stat}"] = score
129 return flattened_result
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flash/text/seq2seq/summarization/metric.py b/flash/text/seq2seq/summarization/metric.py
--- a/flash/text/seq2seq/summarization/metric.py
+++ b/flash/text/seq2seq/summarization/metric.py
@@ -19,7 +19,7 @@
from torch import tensor
from torchmetrics import Metric
-from flash.text.seq2seq.summarization.utils import add_newline_to_end_of_each_sentence
+from flash.text.seq2seq import summarization
class RougeMetric(Metric):
@@ -67,8 +67,8 @@
for pred, tgt in zip(pred_lns, tgt_lns):
# rougeLsum expects "\n" separated sentences within a summary
if self.rouge_newline_sep:
- pred = add_newline_to_end_of_each_sentence(pred)
- tgt = add_newline_to_end_of_each_sentence(tgt)
+ pred = summarization.utils.add_newline_to_end_of_each_sentence(pred)
+ tgt = summarization.utils.add_newline_to_end_of_each_sentence(tgt)
results = self.scorer.score(pred, tgt)
for key, score in results.items():
score = tensor([score.precision, score.recall, score.fmeasure])
| {"golden_diff": "diff --git a/flash/text/seq2seq/summarization/metric.py b/flash/text/seq2seq/summarization/metric.py\n--- a/flash/text/seq2seq/summarization/metric.py\n+++ b/flash/text/seq2seq/summarization/metric.py\n@@ -19,7 +19,7 @@\n from torch import tensor\n from torchmetrics import Metric\n \n-from flash.text.seq2seq.summarization.utils import add_newline_to_end_of_each_sentence\n+from flash.text.seq2seq import summarization\n \n \n class RougeMetric(Metric):\n@@ -67,8 +67,8 @@\n for pred, tgt in zip(pred_lns, tgt_lns):\n # rougeLsum expects \"\\n\" separated sentences within a summary\n if self.rouge_newline_sep:\n- pred = add_newline_to_end_of_each_sentence(pred)\n- tgt = add_newline_to_end_of_each_sentence(tgt)\n+ pred = summarization.utils.add_newline_to_end_of_each_sentence(pred)\n+ tgt = summarization.utils.add_newline_to_end_of_each_sentence(tgt)\n results = self.scorer.score(pred, tgt)\n for key, score in results.items():\n score = tensor([score.precision, score.recall, score.fmeasure])\n", "issue": "NLTK being loaded on image classifcation\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n### To Reproduce\r\n\r\n\r\n```python\r\nfrom flash.data import labels_from_csv\r\nfrom flash.vision import ImageClassificationData\r\nfrom flash.vision import ImageClassifier\r\nfrom flash import Trainer\r\n```\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n[nltk_data] Error loading punkt: <urlopen error [Errno -3] Temporary\r\n[nltk_data] failure in name resolution>\r\n\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Dict, List, Tuple\n\nimport numpy as np\nfrom rouge_score import rouge_scorer, scoring\nfrom rouge_score.scoring import AggregateScore, Score\nfrom torch import tensor\nfrom torchmetrics import Metric\n\nfrom flash.text.seq2seq.summarization.utils import add_newline_to_end_of_each_sentence\n\n\nclass RougeMetric(Metric):\n \"\"\"\n Metric used for automatic summarization. 
https://www.aclweb.org/anthology/W04-1013/\n\n Example:\n\n >>> target = \"Is your name John\".split()\n >>> preds = \"My name is John\".split()\n >>> rouge = RougeMetric()\n >>> from pprint import pprint\n >>> pprint(rouge(preds, target)) # doctest: +NORMALIZE_WHITESPACE\n {'rouge1_fmeasure': 0.25,\n 'rouge1_precision': 0.25,\n 'rouge1_recall': 0.25,\n 'rouge2_fmeasure': 0.0,\n 'rouge2_precision': 0.0,\n 'rouge2_recall': 0.0,\n 'rougeL_fmeasure': 0.25,\n 'rougeL_precision': 0.25,\n 'rougeL_recall': 0.25,\n 'rougeLsum_fmeasure': 0.25,\n 'rougeLsum_precision': 0.25,\n 'rougeLsum_recall': 0.25}\n \"\"\"\n\n def __init__(\n self,\n rouge_newline_sep: bool = False,\n use_stemmer: bool = False,\n rouge_keys: Tuple[str] = (\"rouge1\", \"rouge2\", \"rougeL\", \"rougeLsum\"),\n ):\n super().__init__()\n self.rouge_newline_sep = rouge_newline_sep\n self.rouge_keys = rouge_keys\n self.use_stemmer = use_stemmer\n self.aggregator = RougeBatchAggregator()\n self.scorer = rouge_scorer.RougeScorer(rouge_keys, use_stemmer=self.use_stemmer)\n\n for key in rouge_keys:\n self.add_state(key, [])\n\n def update(self, pred_lns: List[str], tgt_lns: List[str]):\n for pred, tgt in zip(pred_lns, tgt_lns):\n # rougeLsum expects \"\\n\" separated sentences within a summary\n if self.rouge_newline_sep:\n pred = add_newline_to_end_of_each_sentence(pred)\n tgt = add_newline_to_end_of_each_sentence(tgt)\n results = self.scorer.score(pred, tgt)\n for key, score in results.items():\n score = tensor([score.precision, score.recall, score.fmeasure])\n getattr(self, key).append(score)\n\n def compute(self) -> Dict[str, float]:\n scores = {key: getattr(self, key) for key in self.rouge_keys}\n self.aggregator.add_scores(scores)\n result = self.aggregator.aggregate()\n return format_rouge_results(result)\n\n def __hash__(self):\n # override to hash list objects.\n # this is a bug in the upstream pytorch release.\n hash_vals = [self.__class__.__name__]\n\n for key in self._defaults.keys():\n value = getattr(self, key)\n if isinstance(value, list):\n value = tuple(value)\n hash_vals.append(value)\n\n return hash(tuple(hash_vals))\n\n\nclass RougeBatchAggregator(scoring.BootstrapAggregator):\n \"\"\"\n Aggregates rouge scores and provides confidence intervals.\n \"\"\"\n\n def aggregate(self):\n \"\"\"\n Override function to wrap the final results in `Score` objects.\n This is due to the scores being replaced with a list of torch tensors.\n \"\"\"\n result = {}\n for score_type, scores in self._scores.items():\n # Stack scores into a 2-d matrix of (sample, measure).\n score_matrix = np.vstack(tuple(scores))\n # Percentiles are returned as (interval, measure).\n percentiles = self._bootstrap_resample(score_matrix)\n # Extract the three intervals (low, mid, high).\n intervals = tuple((Score(*percentiles[j, :]) for j in range(3)))\n result[score_type] = AggregateScore(low=intervals[0], mid=intervals[1], high=intervals[2])\n return result\n\n def add_scores(self, scores):\n self._scores = scores\n\n\ndef format_rouge_results(result: Dict[str, AggregateScore], decimal_places: int = 4) -> Dict[str, float]:\n flattened_result = {}\n for rouge_key, rouge_aggregate_score in result.items():\n for stat in [\"precision\", \"recall\", \"fmeasure\"]:\n mid = rouge_aggregate_score.mid\n score = round(getattr(mid, stat), decimal_places)\n flattened_result[f\"{rouge_key}_{stat}\"] = score\n return flattened_result\n", "path": "flash/text/seq2seq/summarization/metric.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# 
Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Dict, List, Tuple\n\nimport numpy as np\nfrom rouge_score import rouge_scorer, scoring\nfrom rouge_score.scoring import AggregateScore, Score\nfrom torch import tensor\nfrom torchmetrics import Metric\n\nfrom flash.text.seq2seq import summarization\n\n\nclass RougeMetric(Metric):\n \"\"\"\n Metric used for automatic summarization. https://www.aclweb.org/anthology/W04-1013/\n\n Example:\n\n >>> target = \"Is your name John\".split()\n >>> preds = \"My name is John\".split()\n >>> rouge = RougeMetric()\n >>> from pprint import pprint\n >>> pprint(rouge(preds, target)) # doctest: +NORMALIZE_WHITESPACE\n {'rouge1_fmeasure': 0.25,\n 'rouge1_precision': 0.25,\n 'rouge1_recall': 0.25,\n 'rouge2_fmeasure': 0.0,\n 'rouge2_precision': 0.0,\n 'rouge2_recall': 0.0,\n 'rougeL_fmeasure': 0.25,\n 'rougeL_precision': 0.25,\n 'rougeL_recall': 0.25,\n 'rougeLsum_fmeasure': 0.25,\n 'rougeLsum_precision': 0.25,\n 'rougeLsum_recall': 0.25}\n \"\"\"\n\n def __init__(\n self,\n rouge_newline_sep: bool = False,\n use_stemmer: bool = False,\n rouge_keys: Tuple[str] = (\"rouge1\", \"rouge2\", \"rougeL\", \"rougeLsum\"),\n ):\n super().__init__()\n self.rouge_newline_sep = rouge_newline_sep\n self.rouge_keys = rouge_keys\n self.use_stemmer = use_stemmer\n self.aggregator = RougeBatchAggregator()\n self.scorer = rouge_scorer.RougeScorer(rouge_keys, use_stemmer=self.use_stemmer)\n\n for key in rouge_keys:\n self.add_state(key, [])\n\n def update(self, pred_lns: List[str], tgt_lns: List[str]):\n for pred, tgt in zip(pred_lns, tgt_lns):\n # rougeLsum expects \"\\n\" separated sentences within a summary\n if self.rouge_newline_sep:\n pred = summarization.utils.add_newline_to_end_of_each_sentence(pred)\n tgt = summarization.utils.add_newline_to_end_of_each_sentence(tgt)\n results = self.scorer.score(pred, tgt)\n for key, score in results.items():\n score = tensor([score.precision, score.recall, score.fmeasure])\n getattr(self, key).append(score)\n\n def compute(self) -> Dict[str, float]:\n scores = {key: getattr(self, key) for key in self.rouge_keys}\n self.aggregator.add_scores(scores)\n result = self.aggregator.aggregate()\n return format_rouge_results(result)\n\n def __hash__(self):\n # override to hash list objects.\n # this is a bug in the upstream pytorch release.\n hash_vals = [self.__class__.__name__]\n\n for key in self._defaults.keys():\n value = getattr(self, key)\n if isinstance(value, list):\n value = tuple(value)\n hash_vals.append(value)\n\n return hash(tuple(hash_vals))\n\n\nclass RougeBatchAggregator(scoring.BootstrapAggregator):\n \"\"\"\n Aggregates rouge scores and provides confidence intervals.\n \"\"\"\n\n def aggregate(self):\n \"\"\"\n Override function to wrap the final results in `Score` objects.\n This is due to the scores being replaced with a list of torch tensors.\n \"\"\"\n result = {}\n for score_type, scores in self._scores.items():\n # Stack scores into a 2-d matrix of (sample, measure).\n score_matrix = 
np.vstack(tuple(scores))\n # Percentiles are returned as (interval, measure).\n percentiles = self._bootstrap_resample(score_matrix)\n # Extract the three intervals (low, mid, high).\n intervals = tuple((Score(*percentiles[j, :]) for j in range(3)))\n result[score_type] = AggregateScore(low=intervals[0], mid=intervals[1], high=intervals[2])\n return result\n\n def add_scores(self, scores):\n self._scores = scores\n\n\ndef format_rouge_results(result: Dict[str, AggregateScore], decimal_places: int = 4) -> Dict[str, float]:\n flattened_result = {}\n for rouge_key, rouge_aggregate_score in result.items():\n for stat in [\"precision\", \"recall\", \"fmeasure\"]:\n mid = rouge_aggregate_score.mid\n score = round(getattr(mid, stat), decimal_places)\n flattened_result[f\"{rouge_key}_{stat}\"] = score\n return flattened_result\n", "path": "flash/text/seq2seq/summarization/metric.py"}]} | 1,885 | 282 |
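The flash patch swaps the module-level import of `add_newline_to_end_of_each_sentence` for a package import, so the heavy `utils` module (which pulls in NLTK) is only resolved as an attribute at call time. A hedged sketch of that lazy-lookup idea with hypothetical names (`mypkg` and `split_sentences` are illustrative, not the real flash API); note it only helps if `mypkg/summarization/__init__.py` does not itself import `utils` eagerly:

```python
# Eager form: importing this module would pull in NLTK immediately.
# from mypkg.summarization.utils import split_sentences

# Lazy form: bind only the package; resolve the attribute at call time.
from mypkg import summarization  # hypothetical package


def add_newlines(pred: str, tgt: str, newline_sep: bool = False):
    if newline_sep:
        # The attribute lookup happens here, when the metric is used,
        # not when the top-level package is imported.
        pred = summarization.utils.split_sentences(pred)
        tgt = summarization.utils.split_sentences(tgt)
    return pred, tgt
```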
gh_patches_debug_34681 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-323 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
submit master pod using k8s python client instead of kubectl
use k8s python client to submit master pod instead of using the command below
`os.system('kubectl run ...')`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/client/client.py`
Content:
```
1 import os
2 import inspect
3 import shutil
4 import time
5 import getpass
6 from string import Template
7 import docker
8
9
10 def run(model_class, train_data_dir=None,
11 num_epoch=1, minibatch_size=10,
12 record_per_task=100, num_worker=1, grads_to_wait=2):
13 m_path, m_file = _getModelFile()
14 m_file_in_docker = "/model/" + m_file
15 timestamp = int(round(time.time() * 1000))
16 _build_docker_image(m_path, m_file, m_file_in_docker, timestamp)
17 yaml_file = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir,
18 num_epoch=num_epoch, minibatch_size=minibatch_size,
19 record_per_task=record_per_task, num_worker=num_worker,
20 grads_to_wait=grads_to_wait, timestamp=timestamp)
21 _submit(yaml_file)
22
23 def _getModelFile():
24 m_file = inspect.currentframe().f_back.f_back.f_code.co_filename
25 m_path = os.path.abspath(os.path.dirname(m_file))
26 return m_path, m_file
27
28 def _build_docker_image(m_path, m_file, m_file_in_docker, timestamp):
29 d_path = os.path.abspath(os.path.dirname(
30 inspect.currentframe().f_back.f_code.co_filename))
31 new_dfile = m_path + "/Dockerfile"
32 shutil.copyfile(d_path + "/../Dockerfile.dev", new_dfile)
33
34 with open(new_dfile, 'a') as df:
35 df.write("COPY " + m_file + " " + m_file_in_docker)
36 client = docker.APIClient(base_url='unix://var/run/docker.sock')
37 for line in client.build(dockerfile='Dockerfile', path='.', tag='elasticdl:dev_' + str(timestamp)):
38 print(str(line, encoding = "utf-8"))
39
40 # TODO: upload docker image to docker hub.
41
42 def _generate_yaml(m_file, m_class,
43 train_data_dir=None, num_epoch=1,
44 minibatch_size=10, record_per_task=100,
45 num_worker=1, grads_to_wait=2, timestamp=1):
46 YAML_TEMPLATE = """
47 apiVersion: v1
48 kind: Pod
49 metadata:
50 name: elasticdl-master-$timestamp
51 labels:
52 purpose: test-command
53 spec:
54 containers:
55 - name: elasticdl-master-$timestamp
56 image: elasticdl:dev_$timestamp
57 command: ["python"]
58 args: ["-m", "elasticdl.master.main",
59 "--model-file", "$m_file",
60 "--num_worker", "$num_worker",
61 "--worker_image", "elasticdl:dev_$timestamp",
62 "--job_name", "elasticdl-$timestamp",
63 "--model-class", "$m_class",
64 "--train_data_dir", "$train_data_dir",
65 "--num_epoch", "$num_epoch",
66 "--grads_to_wait", "$grads_to_wait",
67 "--minibatch_size", "$minibatch_size",
68 "--record_per_task", "$record_per_task"]
69 imagePullPolicy: Never
70 env:
71 - name: MY_POD_IP
72 valueFrom:
73 fieldRef:
74 fieldPath: status.podIP
75 restartPolicy: Never
76 """
77 t = Template(YAML_TEMPLATE)
78 yaml_file = 'job_desc.yaml'
79 with open(yaml_file, "w") as yaml:
80 yaml.write(t.substitute(m_file=m_file, m_class=m_class,
81 train_data_dir=train_data_dir,
82 timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,
83 minibatch_size=minibatch_size, record_per_task=record_per_task,
84 user=getpass.getuser(), grads_to_wait=grads_to_wait))
85 return yaml_file
86
87 def _submit(yaml_file):
88 os.system('kubectl create -f ' + yaml_file)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticdl/client/client.py b/elasticdl/client/client.py
--- a/elasticdl/client/client.py
+++ b/elasticdl/client/client.py
@@ -5,6 +5,9 @@
import getpass
from string import Template
import docker
+import yaml
+from kubernetes.client.apis import core_v1_api
+from kubernetes import config
def run(model_class, train_data_dir=None,
@@ -14,11 +17,11 @@
m_file_in_docker = "/model/" + m_file
timestamp = int(round(time.time() * 1000))
_build_docker_image(m_path, m_file, m_file_in_docker, timestamp)
- yaml_file = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir,
+ yaml_content = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir,
num_epoch=num_epoch, minibatch_size=minibatch_size,
record_per_task=record_per_task, num_worker=num_worker,
grads_to_wait=grads_to_wait, timestamp=timestamp)
- _submit(yaml_file)
+ _submit(yaml_content)
def _getModelFile():
m_file = inspect.currentframe().f_back.f_back.f_code.co_filename
@@ -75,14 +78,15 @@
restartPolicy: Never
"""
t = Template(YAML_TEMPLATE)
- yaml_file = 'job_desc.yaml'
- with open(yaml_file, "w") as yaml:
- yaml.write(t.substitute(m_file=m_file, m_class=m_class,
- train_data_dir=train_data_dir,
- timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,
- minibatch_size=minibatch_size, record_per_task=record_per_task,
- user=getpass.getuser(), grads_to_wait=grads_to_wait))
- return yaml_file
+ return t.substitute(m_file=m_file, m_class=m_class,
+ train_data_dir=train_data_dir,
+ timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,
+ minibatch_size=minibatch_size, record_per_task=record_per_task,
+ user=getpass.getuser(), grads_to_wait=grads_to_wait)
-def _submit(yaml_file):
- os.system('kubectl create -f ' + yaml_file)
+def _submit(yaml_content):
+ config.load_kube_config()
+ pod_desc = yaml.safe_load(yaml_content)
+ api = core_v1_api.CoreV1Api()
+ resp = api.create_namespaced_pod(body=pod_desc, namespace='default')
+ print("Pod created. status='%s'" % str(resp.status))
| {"golden_diff": "diff --git a/elasticdl/client/client.py b/elasticdl/client/client.py\n--- a/elasticdl/client/client.py\n+++ b/elasticdl/client/client.py\n@@ -5,6 +5,9 @@\n import getpass\n from string import Template\n import docker\n+import yaml\n+from kubernetes.client.apis import core_v1_api\n+from kubernetes import config\n \n \n def run(model_class, train_data_dir=None, \n@@ -14,11 +17,11 @@\n m_file_in_docker = \"/model/\" + m_file \n timestamp = int(round(time.time() * 1000))\n _build_docker_image(m_path, m_file, m_file_in_docker, timestamp)\n- yaml_file = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir, \n+ yaml_content = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir, \n num_epoch=num_epoch, minibatch_size=minibatch_size, \n record_per_task=record_per_task, num_worker=num_worker, \n grads_to_wait=grads_to_wait, timestamp=timestamp)\n- _submit(yaml_file)\n+ _submit(yaml_content)\n \n def _getModelFile():\n m_file = inspect.currentframe().f_back.f_back.f_code.co_filename\n@@ -75,14 +78,15 @@\n restartPolicy: Never\n \"\"\"\n t = Template(YAML_TEMPLATE)\n- yaml_file = 'job_desc.yaml'\n- with open(yaml_file, \"w\") as yaml:\n- yaml.write(t.substitute(m_file=m_file, m_class=m_class, \n- train_data_dir=train_data_dir, \n- timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,\n- minibatch_size=minibatch_size, record_per_task=record_per_task,\n- user=getpass.getuser(), grads_to_wait=grads_to_wait))\n- return yaml_file\n+ return t.substitute(m_file=m_file, m_class=m_class, \n+ train_data_dir=train_data_dir, \n+ timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,\n+ minibatch_size=minibatch_size, record_per_task=record_per_task,\n+ user=getpass.getuser(), grads_to_wait=grads_to_wait)\n \n-def _submit(yaml_file):\n- os.system('kubectl create -f ' + yaml_file)\n+def _submit(yaml_content):\n+ config.load_kube_config()\n+ pod_desc = yaml.safe_load(yaml_content)\n+ api = core_v1_api.CoreV1Api()\n+ resp = api.create_namespaced_pod(body=pod_desc, namespace='default')\n+ print(\"Pod created. 
status='%s'\" % str(resp.status))\n", "issue": "submit master pod using k8s python client instead of kubectl \nuse k8s python client to submit master pod instead of using the command below\r\n`os.system('kubectl run ...')`\n", "before_files": [{"content": "import os\nimport inspect\nimport shutil\nimport time\nimport getpass\nfrom string import Template\nimport docker\n\n\ndef run(model_class, train_data_dir=None, \n num_epoch=1, minibatch_size=10, \n record_per_task=100, num_worker=1, grads_to_wait=2):\n m_path, m_file = _getModelFile()\n m_file_in_docker = \"/model/\" + m_file \n timestamp = int(round(time.time() * 1000))\n _build_docker_image(m_path, m_file, m_file_in_docker, timestamp)\n yaml_file = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir, \n num_epoch=num_epoch, minibatch_size=minibatch_size, \n record_per_task=record_per_task, num_worker=num_worker, \n grads_to_wait=grads_to_wait, timestamp=timestamp)\n _submit(yaml_file)\n\ndef _getModelFile():\n m_file = inspect.currentframe().f_back.f_back.f_code.co_filename\n m_path = os.path.abspath(os.path.dirname(m_file))\n return m_path, m_file\n\ndef _build_docker_image(m_path, m_file, m_file_in_docker, timestamp):\n d_path = os.path.abspath(os.path.dirname(\n inspect.currentframe().f_back.f_code.co_filename))\n new_dfile = m_path + \"/Dockerfile\"\n shutil.copyfile(d_path + \"/../Dockerfile.dev\", new_dfile)\n\n with open(new_dfile, 'a') as df:\n df.write(\"COPY \" + m_file + \" \" + m_file_in_docker)\n client = docker.APIClient(base_url='unix://var/run/docker.sock') \n for line in client.build(dockerfile='Dockerfile', path='.', tag='elasticdl:dev_' + str(timestamp)):\n print(str(line, encoding = \"utf-8\"))\n\n # TODO: upload docker image to docker hub.\n\ndef _generate_yaml(m_file, m_class,\n train_data_dir=None, num_epoch=1,\n minibatch_size=10, record_per_task=100, \n num_worker=1, grads_to_wait=2, timestamp=1):\n YAML_TEMPLATE = \"\"\"\n apiVersion: v1\n kind: Pod\n metadata:\n name: elasticdl-master-$timestamp\n labels:\n purpose: test-command\n spec:\n containers:\n - name: elasticdl-master-$timestamp\n image: elasticdl:dev_$timestamp\n command: [\"python\"]\n args: [\"-m\", \"elasticdl.master.main\",\n \"--model-file\", \"$m_file\",\n \"--num_worker\", \"$num_worker\",\n \"--worker_image\", \"elasticdl:dev_$timestamp\",\n \"--job_name\", \"elasticdl-$timestamp\",\n \"--model-class\", \"$m_class\",\n \"--train_data_dir\", \"$train_data_dir\",\n \"--num_epoch\", \"$num_epoch\",\n \"--grads_to_wait\", \"$grads_to_wait\",\n \"--minibatch_size\", \"$minibatch_size\",\n \"--record_per_task\", \"$record_per_task\"]\n imagePullPolicy: Never\n env:\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n restartPolicy: Never\n \"\"\"\n t = Template(YAML_TEMPLATE)\n yaml_file = 'job_desc.yaml'\n with open(yaml_file, \"w\") as yaml:\n yaml.write(t.substitute(m_file=m_file, m_class=m_class, \n train_data_dir=train_data_dir, \n timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,\n minibatch_size=minibatch_size, record_per_task=record_per_task,\n user=getpass.getuser(), grads_to_wait=grads_to_wait))\n return yaml_file\n\ndef _submit(yaml_file):\n os.system('kubectl create -f ' + yaml_file)\n", "path": "elasticdl/client/client.py"}], "after_files": [{"content": "import os\nimport inspect\nimport shutil\nimport time\nimport getpass\nfrom string import Template\nimport docker\nimport yaml\nfrom kubernetes.client.apis import core_v1_api\nfrom kubernetes import config\n\n\ndef 
run(model_class, train_data_dir=None, \n num_epoch=1, minibatch_size=10, \n record_per_task=100, num_worker=1, grads_to_wait=2):\n m_path, m_file = _getModelFile()\n m_file_in_docker = \"/model/\" + m_file \n timestamp = int(round(time.time() * 1000))\n _build_docker_image(m_path, m_file, m_file_in_docker, timestamp)\n yaml_content = _generate_yaml(m_file_in_docker, model_class.__name__, train_data_dir=train_data_dir, \n num_epoch=num_epoch, minibatch_size=minibatch_size, \n record_per_task=record_per_task, num_worker=num_worker, \n grads_to_wait=grads_to_wait, timestamp=timestamp)\n _submit(yaml_content)\n\ndef _getModelFile():\n m_file = inspect.currentframe().f_back.f_back.f_code.co_filename\n m_path = os.path.abspath(os.path.dirname(m_file))\n return m_path, m_file\n\ndef _build_docker_image(m_path, m_file, m_file_in_docker, timestamp):\n d_path = os.path.abspath(os.path.dirname(\n inspect.currentframe().f_back.f_code.co_filename))\n new_dfile = m_path + \"/Dockerfile\"\n shutil.copyfile(d_path + \"/../Dockerfile.dev\", new_dfile)\n\n with open(new_dfile, 'a') as df:\n df.write(\"COPY \" + m_file + \" \" + m_file_in_docker)\n client = docker.APIClient(base_url='unix://var/run/docker.sock') \n for line in client.build(dockerfile='Dockerfile', path='.', tag='elasticdl:dev_' + str(timestamp)):\n print(str(line, encoding = \"utf-8\"))\n\n # TODO: upload docker image to docker hub.\n\ndef _generate_yaml(m_file, m_class,\n train_data_dir=None, num_epoch=1,\n minibatch_size=10, record_per_task=100, \n num_worker=1, grads_to_wait=2, timestamp=1):\n YAML_TEMPLATE = \"\"\"\n apiVersion: v1\n kind: Pod\n metadata:\n name: elasticdl-master-$timestamp\n labels:\n purpose: test-command\n spec:\n containers:\n - name: elasticdl-master-$timestamp\n image: elasticdl:dev_$timestamp\n command: [\"python\"]\n args: [\"-m\", \"elasticdl.master.main\",\n \"--model-file\", \"$m_file\",\n \"--num_worker\", \"$num_worker\",\n \"--worker_image\", \"elasticdl:dev_$timestamp\",\n \"--job_name\", \"elasticdl-$timestamp\",\n \"--model-class\", \"$m_class\",\n \"--train_data_dir\", \"$train_data_dir\",\n \"--num_epoch\", \"$num_epoch\",\n \"--grads_to_wait\", \"$grads_to_wait\",\n \"--minibatch_size\", \"$minibatch_size\",\n \"--record_per_task\", \"$record_per_task\"]\n imagePullPolicy: Never\n env:\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n restartPolicy: Never\n \"\"\"\n t = Template(YAML_TEMPLATE)\n return t.substitute(m_file=m_file, m_class=m_class, \n train_data_dir=train_data_dir, \n timestamp=timestamp, num_worker=num_worker, num_epoch=num_epoch,\n minibatch_size=minibatch_size, record_per_task=record_per_task,\n user=getpass.getuser(), grads_to_wait=grads_to_wait)\n\ndef _submit(yaml_content):\n config.load_kube_config()\n pod_desc = yaml.safe_load(yaml_content)\n api = core_v1_api.CoreV1Api()\n resp = api.create_namespaced_pod(body=pod_desc, namespace='default')\n print(\"Pod created. status='%s'\" % str(resp.status))\n", "path": "elasticdl/client/client.py"}]} | 1,317 | 608 |
gh_patches_debug_27572 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1358 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add log message to get_user_config function
there should be debug message for cases when default config overwrites user_config.
Currently, it is done silently.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/config.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Global configuration handling."""
4
5 from __future__ import unicode_literals
6 import copy
7 import logging
8 import os
9 import io
10 import collections
11
12 import poyo
13
14 from cookiecutter.exceptions import ConfigDoesNotExistException
15 from cookiecutter.exceptions import InvalidConfiguration
16
17
18 logger = logging.getLogger(__name__)
19
20 USER_CONFIG_PATH = os.path.expanduser('~/.cookiecutterrc')
21
22 BUILTIN_ABBREVIATIONS = {
23 'gh': 'https://github.com/{0}.git',
24 'gl': 'https://gitlab.com/{0}.git',
25 'bb': 'https://bitbucket.org/{0}',
26 }
27
28 DEFAULT_CONFIG = {
29 'cookiecutters_dir': os.path.expanduser('~/.cookiecutters/'),
30 'replay_dir': os.path.expanduser('~/.cookiecutter_replay/'),
31 'default_context': collections.OrderedDict([]),
32 'abbreviations': BUILTIN_ABBREVIATIONS,
33 }
34
35
36 def _expand_path(path):
37 """Expand both environment variables and user home in the given path."""
38 path = os.path.expandvars(path)
39 path = os.path.expanduser(path)
40 return path
41
42
43 def merge_configs(default, overwrite):
44 """Recursively update a dict with the key/value pair of another.
45
46 Dict values that are dictionaries themselves will be updated, whilst
47 preserving existing keys.
48 """
49 new_config = copy.deepcopy(default)
50
51 for k, v in overwrite.items():
52 # Make sure to preserve existing items in
53 # nested dicts, for example `abbreviations`
54 if isinstance(v, dict):
55 new_config[k] = merge_configs(default[k], v)
56 else:
57 new_config[k] = v
58
59 return new_config
60
61
62 def get_config(config_path):
63 """Retrieve the config from the specified path, returning a config dict."""
64 if not os.path.exists(config_path):
65 raise ConfigDoesNotExistException
66
67 logger.debug('config_path is %s', config_path)
68 with io.open(config_path, encoding='utf-8') as file_handle:
69 try:
70 yaml_dict = poyo.parse_string(file_handle.read())
71 except poyo.exceptions.PoyoException as e:
72 raise InvalidConfiguration(
73 'Unable to parse YAML file {}. Error: {}'
74 ''.format(config_path, e)
75 )
76
77 config_dict = merge_configs(DEFAULT_CONFIG, yaml_dict)
78
79 raw_replay_dir = config_dict['replay_dir']
80 config_dict['replay_dir'] = _expand_path(raw_replay_dir)
81
82 raw_cookies_dir = config_dict['cookiecutters_dir']
83 config_dict['cookiecutters_dir'] = _expand_path(raw_cookies_dir)
84
85 return config_dict
86
87
88 def get_user_config(config_file=None, default_config=False):
89 """Return the user config as a dict.
90
91 If ``default_config`` is True, ignore ``config_file`` and return default
92 values for the config parameters.
93
94 If a path to a ``config_file`` is given, that is different from the default
95 location, load the user config from that.
96
97 Otherwise look up the config file path in the ``COOKIECUTTER_CONFIG``
98 environment variable. If set, load the config from this path. This will
99 raise an error if the specified path is not valid.
100
101 If the environment variable is not set, try the default config file path
102 before falling back to the default config values.
103 """
104 # Do NOT load a config. Return defaults instead.
105 if default_config:
106 return copy.copy(DEFAULT_CONFIG)
107
108 # Load the given config file
109 if config_file and config_file is not USER_CONFIG_PATH:
110 return get_config(config_file)
111
112 try:
113 # Does the user set up a config environment variable?
114 env_config_file = os.environ['COOKIECUTTER_CONFIG']
115 except KeyError:
116 # Load an optional user config if it exists
117 # otherwise return the defaults
118 if os.path.exists(USER_CONFIG_PATH):
119 return get_config(USER_CONFIG_PATH)
120 else:
121 return copy.copy(DEFAULT_CONFIG)
122 else:
123 # There is a config environment variable. Try to load it.
124 # Do not check for existence, so invalid file paths raise an error.
125 return get_config(env_config_file)
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cookiecutter/config.py b/cookiecutter/config.py
--- a/cookiecutter/config.py
+++ b/cookiecutter/config.py
@@ -103,10 +103,12 @@
"""
# Do NOT load a config. Return defaults instead.
if default_config:
+ logger.debug("Force ignoring user config with default_config switch.")
return copy.copy(DEFAULT_CONFIG)
# Load the given config file
if config_file and config_file is not USER_CONFIG_PATH:
+ logger.debug("Loading custom config from %s.", config_file)
return get_config(config_file)
try:
@@ -116,10 +118,13 @@
# Load an optional user config if it exists
# otherwise return the defaults
if os.path.exists(USER_CONFIG_PATH):
+ logger.debug("Loading config from %s.", USER_CONFIG_PATH)
return get_config(USER_CONFIG_PATH)
else:
+ logger.debug("User config not found. Loading default config.")
return copy.copy(DEFAULT_CONFIG)
else:
# There is a config environment variable. Try to load it.
# Do not check for existence, so invalid file paths raise an error.
+ logger.debug("User config not found or not specified. Loading default config.")
return get_config(env_config_file)
| {"golden_diff": "diff --git a/cookiecutter/config.py b/cookiecutter/config.py\n--- a/cookiecutter/config.py\n+++ b/cookiecutter/config.py\n@@ -103,10 +103,12 @@\n \"\"\"\n # Do NOT load a config. Return defaults instead.\n if default_config:\n+ logger.debug(\"Force ignoring user config with default_config switch.\")\n return copy.copy(DEFAULT_CONFIG)\n \n # Load the given config file\n if config_file and config_file is not USER_CONFIG_PATH:\n+ logger.debug(\"Loading custom config from %s.\", config_file)\n return get_config(config_file)\n \n try:\n@@ -116,10 +118,13 @@\n # Load an optional user config if it exists\n # otherwise return the defaults\n if os.path.exists(USER_CONFIG_PATH):\n+ logger.debug(\"Loading config from %s.\", USER_CONFIG_PATH)\n return get_config(USER_CONFIG_PATH)\n else:\n+ logger.debug(\"User config not found. Loading default config.\")\n return copy.copy(DEFAULT_CONFIG)\n else:\n # There is a config environment variable. Try to load it.\n # Do not check for existence, so invalid file paths raise an error.\n+ logger.debug(\"User config not found or not specified. Loading default config.\")\n return get_config(env_config_file)\n", "issue": "Add log message to get_user_config function\nthere should be debug message for cases when default config overwrites user_config. \r\nCurrently, it is done silently.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Global configuration handling.\"\"\"\n\nfrom __future__ import unicode_literals\nimport copy\nimport logging\nimport os\nimport io\nimport collections\n\nimport poyo\n\nfrom cookiecutter.exceptions import ConfigDoesNotExistException\nfrom cookiecutter.exceptions import InvalidConfiguration\n\n\nlogger = logging.getLogger(__name__)\n\nUSER_CONFIG_PATH = os.path.expanduser('~/.cookiecutterrc')\n\nBUILTIN_ABBREVIATIONS = {\n 'gh': 'https://github.com/{0}.git',\n 'gl': 'https://gitlab.com/{0}.git',\n 'bb': 'https://bitbucket.org/{0}',\n}\n\nDEFAULT_CONFIG = {\n 'cookiecutters_dir': os.path.expanduser('~/.cookiecutters/'),\n 'replay_dir': os.path.expanduser('~/.cookiecutter_replay/'),\n 'default_context': collections.OrderedDict([]),\n 'abbreviations': BUILTIN_ABBREVIATIONS,\n}\n\n\ndef _expand_path(path):\n \"\"\"Expand both environment variables and user home in the given path.\"\"\"\n path = os.path.expandvars(path)\n path = os.path.expanduser(path)\n return path\n\n\ndef merge_configs(default, overwrite):\n \"\"\"Recursively update a dict with the key/value pair of another.\n\n Dict values that are dictionaries themselves will be updated, whilst\n preserving existing keys.\n \"\"\"\n new_config = copy.deepcopy(default)\n\n for k, v in overwrite.items():\n # Make sure to preserve existing items in\n # nested dicts, for example `abbreviations`\n if isinstance(v, dict):\n new_config[k] = merge_configs(default[k], v)\n else:\n new_config[k] = v\n\n return new_config\n\n\ndef get_config(config_path):\n \"\"\"Retrieve the config from the specified path, returning a config dict.\"\"\"\n if not os.path.exists(config_path):\n raise ConfigDoesNotExistException\n\n logger.debug('config_path is %s', config_path)\n with io.open(config_path, encoding='utf-8') as file_handle:\n try:\n yaml_dict = poyo.parse_string(file_handle.read())\n except poyo.exceptions.PoyoException as e:\n raise InvalidConfiguration(\n 'Unable to parse YAML file {}. 
Error: {}'\n ''.format(config_path, e)\n )\n\n config_dict = merge_configs(DEFAULT_CONFIG, yaml_dict)\n\n raw_replay_dir = config_dict['replay_dir']\n config_dict['replay_dir'] = _expand_path(raw_replay_dir)\n\n raw_cookies_dir = config_dict['cookiecutters_dir']\n config_dict['cookiecutters_dir'] = _expand_path(raw_cookies_dir)\n\n return config_dict\n\n\ndef get_user_config(config_file=None, default_config=False):\n \"\"\"Return the user config as a dict.\n\n If ``default_config`` is True, ignore ``config_file`` and return default\n values for the config parameters.\n\n If a path to a ``config_file`` is given, that is different from the default\n location, load the user config from that.\n\n Otherwise look up the config file path in the ``COOKIECUTTER_CONFIG``\n environment variable. If set, load the config from this path. This will\n raise an error if the specified path is not valid.\n\n If the environment variable is not set, try the default config file path\n before falling back to the default config values.\n \"\"\"\n # Do NOT load a config. Return defaults instead.\n if default_config:\n return copy.copy(DEFAULT_CONFIG)\n\n # Load the given config file\n if config_file and config_file is not USER_CONFIG_PATH:\n return get_config(config_file)\n\n try:\n # Does the user set up a config environment variable?\n env_config_file = os.environ['COOKIECUTTER_CONFIG']\n except KeyError:\n # Load an optional user config if it exists\n # otherwise return the defaults\n if os.path.exists(USER_CONFIG_PATH):\n return get_config(USER_CONFIG_PATH)\n else:\n return copy.copy(DEFAULT_CONFIG)\n else:\n # There is a config environment variable. Try to load it.\n # Do not check for existence, so invalid file paths raise an error.\n return get_config(env_config_file)\n", "path": "cookiecutter/config.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Global configuration handling.\"\"\"\n\nfrom __future__ import unicode_literals\nimport copy\nimport logging\nimport os\nimport io\nimport collections\n\nimport poyo\n\nfrom cookiecutter.exceptions import ConfigDoesNotExistException\nfrom cookiecutter.exceptions import InvalidConfiguration\n\n\nlogger = logging.getLogger(__name__)\n\nUSER_CONFIG_PATH = os.path.expanduser('~/.cookiecutterrc')\n\nBUILTIN_ABBREVIATIONS = {\n 'gh': 'https://github.com/{0}.git',\n 'gl': 'https://gitlab.com/{0}.git',\n 'bb': 'https://bitbucket.org/{0}',\n}\n\nDEFAULT_CONFIG = {\n 'cookiecutters_dir': os.path.expanduser('~/.cookiecutters/'),\n 'replay_dir': os.path.expanduser('~/.cookiecutter_replay/'),\n 'default_context': collections.OrderedDict([]),\n 'abbreviations': BUILTIN_ABBREVIATIONS,\n}\n\n\ndef _expand_path(path):\n \"\"\"Expand both environment variables and user home in the given path.\"\"\"\n path = os.path.expandvars(path)\n path = os.path.expanduser(path)\n return path\n\n\ndef merge_configs(default, overwrite):\n \"\"\"Recursively update a dict with the key/value pair of another.\n\n Dict values that are dictionaries themselves will be updated, whilst\n preserving existing keys.\n \"\"\"\n new_config = copy.deepcopy(default)\n\n for k, v in overwrite.items():\n # Make sure to preserve existing items in\n # nested dicts, for example `abbreviations`\n if isinstance(v, dict):\n new_config[k] = merge_configs(default[k], v)\n else:\n new_config[k] = v\n\n return new_config\n\n\ndef get_config(config_path):\n \"\"\"Retrieve the config from the specified path, returning a config dict.\"\"\"\n if not os.path.exists(config_path):\n raise 
ConfigDoesNotExistException\n\n logger.debug('config_path is %s', config_path)\n with io.open(config_path, encoding='utf-8') as file_handle:\n try:\n yaml_dict = poyo.parse_string(file_handle.read())\n except poyo.exceptions.PoyoException as e:\n raise InvalidConfiguration(\n 'Unable to parse YAML file {}. Error: {}'\n ''.format(config_path, e)\n )\n\n config_dict = merge_configs(DEFAULT_CONFIG, yaml_dict)\n\n raw_replay_dir = config_dict['replay_dir']\n config_dict['replay_dir'] = _expand_path(raw_replay_dir)\n\n raw_cookies_dir = config_dict['cookiecutters_dir']\n config_dict['cookiecutters_dir'] = _expand_path(raw_cookies_dir)\n\n return config_dict\n\n\ndef get_user_config(config_file=None, default_config=False):\n \"\"\"Return the user config as a dict.\n\n If ``default_config`` is True, ignore ``config_file`` and return default\n values for the config parameters.\n\n If a path to a ``config_file`` is given, that is different from the default\n location, load the user config from that.\n\n Otherwise look up the config file path in the ``COOKIECUTTER_CONFIG``\n environment variable. If set, load the config from this path. This will\n raise an error if the specified path is not valid.\n\n If the environment variable is not set, try the default config file path\n before falling back to the default config values.\n \"\"\"\n # Do NOT load a config. Return defaults instead.\n if default_config:\n logger.debug(\"Force ignoring user config with default_config switch.\")\n return copy.copy(DEFAULT_CONFIG)\n\n # Load the given config file\n if config_file and config_file is not USER_CONFIG_PATH:\n logger.debug(\"Loading custom config from %s.\", config_file)\n return get_config(config_file)\n\n try:\n # Does the user set up a config environment variable?\n env_config_file = os.environ['COOKIECUTTER_CONFIG']\n except KeyError:\n # Load an optional user config if it exists\n # otherwise return the defaults\n if os.path.exists(USER_CONFIG_PATH):\n logger.debug(\"Loading config from %s.\", USER_CONFIG_PATH)\n return get_config(USER_CONFIG_PATH)\n else:\n logger.debug(\"User config not found. Loading default config.\")\n return copy.copy(DEFAULT_CONFIG)\n else:\n # There is a config environment variable. Try to load it.\n # Do not check for existence, so invalid file paths raise an error.\n logger.debug(\"User config not found or not specified. Loading default config.\")\n return get_config(env_config_file)\n", "path": "cookiecutter/config.py"}]} | 1,469 | 288 |
gh_patches_debug_14365 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1084 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create a simple interface for fetching datatypes schemas on grand-challenge
**Problem**
The grand-challenge datatypes are currently only stored in the [gc-api](https://github.com/DIAGNijmegen/rse-gcapi/tree/master/gcapi/schemas) repository. However, the information is required by other libraries as well. Duplication of this information seems bad.
**Solution**
* [x] It would be nice to have this information in a central location like grand-challenge and provide a simple GET interface to allow the libraries /comic/evalutils and /DIAGNijmegen/rse-gcapi/ to fetch and cache this information (only a few kbs) from grand-challenge.
* [x] The answer type schemas should be added to the generated schema
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/reader_studies/serializers.py`
Content:
```
1 from rest_framework.exceptions import ValidationError
2 from rest_framework.fields import CharField
3 from rest_framework.relations import HyperlinkedRelatedField, SlugRelatedField
4 from rest_framework.serializers import (
5 HyperlinkedModelSerializer,
6 SerializerMethodField,
7 )
8
9 from grandchallenge.api.swagger import swagger_schema_fields_for_charfield
10 from grandchallenge.cases.models import Image
11 from grandchallenge.reader_studies.models import Answer, Question, ReaderStudy
12
13
14 class QuestionSerializer(HyperlinkedModelSerializer):
15 answer_type = CharField(source="get_answer_type_display")
16 reader_study = HyperlinkedRelatedField(
17 view_name="api:reader-study-detail", read_only=True
18 )
19 form_direction = CharField(source="get_direction_display")
20 image_port = CharField(source="get_image_port_display")
21
22 class Meta:
23 model = Question
24 fields = (
25 "answer_type",
26 "api_url",
27 "form_direction",
28 "help_text",
29 "image_port",
30 "pk",
31 "question_text",
32 "reader_study",
33 "required",
34 )
35 swagger_schema_fields = swagger_schema_fields_for_charfield(
36 answer_type=model._meta.get_field("answer_type"),
37 form_direction=model._meta.get_field(
38 "direction"
39 ), # model.direction gets remapped
40 image_port=model._meta.get_field("image_port"),
41 )
42
43
44 class ReaderStudySerializer(HyperlinkedModelSerializer):
45 questions = QuestionSerializer(many=True, read_only=True)
46 hanging_list_images = SerializerMethodField()
47
48 class Meta:
49 model = ReaderStudy
50 fields = (
51 "api_url",
52 "description",
53 "hanging_list_images",
54 "is_valid",
55 "pk",
56 "questions",
57 "title",
58 )
59
60 def get_hanging_list_images(self, obj: ReaderStudy):
61 """Used by hanging_list_images serializer field."""
62 return obj.get_hanging_list_images_for_user(
63 user=self.context["request"].user
64 )
65
66
67 class AnswerSerializer(HyperlinkedModelSerializer):
68 creator = SlugRelatedField(read_only=True, slug_field="username")
69 question = HyperlinkedRelatedField(
70 view_name="api:reader-studies-question-detail",
71 queryset=Question.objects.all(),
72 )
73 images = HyperlinkedRelatedField(
74 many=True, queryset=Image.objects.all(), view_name="api:image-detail"
75 )
76
77 def validate(self, attrs):
78 question = attrs["question"]
79 images = attrs["images"]
80 answer = attrs["answer"]
81 creator = self.context.get("request").user
82
83 if not question.reader_study.is_reader(user=creator):
84 raise ValidationError("This user is not a reader for this study.")
85
86 if not question.is_answer_valid(answer=answer):
87 raise ValidationError(
88 f"You answer is not the correct type. "
89 f"{question.get_answer_type_display()} expected, "
90 f"{type(answer)} found."
91 )
92
93 if len(images) == 0:
94 raise ValidationError(
95 "You must specify the images that this answer corresponds to."
96 )
97
98 reader_study_images = question.reader_study.images.all()
99 for im in images:
100 if im not in reader_study_images:
101 raise ValidationError(
102 f"Image {im} does not belong to this reader study."
103 )
104
105 if Answer.objects.filter(
106 creator=creator, question=question, images__in=images
107 ).exists():
108 raise ValidationError(
109 f"User {creator} has already answered this question "
110 f"for at least 1 of these images."
111 )
112
113 return attrs
114
115 class Meta:
116 model = Answer
117 fields = (
118 "answer",
119 "api_url",
120 "created",
121 "creator",
122 "images",
123 "pk",
124 "question",
125 )
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/reader_studies/serializers.py b/app/grandchallenge/reader_studies/serializers.py
--- a/app/grandchallenge/reader_studies/serializers.py
+++ b/app/grandchallenge/reader_studies/serializers.py
@@ -8,7 +8,12 @@
from grandchallenge.api.swagger import swagger_schema_fields_for_charfield
from grandchallenge.cases.models import Image
-from grandchallenge.reader_studies.models import Answer, Question, ReaderStudy
+from grandchallenge.reader_studies.models import (
+ ANSWER_TYPE_SCHEMA,
+ Answer,
+ Question,
+ ReaderStudy,
+)
class QuestionSerializer(HyperlinkedModelSerializer):
@@ -123,3 +128,6 @@
"pk",
"question",
)
+ swagger_schema_fields = {
+ "properties": {"answer": {"title": "Answer", **ANSWER_TYPE_SCHEMA}}
+ }
| {"golden_diff": "diff --git a/app/grandchallenge/reader_studies/serializers.py b/app/grandchallenge/reader_studies/serializers.py\n--- a/app/grandchallenge/reader_studies/serializers.py\n+++ b/app/grandchallenge/reader_studies/serializers.py\n@@ -8,7 +8,12 @@\n \n from grandchallenge.api.swagger import swagger_schema_fields_for_charfield\n from grandchallenge.cases.models import Image\n-from grandchallenge.reader_studies.models import Answer, Question, ReaderStudy\n+from grandchallenge.reader_studies.models import (\n+ ANSWER_TYPE_SCHEMA,\n+ Answer,\n+ Question,\n+ ReaderStudy,\n+)\n \n \n class QuestionSerializer(HyperlinkedModelSerializer):\n@@ -123,3 +128,6 @@\n \"pk\",\n \"question\",\n )\n+ swagger_schema_fields = {\n+ \"properties\": {\"answer\": {\"title\": \"Answer\", **ANSWER_TYPE_SCHEMA}}\n+ }\n", "issue": "Create a simple interface for fetching datatypes schemas on grand-challenge \n**Problem**\r\nThe grand-challenge datatypes are currently only stored in the [gc-api](https://github.com/DIAGNijmegen/rse-gcapi/tree/master/gcapi/schemas) repository. However, the information is required by other libraries as well. Duplication of this information seems bad.\r\n\r\n**Solution**\r\n* [x] It would be nice to have this information in a central location like grand-challenge and provide a simple GET interface to allow the libraries /comic/evalutils and /DIAGNijmegen/rse-gcapi/ to fetch and cache this information (only a few kbs) from grand-challenge.\r\n* [x] The answer type schemas should be added to the generated schema\r\n\n", "before_files": [{"content": "from rest_framework.exceptions import ValidationError\nfrom rest_framework.fields import CharField\nfrom rest_framework.relations import HyperlinkedRelatedField, SlugRelatedField\nfrom rest_framework.serializers import (\n HyperlinkedModelSerializer,\n SerializerMethodField,\n)\n\nfrom grandchallenge.api.swagger import swagger_schema_fields_for_charfield\nfrom grandchallenge.cases.models import Image\nfrom grandchallenge.reader_studies.models import Answer, Question, ReaderStudy\n\n\nclass QuestionSerializer(HyperlinkedModelSerializer):\n answer_type = CharField(source=\"get_answer_type_display\")\n reader_study = HyperlinkedRelatedField(\n view_name=\"api:reader-study-detail\", read_only=True\n )\n form_direction = CharField(source=\"get_direction_display\")\n image_port = CharField(source=\"get_image_port_display\")\n\n class Meta:\n model = Question\n fields = (\n \"answer_type\",\n \"api_url\",\n \"form_direction\",\n \"help_text\",\n \"image_port\",\n \"pk\",\n \"question_text\",\n \"reader_study\",\n \"required\",\n )\n swagger_schema_fields = swagger_schema_fields_for_charfield(\n answer_type=model._meta.get_field(\"answer_type\"),\n form_direction=model._meta.get_field(\n \"direction\"\n ), # model.direction gets remapped\n image_port=model._meta.get_field(\"image_port\"),\n )\n\n\nclass ReaderStudySerializer(HyperlinkedModelSerializer):\n questions = QuestionSerializer(many=True, read_only=True)\n hanging_list_images = SerializerMethodField()\n\n class Meta:\n model = ReaderStudy\n fields = (\n \"api_url\",\n \"description\",\n \"hanging_list_images\",\n \"is_valid\",\n \"pk\",\n \"questions\",\n \"title\",\n )\n\n def get_hanging_list_images(self, obj: ReaderStudy):\n \"\"\"Used by hanging_list_images serializer field.\"\"\"\n return obj.get_hanging_list_images_for_user(\n user=self.context[\"request\"].user\n )\n\n\nclass AnswerSerializer(HyperlinkedModelSerializer):\n creator = SlugRelatedField(read_only=True, 
slug_field=\"username\")\n question = HyperlinkedRelatedField(\n view_name=\"api:reader-studies-question-detail\",\n queryset=Question.objects.all(),\n )\n images = HyperlinkedRelatedField(\n many=True, queryset=Image.objects.all(), view_name=\"api:image-detail\"\n )\n\n def validate(self, attrs):\n question = attrs[\"question\"]\n images = attrs[\"images\"]\n answer = attrs[\"answer\"]\n creator = self.context.get(\"request\").user\n\n if not question.reader_study.is_reader(user=creator):\n raise ValidationError(\"This user is not a reader for this study.\")\n\n if not question.is_answer_valid(answer=answer):\n raise ValidationError(\n f\"You answer is not the correct type. \"\n f\"{question.get_answer_type_display()} expected, \"\n f\"{type(answer)} found.\"\n )\n\n if len(images) == 0:\n raise ValidationError(\n \"You must specify the images that this answer corresponds to.\"\n )\n\n reader_study_images = question.reader_study.images.all()\n for im in images:\n if im not in reader_study_images:\n raise ValidationError(\n f\"Image {im} does not belong to this reader study.\"\n )\n\n if Answer.objects.filter(\n creator=creator, question=question, images__in=images\n ).exists():\n raise ValidationError(\n f\"User {creator} has already answered this question \"\n f\"for at least 1 of these images.\"\n )\n\n return attrs\n\n class Meta:\n model = Answer\n fields = (\n \"answer\",\n \"api_url\",\n \"created\",\n \"creator\",\n \"images\",\n \"pk\",\n \"question\",\n )\n", "path": "app/grandchallenge/reader_studies/serializers.py"}], "after_files": [{"content": "from rest_framework.exceptions import ValidationError\nfrom rest_framework.fields import CharField\nfrom rest_framework.relations import HyperlinkedRelatedField, SlugRelatedField\nfrom rest_framework.serializers import (\n HyperlinkedModelSerializer,\n SerializerMethodField,\n)\n\nfrom grandchallenge.api.swagger import swagger_schema_fields_for_charfield\nfrom grandchallenge.cases.models import Image\nfrom grandchallenge.reader_studies.models import (\n ANSWER_TYPE_SCHEMA,\n Answer,\n Question,\n ReaderStudy,\n)\n\n\nclass QuestionSerializer(HyperlinkedModelSerializer):\n answer_type = CharField(source=\"get_answer_type_display\")\n reader_study = HyperlinkedRelatedField(\n view_name=\"api:reader-study-detail\", read_only=True\n )\n form_direction = CharField(source=\"get_direction_display\")\n image_port = CharField(source=\"get_image_port_display\")\n\n class Meta:\n model = Question\n fields = (\n \"answer_type\",\n \"api_url\",\n \"form_direction\",\n \"help_text\",\n \"image_port\",\n \"pk\",\n \"question_text\",\n \"reader_study\",\n \"required\",\n )\n swagger_schema_fields = swagger_schema_fields_for_charfield(\n answer_type=model._meta.get_field(\"answer_type\"),\n form_direction=model._meta.get_field(\n \"direction\"\n ), # model.direction gets remapped\n image_port=model._meta.get_field(\"image_port\"),\n )\n\n\nclass ReaderStudySerializer(HyperlinkedModelSerializer):\n questions = QuestionSerializer(many=True, read_only=True)\n hanging_list_images = SerializerMethodField()\n\n class Meta:\n model = ReaderStudy\n fields = (\n \"api_url\",\n \"description\",\n \"hanging_list_images\",\n \"is_valid\",\n \"pk\",\n \"questions\",\n \"title\",\n )\n\n def get_hanging_list_images(self, obj: ReaderStudy):\n \"\"\"Used by hanging_list_images serializer field.\"\"\"\n return obj.get_hanging_list_images_for_user(\n user=self.context[\"request\"].user\n )\n\n\nclass AnswerSerializer(HyperlinkedModelSerializer):\n creator = 
SlugRelatedField(read_only=True, slug_field=\"username\")\n question = HyperlinkedRelatedField(\n view_name=\"api:reader-studies-question-detail\",\n queryset=Question.objects.all(),\n )\n images = HyperlinkedRelatedField(\n many=True, queryset=Image.objects.all(), view_name=\"api:image-detail\"\n )\n\n def validate(self, attrs):\n question = attrs[\"question\"]\n images = attrs[\"images\"]\n answer = attrs[\"answer\"]\n creator = self.context.get(\"request\").user\n\n if not question.reader_study.is_reader(user=creator):\n raise ValidationError(\"This user is not a reader for this study.\")\n\n if not question.is_answer_valid(answer=answer):\n raise ValidationError(\n f\"You answer is not the correct type. \"\n f\"{question.get_answer_type_display()} expected, \"\n f\"{type(answer)} found.\"\n )\n\n if len(images) == 0:\n raise ValidationError(\n \"You must specify the images that this answer corresponds to.\"\n )\n\n reader_study_images = question.reader_study.images.all()\n for im in images:\n if im not in reader_study_images:\n raise ValidationError(\n f\"Image {im} does not belong to this reader study.\"\n )\n\n if Answer.objects.filter(\n creator=creator, question=question, images__in=images\n ).exists():\n raise ValidationError(\n f\"User {creator} has already answered this question \"\n f\"for at least 1 of these images.\"\n )\n\n return attrs\n\n class Meta:\n model = Answer\n fields = (\n \"answer\",\n \"api_url\",\n \"created\",\n \"creator\",\n \"images\",\n \"pk\",\n \"question\",\n )\n swagger_schema_fields = {\n \"properties\": {\"answer\": {\"title\": \"Answer\", **ANSWER_TYPE_SCHEMA}}\n }\n", "path": "app/grandchallenge/reader_studies/serializers.py"}]} | 1,497 | 205 |
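To make the grand-challenge fix above concrete: the golden diff imports `ANSWER_TYPE_SCHEMA` and attaches it to `AnswerSerializer.Meta.swagger_schema_fields` so the answer-type schemas appear in the generated API schema. A runnable, framework-free sketch of that wiring follows; the schema body is a made-up subset, not the real grand-challenge definition.

```python
# Hypothetical, trimmed stand-in for grand-challenge's real ANSWER_TYPE_SCHEMA.
ANSWER_TYPE_SCHEMA = {
    "anyOf": [
        {"type": "string"},
        {"type": "boolean"},
        {"type": "number"},
    ]
}


def swagger_schema_fields_for_answer(schema):
    """Build the dict the patch assigns to AnswerSerializer.Meta."""
    return {"properties": {"answer": {"title": "Answer", **schema}}}


# Clients such as evalutils or gcapi can then fetch this structure over a
# plain GET of the generated schema instead of duplicating it.
print(swagger_schema_fields_for_answer(ANSWER_TYPE_SCHEMA))
```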
gh_patches_debug_21733 | rasdani/github-patches | git_diff | getredash__redash-3619 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support for Presto username and password
Currently, the Presto query runner supports username only. We should support password as well.
This probably requires upgrading the PyHive library.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/query_runner/presto.py`
Content:
```
1 from redash.query_runner import *
2 from redash.utils import json_dumps, json_loads
3
4 import logging
5 logger = logging.getLogger(__name__)
6
7 from collections import defaultdict
8
9 try:
10 from pyhive import presto
11 from pyhive.exc import DatabaseError
12 enabled = True
13
14 except ImportError:
15 enabled = False
16
17 PRESTO_TYPES_MAPPING = {
18 "integer": TYPE_INTEGER,
19 "tinyint": TYPE_INTEGER,
20 "smallint": TYPE_INTEGER,
21 "long": TYPE_INTEGER,
22 "bigint": TYPE_INTEGER,
23 "float": TYPE_FLOAT,
24 "double": TYPE_FLOAT,
25 "boolean": TYPE_BOOLEAN,
26 "string": TYPE_STRING,
27 "varchar": TYPE_STRING,
28 "date": TYPE_DATE,
29 }
30
31
32 class Presto(BaseQueryRunner):
33 noop_query = 'SHOW TABLES'
34
35 @classmethod
36 def configuration_schema(cls):
37 return {
38 'type': 'object',
39 'properties': {
40 'host': {
41 'type': 'string'
42 },
43 'protocol': {
44 'type': 'string',
45 'default': 'http'
46 },
47 'port': {
48 'type': 'number'
49 },
50 'schema': {
51 'type': 'string'
52 },
53 'catalog': {
54 'type': 'string'
55 },
56 'username': {
57 'type': 'string'
58 },
59 },
60 'order': ['host', 'protocol', 'port', 'username', 'schema', 'catalog'],
61 'required': ['host']
62 }
63
64 @classmethod
65 def enabled(cls):
66 return enabled
67
68 @classmethod
69 def type(cls):
70 return "presto"
71
72 def get_schema(self, get_stats=False):
73 schema = {}
74 query = """
75 SELECT table_schema, table_name, column_name
76 FROM information_schema.columns
77 WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
78 """
79
80 results, error = self.run_query(query, None)
81
82 if error is not None:
83 raise Exception("Failed getting schema.")
84
85 results = json_loads(results)
86
87 for row in results['rows']:
88 table_name = '{}.{}'.format(row['table_schema'], row['table_name'])
89
90 if table_name not in schema:
91 schema[table_name] = {'name': table_name, 'columns': []}
92
93 schema[table_name]['columns'].append(row['column_name'])
94
95 return schema.values()
96
97 def run_query(self, query, user):
98 connection = presto.connect(
99 host=self.configuration.get('host', ''),
100 port=self.configuration.get('port', 8080),
101 protocol=self.configuration.get('protocol', 'http'),
102 username=self.configuration.get('username', 'redash'),
103 catalog=self.configuration.get('catalog', 'hive'),
104 schema=self.configuration.get('schema', 'default'))
105
106 cursor = connection.cursor()
107
108
109 try:
110 cursor.execute(query)
111 column_tuples = [(i[0], PRESTO_TYPES_MAPPING.get(i[1], None)) for i in cursor.description]
112 columns = self.fetch_columns(column_tuples)
113 rows = [dict(zip(([c['name'] for c in columns]), r)) for i, r in enumerate(cursor.fetchall())]
114 data = {'columns': columns, 'rows': rows}
115 json_data = json_dumps(data)
116 error = None
117 except DatabaseError as db:
118 json_data = None
119 default_message = 'Unspecified DatabaseError: {0}'.format(db.message)
120 if isinstance(db.message, dict):
121 message = db.message.get('failureInfo', {'message', None}).get('message')
122 else:
123 message = None
124 error = default_message if message is None else message
125 except (KeyboardInterrupt, InterruptException) as e:
126 cursor.cancel()
127 error = "Query cancelled by user."
128 json_data = None
129 except Exception as ex:
130 json_data = None
131 error = ex.message
132 if not isinstance(error, basestring):
133 error = unicode(error)
134
135 return json_data, error
136
137 register(Presto)
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redash/query_runner/presto.py b/redash/query_runner/presto.py
--- a/redash/query_runner/presto.py
+++ b/redash/query_runner/presto.py
@@ -56,8 +56,11 @@
'username': {
'type': 'string'
},
+ 'password': {
+ 'type': 'string'
+ },
},
- 'order': ['host', 'protocol', 'port', 'username', 'schema', 'catalog'],
+ 'order': ['host', 'protocol', 'port', 'username', 'password', 'schema', 'catalog'],
'required': ['host']
}
@@ -100,6 +103,7 @@
port=self.configuration.get('port', 8080),
protocol=self.configuration.get('protocol', 'http'),
username=self.configuration.get('username', 'redash'),
+ password=self.configuration.get('password', ''),
catalog=self.configuration.get('catalog', 'hive'),
schema=self.configuration.get('schema', 'default'))
| {"golden_diff": "diff --git a/redash/query_runner/presto.py b/redash/query_runner/presto.py\n--- a/redash/query_runner/presto.py\n+++ b/redash/query_runner/presto.py\n@@ -56,8 +56,11 @@\n 'username': {\n 'type': 'string'\n },\n+ 'password': {\n+ 'type': 'string'\n+ },\n },\n- 'order': ['host', 'protocol', 'port', 'username', 'schema', 'catalog'],\n+ 'order': ['host', 'protocol', 'port', 'username', 'password', 'schema', 'catalog'],\n 'required': ['host']\n }\n \n@@ -100,6 +103,7 @@\n port=self.configuration.get('port', 8080),\n protocol=self.configuration.get('protocol', 'http'),\n username=self.configuration.get('username', 'redash'),\n+ password=self.configuration.get('password', ''),\n catalog=self.configuration.get('catalog', 'hive'),\n schema=self.configuration.get('schema', 'default'))\n", "issue": "Support for Presto username and password\nCurrently the Presto query runner supports username only. We should support password as well.\r\n\r\nThis probably requires upgrading the PyHive library.\n", "before_files": [{"content": "from redash.query_runner import *\nfrom redash.utils import json_dumps, json_loads\n\nimport logging\nlogger = logging.getLogger(__name__)\n\nfrom collections import defaultdict\n\ntry:\n from pyhive import presto\n from pyhive.exc import DatabaseError\n enabled = True\n\nexcept ImportError:\n enabled = False\n\nPRESTO_TYPES_MAPPING = {\n \"integer\": TYPE_INTEGER,\n \"tinyint\": TYPE_INTEGER,\n \"smallint\": TYPE_INTEGER,\n \"long\": TYPE_INTEGER,\n \"bigint\": TYPE_INTEGER,\n \"float\": TYPE_FLOAT,\n \"double\": TYPE_FLOAT,\n \"boolean\": TYPE_BOOLEAN,\n \"string\": TYPE_STRING,\n \"varchar\": TYPE_STRING,\n \"date\": TYPE_DATE,\n}\n\n\nclass Presto(BaseQueryRunner):\n noop_query = 'SHOW TABLES'\n\n @classmethod\n def configuration_schema(cls):\n return {\n 'type': 'object',\n 'properties': {\n 'host': {\n 'type': 'string'\n },\n 'protocol': {\n 'type': 'string',\n 'default': 'http'\n },\n 'port': {\n 'type': 'number'\n },\n 'schema': {\n 'type': 'string'\n },\n 'catalog': {\n 'type': 'string'\n },\n 'username': {\n 'type': 'string'\n },\n },\n 'order': ['host', 'protocol', 'port', 'username', 'schema', 'catalog'],\n 'required': ['host']\n }\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def type(cls):\n return \"presto\"\n\n def get_schema(self, get_stats=False):\n schema = {}\n query = \"\"\"\n SELECT table_schema, table_name, column_name\n FROM information_schema.columns\n WHERE table_schema NOT IN ('pg_catalog', 'information_schema')\n \"\"\"\n\n results, error = self.run_query(query, None)\n\n if error is not None:\n raise Exception(\"Failed getting schema.\")\n\n results = json_loads(results)\n\n for row in results['rows']:\n table_name = '{}.{}'.format(row['table_schema'], row['table_name'])\n\n if table_name not in schema:\n schema[table_name] = {'name': table_name, 'columns': []}\n\n schema[table_name]['columns'].append(row['column_name'])\n\n return schema.values()\n\n def run_query(self, query, user):\n connection = presto.connect(\n host=self.configuration.get('host', ''),\n port=self.configuration.get('port', 8080),\n protocol=self.configuration.get('protocol', 'http'),\n username=self.configuration.get('username', 'redash'),\n catalog=self.configuration.get('catalog', 'hive'),\n schema=self.configuration.get('schema', 'default'))\n\n cursor = connection.cursor()\n\n\n try:\n cursor.execute(query)\n column_tuples = [(i[0], PRESTO_TYPES_MAPPING.get(i[1], None)) for i in cursor.description]\n columns = 
self.fetch_columns(column_tuples)\n rows = [dict(zip(([c['name'] for c in columns]), r)) for i, r in enumerate(cursor.fetchall())]\n data = {'columns': columns, 'rows': rows}\n json_data = json_dumps(data)\n error = None\n except DatabaseError as db:\n json_data = None\n default_message = 'Unspecified DatabaseError: {0}'.format(db.message)\n if isinstance(db.message, dict):\n message = db.message.get('failureInfo', {'message', None}).get('message')\n else:\n message = None\n error = default_message if message is None else message\n except (KeyboardInterrupt, InterruptException) as e:\n cursor.cancel()\n error = \"Query cancelled by user.\"\n json_data = None\n except Exception as ex:\n json_data = None\n error = ex.message\n if not isinstance(error, basestring):\n error = unicode(error)\n\n return json_data, error\n\nregister(Presto)\n", "path": "redash/query_runner/presto.py"}], "after_files": [{"content": "from redash.query_runner import *\nfrom redash.utils import json_dumps, json_loads\n\nimport logging\nlogger = logging.getLogger(__name__)\n\nfrom collections import defaultdict\n\ntry:\n from pyhive import presto\n from pyhive.exc import DatabaseError\n enabled = True\n\nexcept ImportError:\n enabled = False\n\nPRESTO_TYPES_MAPPING = {\n \"integer\": TYPE_INTEGER,\n \"tinyint\": TYPE_INTEGER,\n \"smallint\": TYPE_INTEGER,\n \"long\": TYPE_INTEGER,\n \"bigint\": TYPE_INTEGER,\n \"float\": TYPE_FLOAT,\n \"double\": TYPE_FLOAT,\n \"boolean\": TYPE_BOOLEAN,\n \"string\": TYPE_STRING,\n \"varchar\": TYPE_STRING,\n \"date\": TYPE_DATE,\n}\n\n\nclass Presto(BaseQueryRunner):\n noop_query = 'SHOW TABLES'\n\n @classmethod\n def configuration_schema(cls):\n return {\n 'type': 'object',\n 'properties': {\n 'host': {\n 'type': 'string'\n },\n 'protocol': {\n 'type': 'string',\n 'default': 'http'\n },\n 'port': {\n 'type': 'number'\n },\n 'schema': {\n 'type': 'string'\n },\n 'catalog': {\n 'type': 'string'\n },\n 'username': {\n 'type': 'string'\n },\n 'password': {\n 'type': 'string'\n },\n },\n 'order': ['host', 'protocol', 'port', 'username', 'password', 'schema', 'catalog'],\n 'required': ['host']\n }\n\n @classmethod\n def enabled(cls):\n return enabled\n\n @classmethod\n def type(cls):\n return \"presto\"\n\n def get_schema(self, get_stats=False):\n schema = {}\n query = \"\"\"\n SELECT table_schema, table_name, column_name\n FROM information_schema.columns\n WHERE table_schema NOT IN ('pg_catalog', 'information_schema')\n \"\"\"\n\n results, error = self.run_query(query, None)\n\n if error is not None:\n raise Exception(\"Failed getting schema.\")\n\n results = json_loads(results)\n\n for row in results['rows']:\n table_name = '{}.{}'.format(row['table_schema'], row['table_name'])\n\n if table_name not in schema:\n schema[table_name] = {'name': table_name, 'columns': []}\n\n schema[table_name]['columns'].append(row['column_name'])\n\n return schema.values()\n\n def run_query(self, query, user):\n connection = presto.connect(\n host=self.configuration.get('host', ''),\n port=self.configuration.get('port', 8080),\n protocol=self.configuration.get('protocol', 'http'),\n username=self.configuration.get('username', 'redash'),\n password=self.configuration.get('password', ''),\n catalog=self.configuration.get('catalog', 'hive'),\n schema=self.configuration.get('schema', 'default'))\n\n cursor = connection.cursor()\n\n\n try:\n cursor.execute(query)\n column_tuples = [(i[0], PRESTO_TYPES_MAPPING.get(i[1], None)) for i in cursor.description]\n columns = self.fetch_columns(column_tuples)\n rows = 
[dict(zip(([c['name'] for c in columns]), r)) for i, r in enumerate(cursor.fetchall())]\n data = {'columns': columns, 'rows': rows}\n json_data = json_dumps(data)\n error = None\n except DatabaseError as db:\n json_data = None\n default_message = 'Unspecified DatabaseError: {0}'.format(db.message)\n if isinstance(db.message, dict):\n message = db.message.get('failureInfo', {'message', None}).get('message')\n else:\n message = None\n error = default_message if message is None else message\n except (KeyboardInterrupt, InterruptException) as e:\n cursor.cancel()\n error = \"Query cancelled by user.\"\n json_data = None\n except Exception as ex:\n json_data = None\n error = ex.message\n if not isinstance(error, basestring):\n error = unicode(error)\n\n return json_data, error\n\nregister(Presto)\n", "path": "redash/query_runner/presto.py"}]} | 1,471 | 228 |
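The redash fix above simply threads a `password` setting through to PyHive. A small sketch of that plumbing, with `presto_connection_kwargs` as an invented helper name; note that PyHive versions recent enough to accept a password generally require `protocol='https'` when one is set.

```python
def presto_connection_kwargs(configuration):
    """Assemble the kwargs run_query() passes to pyhive's presto.connect()."""
    return dict(
        host=configuration.get("host", ""),
        port=configuration.get("port", 8080),
        protocol=configuration.get("protocol", "http"),
        username=configuration.get("username", "redash"),
        password=configuration.get("password", ""),  # the newly supported field
        catalog=configuration.get("catalog", "hive"),
        schema=configuration.get("schema", "default"),
    )


print(presto_connection_kwargs({"host": "presto.internal",
                                "protocol": "https",
                                "username": "alice",
                                "password": "s3cret"}))
```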
gh_patches_debug_21440 | rasdani/github-patches | git_diff | UTNkar__moore-120 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Application drafts in limbo
<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->
### Prerequisites
* [x] Put an X between the brackets on this line if you have done all of the following:
* Reproduced the problem with clear cache.
    * (If running the application locally:) Made sure you're running the newest version on the development branch
* Checked that your issue isn't already filed: https://github.com/UTNkar/moore/issues
### Description
The problem has not been reproduced, but it is the kind of problem that will probably occur anyway.
When a user starts an application draft and forgets to submit it, the person cannot be appointed to the position with the overturn function. If the application for a position is still saved as a draft when the application period ends, the application becomes stuck in limbo. Before the application period has ended, the group administrator can see that there is a draft for the person. When the application period ends, the draft is no longer visible among the submitted applications (very reasonable). The problem occurs when the administrator wants to appoint that person anyway with the overturn function: an error message appears saying "You can not appoint this person since an application has been submitted". This should not be the case; a draft application should not be treated as a submitted application by the overturn function. The user can neither see nor delete the draft application after the application period has ended.
Quick fix: an application system administrator can access the applications and set the status to submitted.
### Steps to Reproduce
1. Apply to an position and save the application as draft
2. Wait for the application time to end
3. Go to appoint for the given position
4. Try overturn with the user who saved the application as draft
5. Error message occurs
<!-- Please select the appropriate "topic category"/blue and "issue type"/yellow label -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/involvement/forms.py`
Content:
```
1 from django import forms
2 from django.contrib.auth import get_user_model
3 from django.utils.translation import ugettext_lazy as _
4
5 from involvement.models import Application, Reference
6 from utils.forms import AdvancedModelMultipleChoiceField
7
8
9 class ApplicationForm(forms.ModelForm):
10 class Meta:
11 model = Application
12 exclude = ['position', 'applicant']
13 widgets = {
14 'cover_letter': forms.Textarea(attrs={'style': 'height: 200px',
15 'class': 'form-control'}),
16 'qualifications': forms.Textarea(attrs={'style': 'height: 200px',
17 'class': 'form-control'}),
18 }
19
20 def clean_status(self):
21 status = self.cleaned_data['status']
22 if status not in ['draft', 'submitted'] \
23 or (self.initial['status'] == 'submitted'
24 and status == 'draft'):
25 raise forms.ValidationError(_('The submitted status was invalid.'))
26 return status
27
28
29 ReferenceFormSet = forms.inlineformset_factory(
30 Application,
31 Reference,
32 fields=('name', 'position', 'email', 'phone_number', 'comment'),
33 widgets={
34 'name': forms.TextInput(attrs={'class': 'form-control'}),
35 'position': forms.TextInput(attrs={'class': 'form-control'}),
36 'email': forms.TextInput(attrs={'class': 'form-control'}),
37 'phone_number': forms.TextInput(attrs={'class': 'form-control'}),
38 'comment': forms.TextInput(attrs={'class': 'form-control'}),
39 },
40 extra=0,
41 )
42
43
44 class ApprovalForm(forms.ModelForm):
45 status = forms.ChoiceField(
46 choices=(
47 ('submitted', '---------'),
48 ('approved', _('Approved')),
49 ('disapproved', _('Disapproved')),
50 ),
51 )
52
53 class Meta:
54 model = Application
55 fields = []
56
57 def clean_status(self):
58 status = self.cleaned_data['status']
59 if status not in ['submitted', 'approved', 'disapproved']:
60 raise forms.ValidationError(_('The submitted status was invalid.'))
61 return status
62
63 def save(self, commit=True):
64 self.instance.status = self.cleaned_data['status']
65
66 super(ApprovalForm, self).save(commit)
67
68
69 class AppointmentForm(forms.Form):
70 appoint = AdvancedModelMultipleChoiceField(
71 Application.objects.none(),
72 widget=forms.CheckboxSelectMultiple(),
73 required=False,
74 )
75 overturn = forms.CharField(
76 required=False,
77 label=_('Overturn'),
78 help_text=_('Enter a comma separated list of users you want to '
79 'appoint to the position, even though did not apply for '
80 'the position.')
81 )
82
83 def __init__(self, position, *args, **kwargs):
84 super(AppointmentForm, self).__init__(*args, **kwargs)
85 self.position = position
86 self.fields['appoint'].queryset = position.applications.filter(
87 status__in=['submitted', 'approved', 'appointed', 'turned_down']
88 )
89 self.initial['appoint'] = position.applications.filter(
90 status='appointed'
91 )
92
93 def clean_overturn(self):
94 string = self.cleaned_data['overturn']
95 string = string.replace(' ', '')
96 if string == '':
97 return []
98 else:
99 users = string.split(',')
100 for u in users:
101 if not get_user_model().objects.filter(
102 username=u
103 ).exists():
104 raise forms.ValidationError(
105 _('No user with the username %(user)s exists.'),
106 params={'user': u},
107 )
108 elif self.position.applications.filter(
109 applicant__username=u
110 ).exists():
111 raise forms.ValidationError(
112 _('User %(user)s already applied for this position '
113 'and can not be appointed through the overturn '
114 'field.'),
115 params={'user': u},
116 )
117 return users
118
119 def clean(self):
120 super(AppointmentForm, self).clean()
121 appoint = self.cleaned_data.get('appoint', [])
122 overturn = self.cleaned_data.get('overturn', [])
123 nr_appointment = len(appoint) + len(overturn)
124 if nr_appointment > self.position.appointments:
125 raise forms.ValidationError(
126 _('You cannot appoint %(current)s applicants. The maximum '
127 'for this position is %(max)s.'),
128 params={
129 'current': nr_appointment,
130 'max': self.position.appointments,
131 },
132 )
133 return self.cleaned_data
134
135 def save(self):
136 for application in self.fields['appoint'].queryset:
137 if application in self.cleaned_data['appoint']:
138 application.status = 'appointed'
139 else:
140 application.status = 'turned_down'
141 application.save()
142
143 for user in self.cleaned_data['overturn']:
144 user = get_user_model().objects.get(
145 username=user
146 )
147 Application.objects.create(
148 position=self.position,
149 applicant=user,
150 status='appointed',
151 )
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/involvement/forms.py b/website/involvement/forms.py
--- a/website/involvement/forms.py
+++ b/website/involvement/forms.py
@@ -107,6 +107,8 @@
)
elif self.position.applications.filter(
applicant__username=u
+ ).exclude(
+ status='draft'
).exists():
raise forms.ValidationError(
_('User %(user)s already applied for this position '
@@ -144,8 +146,11 @@
user = get_user_model().objects.get(
username=user
)
- Application.objects.create(
+ appl, created = Application.objects.get_or_create(
position=self.position,
applicant=user,
- status='appointed',
+ defaults={'status': 'appointed'}
)
+ if not created:
+ appl.status = 'appointed'
+ appl.save()
| {"golden_diff": "diff --git a/website/involvement/forms.py b/website/involvement/forms.py\n--- a/website/involvement/forms.py\n+++ b/website/involvement/forms.py\n@@ -107,6 +107,8 @@\n )\n elif self.position.applications.filter(\n applicant__username=u\n+ ).exclude(\n+ status='draft'\n ).exists():\n raise forms.ValidationError(\n _('User %(user)s already applied for this position '\n@@ -144,8 +146,11 @@\n user = get_user_model().objects.get(\n username=user\n )\n- Application.objects.create(\n+ appl, created = Application.objects.get_or_create(\n position=self.position,\n applicant=user,\n- status='appointed',\n+ defaults={'status': 'appointed'}\n )\n+ if not created:\n+ appl.status = 'appointed'\n+ appl.save()\n", "issue": "Application drafts in limbo\n<!-- Do you want to ask a question? Are you looking for support? The system administrator can help you: [email protected] -->\r\n\r\n### Prerequisites\r\n\r\n* [x] Put an X between the brackets on this line if you have done all of the\r\nfollowing:\r\n * Reproduced the problem with clear cache.\r\n * (If running the application locally:) Made sure your running the newest version on the development branch\r\n * Checked that your issue isn't already filed: https://github.com/UTNkar/moore/issues\r\n\r\n### Description\r\n\r\nThe problem has not been reproduced but it's the kind of problem that probably will occur any way.\r\n\r\nWhen a user starts an application draft and forgets to submit it the person can not be appointed to the position with the overturn function. If the application for a position has been saved as draft when the application period ends the application becomes stuck in limbo. The group administrator can before the application period has ended see that there is a draft for the person. When the application period ends the draft is not visible among the submitted applications (very resonable). The problem occurs when the administrator wants to appoint that person anyway with the overturn function then an error message occurs saying. \"You can not appoint this person since an application has been submitted\". This should not be the case, a draft application should not be seen as a submitted application by the overturn function. The user can not see or delete the draft application after the application period has ended.\r\n\r\nQuick fix, an application system administrator can access the applications and set the status to submitted. \r\n\r\n### Steps to Reproduce\r\n\r\n1. Apply to an position and save the application as draft\r\n2. Wait for the application time to end\r\n3. Go to appoint for the given position\r\n4. Try overturn with the user who saved the application as draft\r\n5. 
Error message occurs \r\n\r\n<!-- Please select the appropriate \"topic category\"/blue and \"issue type\"/yellow label -->\n", "before_files": [{"content": "from django import forms\nfrom django.contrib.auth import get_user_model\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom involvement.models import Application, Reference\nfrom utils.forms import AdvancedModelMultipleChoiceField\n\n\nclass ApplicationForm(forms.ModelForm):\n class Meta:\n model = Application\n exclude = ['position', 'applicant']\n widgets = {\n 'cover_letter': forms.Textarea(attrs={'style': 'height: 200px',\n 'class': 'form-control'}),\n 'qualifications': forms.Textarea(attrs={'style': 'height: 200px',\n 'class': 'form-control'}),\n }\n\n def clean_status(self):\n status = self.cleaned_data['status']\n if status not in ['draft', 'submitted'] \\\n or (self.initial['status'] == 'submitted'\n and status == 'draft'):\n raise forms.ValidationError(_('The submitted status was invalid.'))\n return status\n\n\nReferenceFormSet = forms.inlineformset_factory(\n Application,\n Reference,\n fields=('name', 'position', 'email', 'phone_number', 'comment'),\n widgets={\n 'name': forms.TextInput(attrs={'class': 'form-control'}),\n 'position': forms.TextInput(attrs={'class': 'form-control'}),\n 'email': forms.TextInput(attrs={'class': 'form-control'}),\n 'phone_number': forms.TextInput(attrs={'class': 'form-control'}),\n 'comment': forms.TextInput(attrs={'class': 'form-control'}),\n },\n extra=0,\n)\n\n\nclass ApprovalForm(forms.ModelForm):\n status = forms.ChoiceField(\n choices=(\n ('submitted', '---------'),\n ('approved', _('Approved')),\n ('disapproved', _('Disapproved')),\n ),\n )\n\n class Meta:\n model = Application\n fields = []\n\n def clean_status(self):\n status = self.cleaned_data['status']\n if status not in ['submitted', 'approved', 'disapproved']:\n raise forms.ValidationError(_('The submitted status was invalid.'))\n return status\n\n def save(self, commit=True):\n self.instance.status = self.cleaned_data['status']\n\n super(ApprovalForm, self).save(commit)\n\n\nclass AppointmentForm(forms.Form):\n appoint = AdvancedModelMultipleChoiceField(\n Application.objects.none(),\n widget=forms.CheckboxSelectMultiple(),\n required=False,\n )\n overturn = forms.CharField(\n required=False,\n label=_('Overturn'),\n help_text=_('Enter a comma separated list of users you want to '\n 'appoint to the position, even though did not apply for '\n 'the position.')\n )\n\n def __init__(self, position, *args, **kwargs):\n super(AppointmentForm, self).__init__(*args, **kwargs)\n self.position = position\n self.fields['appoint'].queryset = position.applications.filter(\n status__in=['submitted', 'approved', 'appointed', 'turned_down']\n )\n self.initial['appoint'] = position.applications.filter(\n status='appointed'\n )\n\n def clean_overturn(self):\n string = self.cleaned_data['overturn']\n string = string.replace(' ', '')\n if string == '':\n return []\n else:\n users = string.split(',')\n for u in users:\n if not get_user_model().objects.filter(\n username=u\n ).exists():\n raise forms.ValidationError(\n _('No user with the username %(user)s exists.'),\n params={'user': u},\n )\n elif self.position.applications.filter(\n applicant__username=u\n ).exists():\n raise forms.ValidationError(\n _('User %(user)s already applied for this position '\n 'and can not be appointed through the overturn '\n 'field.'),\n params={'user': u},\n )\n return users\n\n def clean(self):\n super(AppointmentForm, self).clean()\n appoint = 
self.cleaned_data.get('appoint', [])\n overturn = self.cleaned_data.get('overturn', [])\n nr_appointment = len(appoint) + len(overturn)\n if nr_appointment > self.position.appointments:\n raise forms.ValidationError(\n _('You cannot appoint %(current)s applicants. The maximum '\n 'for this position is %(max)s.'),\n params={\n 'current': nr_appointment,\n 'max': self.position.appointments,\n },\n )\n return self.cleaned_data\n\n def save(self):\n for application in self.fields['appoint'].queryset:\n if application in self.cleaned_data['appoint']:\n application.status = 'appointed'\n else:\n application.status = 'turned_down'\n application.save()\n\n for user in self.cleaned_data['overturn']:\n user = get_user_model().objects.get(\n username=user\n )\n Application.objects.create(\n position=self.position,\n applicant=user,\n status='appointed',\n )\n", "path": "website/involvement/forms.py"}], "after_files": [{"content": "from django import forms\nfrom django.contrib.auth import get_user_model\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom involvement.models import Application, Reference\nfrom utils.forms import AdvancedModelMultipleChoiceField\n\n\nclass ApplicationForm(forms.ModelForm):\n class Meta:\n model = Application\n exclude = ['position', 'applicant']\n widgets = {\n 'cover_letter': forms.Textarea(attrs={'style': 'height: 200px',\n 'class': 'form-control'}),\n 'qualifications': forms.Textarea(attrs={'style': 'height: 200px',\n 'class': 'form-control'}),\n }\n\n def clean_status(self):\n status = self.cleaned_data['status']\n if status not in ['draft', 'submitted'] \\\n or (self.initial['status'] == 'submitted'\n and status == 'draft'):\n raise forms.ValidationError(_('The submitted status was invalid.'))\n return status\n\n\nReferenceFormSet = forms.inlineformset_factory(\n Application,\n Reference,\n fields=('name', 'position', 'email', 'phone_number', 'comment'),\n widgets={\n 'name': forms.TextInput(attrs={'class': 'form-control'}),\n 'position': forms.TextInput(attrs={'class': 'form-control'}),\n 'email': forms.TextInput(attrs={'class': 'form-control'}),\n 'phone_number': forms.TextInput(attrs={'class': 'form-control'}),\n 'comment': forms.TextInput(attrs={'class': 'form-control'}),\n },\n extra=0,\n)\n\n\nclass ApprovalForm(forms.ModelForm):\n status = forms.ChoiceField(\n choices=(\n ('submitted', '---------'),\n ('approved', _('Approved')),\n ('disapproved', _('Disapproved')),\n ),\n )\n\n class Meta:\n model = Application\n fields = []\n\n def clean_status(self):\n status = self.cleaned_data['status']\n if status not in ['submitted', 'approved', 'disapproved']:\n raise forms.ValidationError(_('The submitted status was invalid.'))\n return status\n\n def save(self, commit=True):\n self.instance.status = self.cleaned_data['status']\n\n super(ApprovalForm, self).save(commit)\n\n\nclass AppointmentForm(forms.Form):\n appoint = AdvancedModelMultipleChoiceField(\n Application.objects.none(),\n widget=forms.CheckboxSelectMultiple(),\n required=False,\n )\n overturn = forms.CharField(\n required=False,\n label=_('Overturn'),\n help_text=_('Enter a comma separated list of users you want to '\n 'appoint to the position, even though did not apply for '\n 'the position.')\n )\n\n def __init__(self, position, *args, **kwargs):\n super(AppointmentForm, self).__init__(*args, **kwargs)\n self.position = position\n self.fields['appoint'].queryset = position.applications.filter(\n status__in=['submitted', 'approved', 'appointed', 'turned_down']\n )\n self.initial['appoint'] 
= position.applications.filter(\n status='appointed'\n )\n\n def clean_overturn(self):\n string = self.cleaned_data['overturn']\n string = string.replace(' ', '')\n if string == '':\n return []\n else:\n users = string.split(',')\n for u in users:\n if not get_user_model().objects.filter(\n username=u\n ).exists():\n raise forms.ValidationError(\n _('No user with the username %(user)s exists.'),\n params={'user': u},\n )\n elif self.position.applications.filter(\n applicant__username=u\n ).exclude(\n status='draft'\n ).exists():\n raise forms.ValidationError(\n _('User %(user)s already applied for this position '\n 'and can not be appointed through the overturn '\n 'field.'),\n params={'user': u},\n )\n return users\n\n def clean(self):\n super(AppointmentForm, self).clean()\n appoint = self.cleaned_data.get('appoint', [])\n overturn = self.cleaned_data.get('overturn', [])\n nr_appointment = len(appoint) + len(overturn)\n if nr_appointment > self.position.appointments:\n raise forms.ValidationError(\n _('You cannot appoint %(current)s applicants. The maximum '\n 'for this position is %(max)s.'),\n params={\n 'current': nr_appointment,\n 'max': self.position.appointments,\n },\n )\n return self.cleaned_data\n\n def save(self):\n for application in self.fields['appoint'].queryset:\n if application in self.cleaned_data['appoint']:\n application.status = 'appointed'\n else:\n application.status = 'turned_down'\n application.save()\n\n for user in self.cleaned_data['overturn']:\n user = get_user_model().objects.get(\n username=user\n )\n appl, created = Application.objects.get_or_create(\n position=self.position,\n applicant=user,\n defaults={'status': 'appointed'}\n )\n if not created:\n appl.status = 'appointed'\n appl.save()\n", "path": "website/involvement/forms.py"}]} | 2,019 | 196 |
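The core of the moore fix above is replacing `Application.objects.create()` with `get_or_create()` plus a status update, so a leftover draft no longer raises on overturn. A toy, Django-free model of that semantics (all names invented for illustration):

```python
class FakeApplications:
    """Toy stand-in for Application.objects, keyed by (position, applicant)."""

    def __init__(self):
        self._status = {}

    def get_or_create(self, position, applicant, defaults):
        key = (position, applicant)
        if key in self._status:
            return key, False                  # row already exists, e.g. a stale draft
        self._status[key] = defaults["status"]
        return key, True


apps = FakeApplications()
apps._status[("chair", "alice")] = "draft"     # the limbo draft from the report

# Overturn logic after the patch: reuse the existing row instead of erroring.
key, created = apps.get_or_create("chair", "alice", defaults={"status": "appointed"})
if not created:
    apps._status[key] = "appointed"

print(apps._status)  # {('chair', 'alice'): 'appointed'}
```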
gh_patches_debug_24151 | rasdani/github-patches | git_diff | gammapy__gammapy-4924 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove deprecated features
This is a reminder issue to remove the features deprecated since 1.1 before the next release.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gammapy/utils/table.py`
Content:
```
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2 """Table helper utilities."""
3 import numpy as np
4 from astropy.table import Table
5 from astropy.units import Quantity
6 from .deprecation import deprecated
7 from .units import standardise_unit
8
9 __all__ = [
10 "hstack_columns",
11 "table_from_row_data",
12 "table_row_to_dict",
13 "table_standardise_units_copy",
14 "table_standardise_units_inplace",
15 ]
16
17
18 def hstack_columns(table, table_other):
19 """Stack the column data horizontally.
20
21 Parameters
22 ----------
23 table : `~astropy.table.Table`
24 Input table.
25 table_other : `~astropy.table.Table`
26 Other input table.
27
28 Returns
29 -------
30 stacked : `~astropy.table.Table`
31 Stacked table.
32 """
33 stacked = Table()
34
35 for column in table.colnames:
36 data = np.hstack([table[column].data[0], table_other[column].data[0]])
37 stacked[column] = data[np.newaxis, :]
38 return stacked
39
40
41 def table_standardise_units_copy(table):
42 """Standardise units for all columns in a table in a copy.
43
44 Calls `~gammapy.utils.units.standardise_unit`.
45
46 Parameters
47 ----------
48 table : `~astropy.table.Table`
49 Input table (won't be modified).
50
51 Returns
52 -------
53 table : `~astropy.table.Table`
54 Copy of the input table with standardised column units.
55 """
56 # Note: we could add an `inplace` option (or variant of this function)
57 # See https://github.com/astropy/astropy/issues/6098
58 table = Table(table)
59 return table_standardise_units_inplace(table)
60
61
62 def table_standardise_units_inplace(table):
63 """Standardise units for all columns in a table in place."""
64 for column in table.columns.values():
65 if column.unit:
66 column.unit = standardise_unit(column.unit)
67
68 return table
69
70
71 def table_row_to_dict(row, make_quantity=True):
72 """Make one source data dictionary.
73
74 Parameters
75 ----------
76 row : `~astropy.table.Row`
77 Row.
78 make_quantity : bool, optional
79 Make quantity values for columns with units.
80 Default is True.
81
82 Returns
83 -------
84 data : dict
85 Row data.
86 """
87 data = {}
88 for name, col in row.columns.items():
89 val = row[name]
90
91 if make_quantity and col.unit:
92 val = Quantity(val, unit=col.unit)
93 data[name] = val
94 return data
95
96
97 @deprecated("v1.1", alternative="astropy.table.Table")
98 def table_from_row_data(rows, **kwargs):
99 """Helper function to create table objects from row data.
100
101 Works with quantities.
102
103 Parameters
104 ----------
105 rows : list
106 List of row data (each row a dictionary).
107 """
108 table = Table(**kwargs)
109
110 if len(rows) == 0:
111 return table
112
113 colnames = list(rows[0].keys())
114
115 for name in colnames:
116 coldata = [_[name] for _ in rows]
117 if isinstance(rows[0][name], Quantity):
118 coldata = Quantity(coldata, unit=rows[0][name].unit)
119 table[name] = coldata
120
121 return table
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gammapy/utils/table.py b/gammapy/utils/table.py
--- a/gammapy/utils/table.py
+++ b/gammapy/utils/table.py
@@ -3,12 +3,10 @@
import numpy as np
from astropy.table import Table
from astropy.units import Quantity
-from .deprecation import deprecated
from .units import standardise_unit
__all__ = [
"hstack_columns",
- "table_from_row_data",
"table_row_to_dict",
"table_standardise_units_copy",
"table_standardise_units_inplace",
@@ -92,30 +90,3 @@
val = Quantity(val, unit=col.unit)
data[name] = val
return data
-
-
-@deprecated("v1.1", alternative="astropy.table.Table")
-def table_from_row_data(rows, **kwargs):
- """Helper function to create table objects from row data.
-
- Works with quantities.
-
- Parameters
- ----------
- rows : list
- List of row data (each row a dictionary).
- """
- table = Table(**kwargs)
-
- if len(rows) == 0:
- return table
-
- colnames = list(rows[0].keys())
-
- for name in colnames:
- coldata = [_[name] for _ in rows]
- if isinstance(rows[0][name], Quantity):
- coldata = Quantity(coldata, unit=rows[0][name].unit)
- table[name] = coldata
-
- return table
| {"golden_diff": "diff --git a/gammapy/utils/table.py b/gammapy/utils/table.py\n--- a/gammapy/utils/table.py\n+++ b/gammapy/utils/table.py\n@@ -3,12 +3,10 @@\n import numpy as np\n from astropy.table import Table\n from astropy.units import Quantity\n-from .deprecation import deprecated\n from .units import standardise_unit\n \n __all__ = [\n \"hstack_columns\",\n- \"table_from_row_data\",\n \"table_row_to_dict\",\n \"table_standardise_units_copy\",\n \"table_standardise_units_inplace\",\n@@ -92,30 +90,3 @@\n val = Quantity(val, unit=col.unit)\n data[name] = val\n return data\n-\n-\n-@deprecated(\"v1.1\", alternative=\"astropy.table.Table\")\n-def table_from_row_data(rows, **kwargs):\n- \"\"\"Helper function to create table objects from row data.\n-\n- Works with quantities.\n-\n- Parameters\n- ----------\n- rows : list\n- List of row data (each row a dictionary).\n- \"\"\"\n- table = Table(**kwargs)\n-\n- if len(rows) == 0:\n- return table\n-\n- colnames = list(rows[0].keys())\n-\n- for name in colnames:\n- coldata = [_[name] for _ in rows]\n- if isinstance(rows[0][name], Quantity):\n- coldata = Quantity(coldata, unit=rows[0][name].unit)\n- table[name] = coldata\n-\n- return table\n", "issue": "Remove deprecated features\nThis is a reminder issue to remove the features deprecated since 1.1 before the next release\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"Table helper utilities.\"\"\"\nimport numpy as np\nfrom astropy.table import Table\nfrom astropy.units import Quantity\nfrom .deprecation import deprecated\nfrom .units import standardise_unit\n\n__all__ = [\n \"hstack_columns\",\n \"table_from_row_data\",\n \"table_row_to_dict\",\n \"table_standardise_units_copy\",\n \"table_standardise_units_inplace\",\n]\n\n\ndef hstack_columns(table, table_other):\n \"\"\"Stack the column data horizontally.\n\n Parameters\n ----------\n table : `~astropy.table.Table`\n Input table.\n table_other : `~astropy.table.Table`\n Other input table.\n\n Returns\n -------\n stacked : `~astropy.table.Table`\n Stacked table.\n \"\"\"\n stacked = Table()\n\n for column in table.colnames:\n data = np.hstack([table[column].data[0], table_other[column].data[0]])\n stacked[column] = data[np.newaxis, :]\n return stacked\n\n\ndef table_standardise_units_copy(table):\n \"\"\"Standardise units for all columns in a table in a copy.\n\n Calls `~gammapy.utils.units.standardise_unit`.\n\n Parameters\n ----------\n table : `~astropy.table.Table`\n Input table (won't be modified).\n\n Returns\n -------\n table : `~astropy.table.Table`\n Copy of the input table with standardised column units.\n \"\"\"\n # Note: we could add an `inplace` option (or variant of this function)\n # See https://github.com/astropy/astropy/issues/6098\n table = Table(table)\n return table_standardise_units_inplace(table)\n\n\ndef table_standardise_units_inplace(table):\n \"\"\"Standardise units for all columns in a table in place.\"\"\"\n for column in table.columns.values():\n if column.unit:\n column.unit = standardise_unit(column.unit)\n\n return table\n\n\ndef table_row_to_dict(row, make_quantity=True):\n \"\"\"Make one source data dictionary.\n\n Parameters\n ----------\n row : `~astropy.table.Row`\n Row.\n make_quantity : bool, optional\n Make quantity values for columns with units.\n Default is True.\n\n Returns\n -------\n data : dict\n Row data.\n \"\"\"\n data = {}\n for name, col in row.columns.items():\n val = row[name]\n\n if make_quantity and col.unit:\n val = Quantity(val, 
unit=col.unit)\n data[name] = val\n return data\n\n\n@deprecated(\"v1.1\", alternative=\"astropy.table.Table\")\ndef table_from_row_data(rows, **kwargs):\n \"\"\"Helper function to create table objects from row data.\n\n Works with quantities.\n\n Parameters\n ----------\n rows : list\n List of row data (each row a dictionary).\n \"\"\"\n table = Table(**kwargs)\n\n if len(rows) == 0:\n return table\n\n colnames = list(rows[0].keys())\n\n for name in colnames:\n coldata = [_[name] for _ in rows]\n if isinstance(rows[0][name], Quantity):\n coldata = Quantity(coldata, unit=rows[0][name].unit)\n table[name] = coldata\n\n return table\n", "path": "gammapy/utils/table.py"}], "after_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"Table helper utilities.\"\"\"\nimport numpy as np\nfrom astropy.table import Table\nfrom astropy.units import Quantity\nfrom .units import standardise_unit\n\n__all__ = [\n \"hstack_columns\",\n \"table_row_to_dict\",\n \"table_standardise_units_copy\",\n \"table_standardise_units_inplace\",\n]\n\n\ndef hstack_columns(table, table_other):\n \"\"\"Stack the column data horizontally.\n\n Parameters\n ----------\n table : `~astropy.table.Table`\n Input table.\n table_other : `~astropy.table.Table`\n Other input table.\n\n Returns\n -------\n stacked : `~astropy.table.Table`\n Stacked table.\n \"\"\"\n stacked = Table()\n\n for column in table.colnames:\n data = np.hstack([table[column].data[0], table_other[column].data[0]])\n stacked[column] = data[np.newaxis, :]\n return stacked\n\n\ndef table_standardise_units_copy(table):\n \"\"\"Standardise units for all columns in a table in a copy.\n\n Calls `~gammapy.utils.units.standardise_unit`.\n\n Parameters\n ----------\n table : `~astropy.table.Table`\n Input table (won't be modified).\n\n Returns\n -------\n table : `~astropy.table.Table`\n Copy of the input table with standardised column units.\n \"\"\"\n # Note: we could add an `inplace` option (or variant of this function)\n # See https://github.com/astropy/astropy/issues/6098\n table = Table(table)\n return table_standardise_units_inplace(table)\n\n\ndef table_standardise_units_inplace(table):\n \"\"\"Standardise units for all columns in a table in place.\"\"\"\n for column in table.columns.values():\n if column.unit:\n column.unit = standardise_unit(column.unit)\n\n return table\n\n\ndef table_row_to_dict(row, make_quantity=True):\n \"\"\"Make one source data dictionary.\n\n Parameters\n ----------\n row : `~astropy.table.Row`\n Row.\n make_quantity : bool, optional\n Make quantity values for columns with units.\n Default is True.\n\n Returns\n -------\n data : dict\n Row data.\n \"\"\"\n data = {}\n for name, col in row.columns.items():\n val = row[name]\n\n if make_quantity and col.unit:\n val = Quantity(val, unit=col.unit)\n data[name] = val\n return data\n", "path": "gammapy/utils/table.py"}]} | 1,267 | 347 |
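With `table_from_row_data` deleted in the gammapy row above, callers are pointed at `astropy.table.Table` directly. Assuming astropy is installed, the replacement is roughly:

```python
from astropy.table import Table

rows = [
    {"energy": 1.0, "flux": 1e-12},
    {"energy": 2.0, "flux": 4e-13},
]

table = Table(rows=rows)         # replaces table_from_row_data(rows)
table["energy"].unit = "TeV"     # units can still be attached per column
print(table)
```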
gh_patches_debug_476 | rasdani/github-patches | git_diff | rlworkgroup__garage-2133 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unpin cloudpickle instead of pinning it to 1.3
Currently, #1879 pins cloudpickle to 1.3 because tensorflow-probability 0.11 does so. When tfp unpins cloudpickle, we should unpin it too.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 """setuptools based setup module."""
2 import os
3
4 from setuptools import find_packages, setup
5
6 GARAGE_GH_TOKEN = os.environ.get('GARAGE_GH_TOKEN') or 'git'
7 GYM_VERSION = '0.17.2'
8
9 # Required dependencies
10 REQUIRED = [
11 # Please keep alphabetized
12 'akro',
13 'click>=2.0',
14 'cloudpickle==1.3',
15 'cma==2.7.0',
16 'dowel==0.0.3',
17 'numpy>=1.14.5',
18 'psutil',
19 'python-dateutil',
20 'ray',
21 'scikit-image',
22 'scipy',
23 'setproctitle>=1.0',
24 'tensorflow>=1.14',
25 'tensorflow-probability>=0.11.0',
26 'torch>=1.0.0,!=1.5.0',
27 'torchvision>=0.2.1',
28 ]
29
30 # Dependencies for optional features
31 EXTRAS = {}
32
33 EXTRAS['gym'] = [
34 f'gym[atari,box2d,classic_control]=={GYM_VERSION}',
35 ]
36
37 EXTRAS['mujoco'] = [
38 'mujoco-py>=2.0,<=2.0.2.8',
39 f'gym[all]=={GYM_VERSION}',
40 ]
41
42 EXTRAS['dm_control'] = [
43 # dm_control throws an error during install about not being able to
44 # find a build dependency (absl-py). Later pip executes the `install`
45 # command again and the install succeeds because absl-py has been
46 # installed. This is stupid, but harmless.
47 'dm_control',
48 ]
49
50 EXTRAS['bullet'] = ['mpi4py', 'pybullet>=2.8.7']
51
52 EXTRAS['all'] = list(set(sum(EXTRAS.values(), [])))
53
54 # Development dependencies (*not* included in 'all')
55 EXTRAS['dev'] = [
56 # Please keep alphabetized
57 'flake8',
58 'flake8-docstrings>=1.5.0',
59 'flake8-import-order',
60 f'metaworld @ https://{GARAGE_GH_TOKEN}@api.github.com/repos/rlworkgroup/metaworld/tarball/0875192baaa91c43523708f55866d98eaf3facaf', # noqa: E501
61 'isort>=4.3.21,<5.0.0',
62 'pep8-naming==0.7.0',
63 'pre-commit',
64 'pycodestyle>=2.5.0',
65 'pydocstyle>=4.0.0',
66 'pylint>=2.5.3',
67 'pytest>=4.5.0', # Required for strict-markers
68 'pytest-cov',
69 'pytest-rerunfailures',
70 'pytest-timeout',
71 'pytest-xdist',
72 'recommonmark',
73 'sphinx',
74 'sphinx-autoapi>=1.4.0',
75 'sphinx_rtd_theme',
76 'sphinxcontrib-bibtex',
77 'yapf==0.30.0',
78 ] # yapf: disable
79
80 with open('README.md') as f:
81 README = f.read()
82
83 # Get the package version dynamically
84 with open('VERSION') as v:
85 VERSION = v.read().strip()
86
87 setup(
88 name='garage',
89 version=VERSION,
90 author='Reinforcement Learning Working Group',
91 description='A toolkit for reproducible reinforcement learning research',
92 url='https://github.com/rlworkgroup/garage',
93 packages=find_packages(where='src'),
94 package_dir={'': 'src'},
95 scripts=['scripts/garage'],
96 python_requires='>=3.6',
97 install_requires=REQUIRED,
98 extras_require=EXTRAS,
99 license='MIT',
100 long_description=README,
101 long_description_content_type='text/markdown',
102 classifiers=[
103 'Development Status :: 4 - Beta',
104 'Intended Audience :: Developers',
105 'Intended Audience :: Education',
106 'Intended Audience :: Science/Research',
107 'License :: OSI Approved :: MIT License',
108 'Programming Language :: Python :: 3.6',
109 'Programming Language :: Python :: 3.7',
110 'Programming Language :: Python :: 3 :: Only',
111 'Topic :: Scientific/Engineering :: Artificial Intelligence',
112 'Topic :: Scientific/Engineering :: Mathematics',
113 'Topic :: Software Development :: Libraries',
114 ],
115 )
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -11,7 +11,7 @@
# Please keep alphabetized
'akro',
'click>=2.0',
- 'cloudpickle==1.3',
+ 'cloudpickle',
'cma==2.7.0',
'dowel==0.0.3',
'numpy>=1.14.5',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -11,7 +11,7 @@\n # Please keep alphabetized\n 'akro',\n 'click>=2.0',\n- 'cloudpickle==1.3',\n+ 'cloudpickle',\n 'cma==2.7.0',\n 'dowel==0.0.3',\n 'numpy>=1.14.5',\n", "issue": "Unpin cloudpickle instead of pinning it to 1.3\nCurrently, #1879 pins cloudpickle to 1.3 because tensorflow-probability 0.11 does so. When tfp unpins cloudpickle, we should unpin it too.\n", "before_files": [{"content": "\"\"\"setuptools based setup module.\"\"\"\nimport os\n\nfrom setuptools import find_packages, setup\n\nGARAGE_GH_TOKEN = os.environ.get('GARAGE_GH_TOKEN') or 'git'\nGYM_VERSION = '0.17.2'\n\n# Required dependencies\nREQUIRED = [\n # Please keep alphabetized\n 'akro',\n 'click>=2.0',\n 'cloudpickle==1.3',\n 'cma==2.7.0',\n 'dowel==0.0.3',\n 'numpy>=1.14.5',\n 'psutil',\n 'python-dateutil',\n 'ray',\n 'scikit-image',\n 'scipy',\n 'setproctitle>=1.0',\n 'tensorflow>=1.14',\n 'tensorflow-probability>=0.11.0',\n 'torch>=1.0.0,!=1.5.0',\n 'torchvision>=0.2.1',\n]\n\n# Dependencies for optional features\nEXTRAS = {}\n\nEXTRAS['gym'] = [\n f'gym[atari,box2d,classic_control]=={GYM_VERSION}',\n]\n\nEXTRAS['mujoco'] = [\n 'mujoco-py>=2.0,<=2.0.2.8',\n f'gym[all]=={GYM_VERSION}',\n]\n\nEXTRAS['dm_control'] = [\n # dm_control throws an error during install about not being able to\n # find a build dependency (absl-py). Later pip executes the `install`\n # command again and the install succeeds because absl-py has been\n # installed. This is stupid, but harmless.\n 'dm_control',\n]\n\nEXTRAS['bullet'] = ['mpi4py', 'pybullet>=2.8.7']\n\nEXTRAS['all'] = list(set(sum(EXTRAS.values(), [])))\n\n# Development dependencies (*not* included in 'all')\nEXTRAS['dev'] = [\n # Please keep alphabetized\n 'flake8',\n 'flake8-docstrings>=1.5.0',\n 'flake8-import-order',\n f'metaworld @ https://{GARAGE_GH_TOKEN}@api.github.com/repos/rlworkgroup/metaworld/tarball/0875192baaa91c43523708f55866d98eaf3facaf', # noqa: E501\n 'isort>=4.3.21,<5.0.0',\n 'pep8-naming==0.7.0',\n 'pre-commit',\n 'pycodestyle>=2.5.0',\n 'pydocstyle>=4.0.0',\n 'pylint>=2.5.3',\n 'pytest>=4.5.0', # Required for strict-markers\n 'pytest-cov',\n 'pytest-rerunfailures',\n 'pytest-timeout',\n 'pytest-xdist',\n 'recommonmark',\n 'sphinx',\n 'sphinx-autoapi>=1.4.0',\n 'sphinx_rtd_theme',\n 'sphinxcontrib-bibtex',\n 'yapf==0.30.0',\n] # yapf: disable\n\nwith open('README.md') as f:\n README = f.read()\n\n# Get the package version dynamically\nwith open('VERSION') as v:\n VERSION = v.read().strip()\n\nsetup(\n name='garage',\n version=VERSION,\n author='Reinforcement Learning Working Group',\n description='A toolkit for reproducible reinforcement learning research',\n url='https://github.com/rlworkgroup/garage',\n packages=find_packages(where='src'),\n package_dir={'': 'src'},\n scripts=['scripts/garage'],\n python_requires='>=3.6',\n install_requires=REQUIRED,\n extras_require=EXTRAS,\n license='MIT',\n long_description=README,\n long_description_content_type='text/markdown',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries',\n 
],\n)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"setuptools based setup module.\"\"\"\nimport os\n\nfrom setuptools import find_packages, setup\n\nGARAGE_GH_TOKEN = os.environ.get('GARAGE_GH_TOKEN') or 'git'\nGYM_VERSION = '0.17.2'\n\n# Required dependencies\nREQUIRED = [\n # Please keep alphabetized\n 'akro',\n 'click>=2.0',\n 'cloudpickle',\n 'cma==2.7.0',\n 'dowel==0.0.3',\n 'numpy>=1.14.5',\n 'psutil',\n 'python-dateutil',\n 'ray',\n 'scikit-image',\n 'scipy',\n 'setproctitle>=1.0',\n 'tensorflow>=1.14',\n 'tensorflow-probability>=0.11.0',\n 'torch>=1.0.0,!=1.5.0',\n 'torchvision>=0.2.1',\n]\n\n# Dependencies for optional features\nEXTRAS = {}\n\nEXTRAS['gym'] = [\n f'gym[atari,box2d,classic_control]=={GYM_VERSION}',\n]\n\nEXTRAS['mujoco'] = [\n 'mujoco-py>=2.0,<=2.0.2.8',\n f'gym[all]=={GYM_VERSION}',\n]\n\nEXTRAS['dm_control'] = [\n # dm_control throws an error during install about not being able to\n # find a build dependency (absl-py). Later pip executes the `install`\n # command again and the install succeeds because absl-py has been\n # installed. This is stupid, but harmless.\n 'dm_control',\n]\n\nEXTRAS['bullet'] = ['mpi4py', 'pybullet>=2.8.7']\n\nEXTRAS['all'] = list(set(sum(EXTRAS.values(), [])))\n\n# Development dependencies (*not* included in 'all')\nEXTRAS['dev'] = [\n # Please keep alphabetized\n 'flake8',\n 'flake8-docstrings>=1.5.0',\n 'flake8-import-order',\n f'metaworld @ https://{GARAGE_GH_TOKEN}@api.github.com/repos/rlworkgroup/metaworld/tarball/0875192baaa91c43523708f55866d98eaf3facaf', # noqa: E501\n 'isort>=4.3.21,<5.0.0',\n 'pep8-naming==0.7.0',\n 'pre-commit',\n 'pycodestyle>=2.5.0',\n 'pydocstyle>=4.0.0',\n 'pylint>=2.5.3',\n 'pytest>=4.5.0', # Required for strict-markers\n 'pytest-cov',\n 'pytest-rerunfailures',\n 'pytest-timeout',\n 'pytest-xdist',\n 'recommonmark',\n 'sphinx',\n 'sphinx-autoapi>=1.4.0',\n 'sphinx_rtd_theme',\n 'sphinxcontrib-bibtex',\n 'yapf==0.30.0',\n] # yapf: disable\n\nwith open('README.md') as f:\n README = f.read()\n\n# Get the package version dynamically\nwith open('VERSION') as v:\n VERSION = v.read().strip()\n\nsetup(\n name='garage',\n version=VERSION,\n author='Reinforcement Learning Working Group',\n description='A toolkit for reproducible reinforcement learning research',\n url='https://github.com/rlworkgroup/garage',\n packages=find_packages(where='src'),\n package_dir={'': 'src'},\n scripts=['scripts/garage'],\n python_requires='>=3.6',\n install_requires=REQUIRED,\n extras_require=EXTRAS,\n license='MIT',\n long_description=README,\n long_description_content_type='text/markdown',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries',\n ],\n)\n", "path": "setup.py"}]} | 1,555 | 101 |
gh_patches_debug_26723 | rasdani/github-patches | git_diff | OpenCTI-Platform__connectors-51 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CVE] Download link to variable
## Description
Make the CVE download link a variable, because otherwise the tool can hardly be used offline. When offline, we can host the CVEs at a URL other than "https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-".
## Reproducible Steps
https://github.com/OpenCTI-Platform/connectors/blame/9d47ffdad1c2a7fbdd709565d5c3f670693b148f/cve/src/cve.py#L103
## Expected Output
URL as a variable in the .yml
## Actual Output
Hard-coded link: "https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-"
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cve/src/cve.py`
Content:
```
1 # coding: utf-8
2
3 import os
4 import yaml
5 import time
6 import urllib.request
7 import gzip
8 import shutil
9
10 from datetime import datetime
11 from pycti import OpenCTIConnectorHelper, get_config_variable
12 from cvetostix2 import convert
13
14
15 class Cve:
16 def __init__(self):
17 # Instantiate the connector helper from config
18 config_file_path = os.path.dirname(os.path.abspath(__file__)) + "/config.yml"
19 config = (
20 yaml.load(open(config_file_path), Loader=yaml.FullLoader)
21 if os.path.isfile(config_file_path)
22 else {}
23 )
24 self.helper = OpenCTIConnectorHelper(config)
25 # Extra config
26 self.cve_import_history = get_config_variable(
27 "CVE_IMPORT_HISTORY", ["cve", "import_history"], config, False
28 )
29 self.cve_nvd_data_feed = get_config_variable(
30 "CVE_NVD_DATA_FEED", ["cve", "nvd_data_feed"], config
31 )
32 self.cve_interval = get_config_variable(
33 "CVE_INTERVAL", ["cve", "interval"], config, True
34 )
35 self.update_existing_data = get_config_variable(
36 "CONNECTOR_UPDATE_EXISTING_DATA",
37 ["connector", "update_existing_data"],
38 config,
39 )
40
41 def get_interval(self):
42 return int(self.cve_interval) * 60 * 60 * 24
43
44 def convert_and_send(self, url):
45 try:
46 # Downloading json.gz file
47 self.helper.log_info("Requesting the file " + url)
48 urllib.request.urlretrieve(
49 self.cve_nvd_data_feed,
50 os.path.dirname(os.path.abspath(__file__)) + "/data.json.gz",
51 )
52 # Unzipping the file
53 self.helper.log_info("Unzipping the file")
54 with gzip.open("data.json.gz", "rb") as f_in:
55 with open("data.json", "wb") as f_out:
56 shutil.copyfileobj(f_in, f_out)
57 # Converting the file to stix2
58 self.helper.log_info("Converting the file")
59 convert("data.json", "data-stix2.json")
60 with open("data-stix2.json") as stix_json:
61 contents = stix_json.read()
62 self.helper.send_stix2_bundle(
63 contents, self.helper.connect_scope, self.update_existing_data
64 )
65 # Remove files
66 os.remove("data.json")
67 os.remove("data.json.gz")
68 os.remove("data-stix2.json")
69 except Exception as e:
70 self.helper.log_error(str(e))
71 time.sleep(60)
72
73 def run(self):
74 self.helper.log_info("Fetching CVE knowledge...")
75 while True:
76 try:
77 # Get the current timestamp and check
78 timestamp = int(time.time())
79 current_state = self.helper.get_state()
80 if current_state is not None and "last_run" in current_state:
81 last_run = current_state["last_run"]
82 self.helper.log_info(
83 "Connector last run: "
84 + datetime.utcfromtimestamp(last_run).strftime(
85 "%Y-%m-%d %H:%M:%S"
86 )
87 )
88 else:
89 last_run = None
90 self.helper.log_info("Connector has never run")
91 # If the last_run is more than interval-1 day
92 if last_run is None or (
93 (timestamp - last_run)
94 > ((int(self.cve_interval) - 1) * 60 * 60 * 24)
95 ):
96 self.convert_and_send(self.cve_nvd_data_feed)
97 # If import history and never run
98 if last_run is None and self.cve_import_history:
99 now = datetime.now()
100 years = list(range(2002, now.year))
101 for year in years:
102 self.convert_and_send(
103 "https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-"
104 + str(year)
105 + ".json.gz"
106 )
107
108 # Store the current timestamp as a last run
109 self.helper.log_info(
110 "Connector successfully run, storing last_run as "
111 + str(timestamp)
112 )
113 self.helper.set_state({"last_run": timestamp})
114 self.helper.log_info(
115 "Last_run stored, next run in: "
116 + str(round(self.get_interval() / 60 / 60 / 24, 2))
117 + " days"
118 )
119 time.sleep(60)
120 else:
121 new_interval = self.get_interval() - (timestamp - last_run)
122 self.helper.log_info(
123 "Connector will not run, next run in: "
124 + str(round(new_interval / 60 / 60 / 24, 2))
125 + " days"
126 )
127 time.sleep(60)
128 except (KeyboardInterrupt, SystemExit):
129 self.helper.log_info("Connector stop")
130 exit(0)
131 except Exception as e:
132 self.helper.log_error(str(e))
133 time.sleep(60)
134
135
136 if __name__ == "__main__":
137 try:
138 cveConnector = Cve()
139 cveConnector.run()
140 except Exception as e:
141 print(e)
142 time.sleep(10)
143 exit(0)
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cve/src/cve.py b/cve/src/cve.py
--- a/cve/src/cve.py
+++ b/cve/src/cve.py
@@ -29,6 +29,9 @@
self.cve_nvd_data_feed = get_config_variable(
"CVE_NVD_DATA_FEED", ["cve", "nvd_data_feed"], config
)
+ self.cve_history_data_feed = get_config_variable(
+ "CVE_HISTORY_DATA_FEED", ["cve", "history_data_feed"], config
+ )
self.cve_interval = get_config_variable(
"CVE_INTERVAL", ["cve", "interval"], config, True
)
@@ -97,12 +100,10 @@
# If import history and never run
if last_run is None and self.cve_import_history:
now = datetime.now()
- years = list(range(2002, now.year))
+ years = list(range(2002, now.year+1))
for year in years:
self.convert_and_send(
- "https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-"
- + str(year)
- + ".json.gz"
+ f"{self.cve_history_data_feed}nvdcve-1.1-{year}.json.gz"
)
# Store the current timestamp as a last run
| {"golden_diff": "diff --git a/cve/src/cve.py b/cve/src/cve.py\n--- a/cve/src/cve.py\n+++ b/cve/src/cve.py\n@@ -29,6 +29,9 @@\n self.cve_nvd_data_feed = get_config_variable(\n \"CVE_NVD_DATA_FEED\", [\"cve\", \"nvd_data_feed\"], config\n )\n+ self.cve_history_data_feed = get_config_variable(\n+ \"CVE_HISTORY_DATA_FEED\", [\"cve\", \"history_data_feed\"], config\n+ )\n self.cve_interval = get_config_variable(\n \"CVE_INTERVAL\", [\"cve\", \"interval\"], config, True\n )\n@@ -97,12 +100,10 @@\n # If import history and never run\n if last_run is None and self.cve_import_history:\n now = datetime.now()\n- years = list(range(2002, now.year))\n+ years = list(range(2002, now.year+1))\n for year in years:\n self.convert_and_send(\n- \"https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-\"\n- + str(year)\n- + \".json.gz\"\n+ f\"{self.cve_history_data_feed}nvdcve-1.1-{year}.json.gz\"\n )\n \n # Store the current timestamp as a last run\n", "issue": "[CVE] Download link to variable\n## Description\r\n\r\nSet the download CVE link to variable, because otherwise the tool can hardly be used offline. Offline we can host the CVEs on a link that is not : \"https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-\"\r\n\r\n## Reproducible Steps\r\n\r\nhttps://github.com/OpenCTI-Platform/connectors/blame/9d47ffdad1c2a7fbdd709565d5c3f670693b148f/cve/src/cve.py#L103\r\n\r\n## Expected Output\r\n\r\nUrl as a variable in the .yml\r\n\r\n## Actual Output\r\n\r\nPermanent link : \"https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-\"\r\n\n", "before_files": [{"content": "# coding: utf-8\n\nimport os\nimport yaml\nimport time\nimport urllib.request\nimport gzip\nimport shutil\n\nfrom datetime import datetime\nfrom pycti import OpenCTIConnectorHelper, get_config_variable\nfrom cvetostix2 import convert\n\n\nclass Cve:\n def __init__(self):\n # Instantiate the connector helper from config\n config_file_path = os.path.dirname(os.path.abspath(__file__)) + \"/config.yml\"\n config = (\n yaml.load(open(config_file_path), Loader=yaml.FullLoader)\n if os.path.isfile(config_file_path)\n else {}\n )\n self.helper = OpenCTIConnectorHelper(config)\n # Extra config\n self.cve_import_history = get_config_variable(\n \"CVE_IMPORT_HISTORY\", [\"cve\", \"import_history\"], config, False\n )\n self.cve_nvd_data_feed = get_config_variable(\n \"CVE_NVD_DATA_FEED\", [\"cve\", \"nvd_data_feed\"], config\n )\n self.cve_interval = get_config_variable(\n \"CVE_INTERVAL\", [\"cve\", \"interval\"], config, True\n )\n self.update_existing_data = get_config_variable(\n \"CONNECTOR_UPDATE_EXISTING_DATA\",\n [\"connector\", \"update_existing_data\"],\n config,\n )\n\n def get_interval(self):\n return int(self.cve_interval) * 60 * 60 * 24\n\n def convert_and_send(self, url):\n try:\n # Downloading json.gz file\n self.helper.log_info(\"Requesting the file \" + url)\n urllib.request.urlretrieve(\n self.cve_nvd_data_feed,\n os.path.dirname(os.path.abspath(__file__)) + \"/data.json.gz\",\n )\n # Unzipping the file\n self.helper.log_info(\"Unzipping the file\")\n with gzip.open(\"data.json.gz\", \"rb\") as f_in:\n with open(\"data.json\", \"wb\") as f_out:\n shutil.copyfileobj(f_in, f_out)\n # Converting the file to stix2\n self.helper.log_info(\"Converting the file\")\n convert(\"data.json\", \"data-stix2.json\")\n with open(\"data-stix2.json\") as stix_json:\n contents = stix_json.read()\n self.helper.send_stix2_bundle(\n contents, self.helper.connect_scope, self.update_existing_data\n )\n # Remove files\n os.remove(\"data.json\")\n 
os.remove(\"data.json.gz\")\n os.remove(\"data-stix2.json\")\n except Exception as e:\n self.helper.log_error(str(e))\n time.sleep(60)\n\n def run(self):\n self.helper.log_info(\"Fetching CVE knowledge...\")\n while True:\n try:\n # Get the current timestamp and check\n timestamp = int(time.time())\n current_state = self.helper.get_state()\n if current_state is not None and \"last_run\" in current_state:\n last_run = current_state[\"last_run\"]\n self.helper.log_info(\n \"Connector last run: \"\n + datetime.utcfromtimestamp(last_run).strftime(\n \"%Y-%m-%d %H:%M:%S\"\n )\n )\n else:\n last_run = None\n self.helper.log_info(\"Connector has never run\")\n # If the last_run is more than interval-1 day\n if last_run is None or (\n (timestamp - last_run)\n > ((int(self.cve_interval) - 1) * 60 * 60 * 24)\n ):\n self.convert_and_send(self.cve_nvd_data_feed)\n # If import history and never run\n if last_run is None and self.cve_import_history:\n now = datetime.now()\n years = list(range(2002, now.year))\n for year in years:\n self.convert_and_send(\n \"https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-\"\n + str(year)\n + \".json.gz\"\n )\n\n # Store the current timestamp as a last run\n self.helper.log_info(\n \"Connector successfully run, storing last_run as \"\n + str(timestamp)\n )\n self.helper.set_state({\"last_run\": timestamp})\n self.helper.log_info(\n \"Last_run stored, next run in: \"\n + str(round(self.get_interval() / 60 / 60 / 24, 2))\n + \" days\"\n )\n time.sleep(60)\n else:\n new_interval = self.get_interval() - (timestamp - last_run)\n self.helper.log_info(\n \"Connector will not run, next run in: \"\n + str(round(new_interval / 60 / 60 / 24, 2))\n + \" days\"\n )\n time.sleep(60)\n except (KeyboardInterrupt, SystemExit):\n self.helper.log_info(\"Connector stop\")\n exit(0)\n except Exception as e:\n self.helper.log_error(str(e))\n time.sleep(60)\n\n\nif __name__ == \"__main__\":\n try:\n cveConnector = Cve()\n cveConnector.run()\n except Exception as e:\n print(e)\n time.sleep(10)\n exit(0)\n", "path": "cve/src/cve.py"}], "after_files": [{"content": "# coding: utf-8\n\nimport os\nimport yaml\nimport time\nimport urllib.request\nimport gzip\nimport shutil\n\nfrom datetime import datetime\nfrom pycti import OpenCTIConnectorHelper, get_config_variable\nfrom cvetostix2 import convert\n\n\nclass Cve:\n def __init__(self):\n # Instantiate the connector helper from config\n config_file_path = os.path.dirname(os.path.abspath(__file__)) + \"/config.yml\"\n config = (\n yaml.load(open(config_file_path), Loader=yaml.FullLoader)\n if os.path.isfile(config_file_path)\n else {}\n )\n self.helper = OpenCTIConnectorHelper(config)\n # Extra config\n self.cve_import_history = get_config_variable(\n \"CVE_IMPORT_HISTORY\", [\"cve\", \"import_history\"], config, False\n )\n self.cve_nvd_data_feed = get_config_variable(\n \"CVE_NVD_DATA_FEED\", [\"cve\", \"nvd_data_feed\"], config\n )\n self.cve_history_data_feed = get_config_variable(\n \"CVE_HISTORY_DATA_FEED\", [\"cve\", \"history_data_feed\"], config\n )\n self.cve_interval = get_config_variable(\n \"CVE_INTERVAL\", [\"cve\", \"interval\"], config, True\n )\n self.update_existing_data = get_config_variable(\n \"CONNECTOR_UPDATE_EXISTING_DATA\",\n [\"connector\", \"update_existing_data\"],\n config,\n )\n\n def get_interval(self):\n return int(self.cve_interval) * 60 * 60 * 24\n\n def convert_and_send(self, url):\n try:\n # Downloading json.gz file\n self.helper.log_info(\"Requesting the file \" + url)\n urllib.request.urlretrieve(\n 
self.cve_nvd_data_feed,\n os.path.dirname(os.path.abspath(__file__)) + \"/data.json.gz\",\n )\n # Unzipping the file\n self.helper.log_info(\"Unzipping the file\")\n with gzip.open(\"data.json.gz\", \"rb\") as f_in:\n with open(\"data.json\", \"wb\") as f_out:\n shutil.copyfileobj(f_in, f_out)\n # Converting the file to stix2\n self.helper.log_info(\"Converting the file\")\n convert(\"data.json\", \"data-stix2.json\")\n with open(\"data-stix2.json\") as stix_json:\n contents = stix_json.read()\n self.helper.send_stix2_bundle(\n contents, self.helper.connect_scope, self.update_existing_data\n )\n # Remove files\n os.remove(\"data.json\")\n os.remove(\"data.json.gz\")\n os.remove(\"data-stix2.json\")\n except Exception as e:\n self.helper.log_error(str(e))\n time.sleep(60)\n\n def run(self):\n self.helper.log_info(\"Fetching CVE knowledge...\")\n while True:\n try:\n # Get the current timestamp and check\n timestamp = int(time.time())\n current_state = self.helper.get_state()\n if current_state is not None and \"last_run\" in current_state:\n last_run = current_state[\"last_run\"]\n self.helper.log_info(\n \"Connector last run: \"\n + datetime.utcfromtimestamp(last_run).strftime(\n \"%Y-%m-%d %H:%M:%S\"\n )\n )\n else:\n last_run = None\n self.helper.log_info(\"Connector has never run\")\n # If the last_run is more than interval-1 day\n if last_run is None or (\n (timestamp - last_run)\n > ((int(self.cve_interval) - 1) * 60 * 60 * 24)\n ):\n self.convert_and_send(self.cve_nvd_data_feed)\n # If import history and never run\n if last_run is None and self.cve_import_history:\n now = datetime.now()\n years = list(range(2002, now.year+1))\n for year in years:\n self.convert_and_send(\n f\"{self.cve_history_data_feed}nvdcve-1.1-{year}.json.gz\"\n )\n\n # Store the current timestamp as a last run\n self.helper.log_info(\n \"Connector successfully run, storing last_run as \"\n + str(timestamp)\n )\n self.helper.set_state({\"last_run\": timestamp})\n self.helper.log_info(\n \"Last_run stored, next run in: \"\n + str(round(self.get_interval() / 60 / 60 / 24, 2))\n + \" days\"\n )\n time.sleep(60)\n else:\n new_interval = self.get_interval() - (timestamp - last_run)\n self.helper.log_info(\n \"Connector will not run, next run in: \"\n + str(round(new_interval / 60 / 60 / 24, 2))\n + \" days\"\n )\n time.sleep(60)\n except (KeyboardInterrupt, SystemExit):\n self.helper.log_info(\"Connector stop\")\n exit(0)\n except Exception as e:\n self.helper.log_error(str(e))\n time.sleep(60)\n\n\nif __name__ == \"__main__\":\n try:\n cveConnector = Cve()\n cveConnector.run()\n except Exception as e:\n print(e)\n time.sleep(10)\n exit(0)\n", "path": "cve/src/cve.py"}]} | 1,902 | 313 |
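The configuration pattern introduced by the record above — an environment variable with a YAML fallback that is then used to build the yearly feed URL — can be sketched without any OpenCTI dependencies. The helper below stands in for pycti's `get_config_variable` rather than reproducing it, and the mirror URL is a made-up example.

```python
import os

def get_setting(env_name, yaml_path, config, default=None):
    # Illustrative lookup: environment variable first, then the
    # nested YAML config, then a default.
    if env_name in os.environ:
        return os.environ[env_name]
    node = config
    for key in yaml_path:
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

# Hypothetical offline mirror configured in config.yml.
config = {"cve": {"history_data_feed": "https://mirror.local/feeds/json/cve/1.1/"}}
base = get_setting("CVE_HISTORY_DATA_FEED", ["cve", "history_data_feed"], config)
year = 2013
print(f"{base}nvdcve-1.1-{year}.json.gz")
```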
gh_patches_debug_18827 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-1879 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sanic resource names gets grouped incorrectly
Hi!
The endpoints get grouped incorrectly in the UI when using the Sanic integration because the path parameter values are used in the resource name instead of the parameter names, thus creating one unique endpoint for every unique method + request path.
Example:

Is this by design? Other integrations (node express, for example) group them by the parameter names, which imo seems to be the proper way to do it.
I have created a PR to solve this: #1879
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/sanic/patch.py`
Content:
```
1 import asyncio
2 import ddtrace
3 import sanic
4 from ddtrace import config
5 from ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY
6 from ddtrace.ext import SpanTypes
7 from ddtrace.propagation.http import HTTPPropagator
8 from ddtrace.utils.wrappers import unwrap as _u
9 from ddtrace.vendor import wrapt
10 from ddtrace.vendor.wrapt import wrap_function_wrapper as _w
11
12 from .. import trace_utils
13 from ...internal.logger import get_logger
14
15 log = get_logger(__name__)
16
17 config._add("sanic", dict(_default_service="sanic", distributed_tracing=True))
18
19
20 def _wrap_response_callback(span, callback):
21 # wrap response callbacks (either sync or async function) to set span tags
22 # based on response and finish span before returning response
23
24 def update_span(response):
25 if isinstance(response, sanic.response.BaseHTTPResponse):
26 status_code = response.status
27 response_headers = response.headers
28 else:
29 # invalid response causes ServerError exception which must be handled
30 status_code = 500
31 response_headers = None
32 trace_utils.set_http_meta(span, config.sanic, status_code=status_code, response_headers=response_headers)
33 span.finish()
34
35 @wrapt.function_wrapper
36 def wrap_sync(wrapped, instance, args, kwargs):
37 r = wrapped(*args, **kwargs)
38 response = args[0]
39 update_span(response)
40 return r
41
42 @wrapt.function_wrapper
43 async def wrap_async(wrapped, instance, args, kwargs):
44 r = await wrapped(*args, **kwargs)
45 response = args[0]
46 update_span(response)
47 return r
48
49 if asyncio.iscoroutinefunction(callback):
50 return wrap_async(callback)
51
52 return wrap_sync(callback)
53
54
55 def patch():
56 """Patch the instrumented methods."""
57 if getattr(sanic, "__datadog_patch", False):
58 return
59 setattr(sanic, "__datadog_patch", True)
60 _w("sanic", "Sanic.handle_request", patch_handle_request)
61
62
63 def unpatch():
64 """Unpatch the instrumented methods."""
65 _u(sanic.Sanic, "handle_request")
66 if not getattr(sanic, "__datadog_patch", False):
67 return
68 setattr(sanic, "__datadog_patch", False)
69
70
71 async def patch_handle_request(wrapped, instance, args, kwargs):
72 """Wrapper for Sanic.handle_request"""
73 request = kwargs.get("request", args[0])
74 write_callback = kwargs.get("write_callback", args[1])
75 stream_callback = kwargs.get("stream_callback", args[2])
76
77 if request.scheme not in ("http", "https"):
78 return await wrapped(request, write_callback, stream_callback, **kwargs)
79
80 resource = "{} {}".format(request.method, request.path)
81
82 headers = request.headers.copy()
83
84 if config.sanic.distributed_tracing:
85 propagator = HTTPPropagator()
86 context = propagator.extract(headers)
87 if context.trace_id:
88 ddtrace.tracer.context_provider.activate(context)
89
90 span = ddtrace.tracer.trace(
91 "sanic.request",
92 service=trace_utils.int_service(None, config.sanic),
93 resource=resource,
94 span_type=SpanTypes.WEB,
95 )
96 sample_rate = config.sanic.get_analytics_sample_rate(use_global_config=True)
97 if sample_rate is not None:
98 span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)
99
100 method = request.method
101 url = "{scheme}://{host}{path}".format(scheme=request.scheme, host=request.host, path=request.path)
102 query_string = request.query_string
103 if isinstance(query_string, bytes):
104 query_string = query_string.decode()
105 trace_utils.set_http_meta(span, config.sanic, method=method, url=url, query=query_string, request_headers=headers)
106
107 if write_callback is not None:
108 write_callback = _wrap_response_callback(span, write_callback)
109 if stream_callback is not None:
110 stream_callback = _wrap_response_callback(span, stream_callback)
111
112 return await wrapped(request, write_callback, stream_callback, **kwargs)
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/contrib/sanic/patch.py b/ddtrace/contrib/sanic/patch.py
--- a/ddtrace/contrib/sanic/patch.py
+++ b/ddtrace/contrib/sanic/patch.py
@@ -52,6 +52,18 @@
return wrap_sync(callback)
+def _get_path(request):
+ """Get path and replace path parameter values with names if route exists."""
+ path = request.path
+ try:
+ match_info = request.match_info
+ except sanic.exceptions.SanicException:
+ return path
+ for key, value in match_info.items():
+ path = path.replace(value, f"<{key}>")
+ return path
+
+
def patch():
"""Patch the instrumented methods."""
if getattr(sanic, "__datadog_patch", False):
@@ -77,7 +89,7 @@
if request.scheme not in ("http", "https"):
return await wrapped(request, write_callback, stream_callback, **kwargs)
- resource = "{} {}".format(request.method, request.path)
+ resource = "{} {}".format(request.method, _get_path(request))
headers = request.headers.copy()
| {"golden_diff": "diff --git a/ddtrace/contrib/sanic/patch.py b/ddtrace/contrib/sanic/patch.py\n--- a/ddtrace/contrib/sanic/patch.py\n+++ b/ddtrace/contrib/sanic/patch.py\n@@ -52,6 +52,18 @@\n return wrap_sync(callback)\n \n \n+def _get_path(request):\n+ \"\"\"Get path and replace path parameter values with names if route exists.\"\"\"\n+ path = request.path\n+ try:\n+ match_info = request.match_info\n+ except sanic.exceptions.SanicException:\n+ return path\n+ for key, value in match_info.items():\n+ path = path.replace(value, f\"<{key}>\")\n+ return path\n+\n+\n def patch():\n \"\"\"Patch the instrumented methods.\"\"\"\n if getattr(sanic, \"__datadog_patch\", False):\n@@ -77,7 +89,7 @@\n if request.scheme not in (\"http\", \"https\"):\n return await wrapped(request, write_callback, stream_callback, **kwargs)\n \n- resource = \"{} {}\".format(request.method, request.path)\n+ resource = \"{} {}\".format(request.method, _get_path(request))\n \n headers = request.headers.copy()\n", "issue": "Sanic resource names gets grouped incorrectly\nHi!\r\n\r\nThe Endpoints gets grouped incorrectly in the UI when using the Sanic integration because the path parameter values are used in the resource name instead of the parameter names and thus creating one unique Endpoint for every unique method + request path.\r\n\r\nExample:\r\n\r\n\r\nIs this by design? Other integrations (node express for example) groups them by the paramater names which imo seems to be the proper way to do it.\r\n\r\nI have created a PR to solve this: #1879\n", "before_files": [{"content": "import asyncio\nimport ddtrace\nimport sanic\nfrom ddtrace import config\nfrom ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ddtrace.ext import SpanTypes\nfrom ddtrace.propagation.http import HTTPPropagator\nfrom ddtrace.utils.wrappers import unwrap as _u\nfrom ddtrace.vendor import wrapt\nfrom ddtrace.vendor.wrapt import wrap_function_wrapper as _w\n\nfrom .. 
import trace_utils\nfrom ...internal.logger import get_logger\n\nlog = get_logger(__name__)\n\nconfig._add(\"sanic\", dict(_default_service=\"sanic\", distributed_tracing=True))\n\n\ndef _wrap_response_callback(span, callback):\n # wrap response callbacks (either sync or async function) to set span tags\n # based on response and finish span before returning response\n\n def update_span(response):\n if isinstance(response, sanic.response.BaseHTTPResponse):\n status_code = response.status\n response_headers = response.headers\n else:\n # invalid response causes ServerError exception which must be handled\n status_code = 500\n response_headers = None\n trace_utils.set_http_meta(span, config.sanic, status_code=status_code, response_headers=response_headers)\n span.finish()\n\n @wrapt.function_wrapper\n def wrap_sync(wrapped, instance, args, kwargs):\n r = wrapped(*args, **kwargs)\n response = args[0]\n update_span(response)\n return r\n\n @wrapt.function_wrapper\n async def wrap_async(wrapped, instance, args, kwargs):\n r = await wrapped(*args, **kwargs)\n response = args[0]\n update_span(response)\n return r\n\n if asyncio.iscoroutinefunction(callback):\n return wrap_async(callback)\n\n return wrap_sync(callback)\n\n\ndef patch():\n \"\"\"Patch the instrumented methods.\"\"\"\n if getattr(sanic, \"__datadog_patch\", False):\n return\n setattr(sanic, \"__datadog_patch\", True)\n _w(\"sanic\", \"Sanic.handle_request\", patch_handle_request)\n\n\ndef unpatch():\n \"\"\"Unpatch the instrumented methods.\"\"\"\n _u(sanic.Sanic, \"handle_request\")\n if not getattr(sanic, \"__datadog_patch\", False):\n return\n setattr(sanic, \"__datadog_patch\", False)\n\n\nasync def patch_handle_request(wrapped, instance, args, kwargs):\n \"\"\"Wrapper for Sanic.handle_request\"\"\"\n request = kwargs.get(\"request\", args[0])\n write_callback = kwargs.get(\"write_callback\", args[1])\n stream_callback = kwargs.get(\"stream_callback\", args[2])\n\n if request.scheme not in (\"http\", \"https\"):\n return await wrapped(request, write_callback, stream_callback, **kwargs)\n\n resource = \"{} {}\".format(request.method, request.path)\n\n headers = request.headers.copy()\n\n if config.sanic.distributed_tracing:\n propagator = HTTPPropagator()\n context = propagator.extract(headers)\n if context.trace_id:\n ddtrace.tracer.context_provider.activate(context)\n\n span = ddtrace.tracer.trace(\n \"sanic.request\",\n service=trace_utils.int_service(None, config.sanic),\n resource=resource,\n span_type=SpanTypes.WEB,\n )\n sample_rate = config.sanic.get_analytics_sample_rate(use_global_config=True)\n if sample_rate is not None:\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)\n\n method = request.method\n url = \"{scheme}://{host}{path}\".format(scheme=request.scheme, host=request.host, path=request.path)\n query_string = request.query_string\n if isinstance(query_string, bytes):\n query_string = query_string.decode()\n trace_utils.set_http_meta(span, config.sanic, method=method, url=url, query=query_string, request_headers=headers)\n\n if write_callback is not None:\n write_callback = _wrap_response_callback(span, write_callback)\n if stream_callback is not None:\n stream_callback = _wrap_response_callback(span, stream_callback)\n\n return await wrapped(request, write_callback, stream_callback, **kwargs)\n", "path": "ddtrace/contrib/sanic/patch.py"}], "after_files": [{"content": "import asyncio\nimport ddtrace\nimport sanic\nfrom ddtrace import config\nfrom ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom 
ddtrace.ext import SpanTypes\nfrom ddtrace.propagation.http import HTTPPropagator\nfrom ddtrace.utils.wrappers import unwrap as _u\nfrom ddtrace.vendor import wrapt\nfrom ddtrace.vendor.wrapt import wrap_function_wrapper as _w\n\nfrom .. import trace_utils\nfrom ...internal.logger import get_logger\n\nlog = get_logger(__name__)\n\nconfig._add(\"sanic\", dict(_default_service=\"sanic\", distributed_tracing=True))\n\n\ndef _wrap_response_callback(span, callback):\n # wrap response callbacks (either sync or async function) to set span tags\n # based on response and finish span before returning response\n\n def update_span(response):\n if isinstance(response, sanic.response.BaseHTTPResponse):\n status_code = response.status\n response_headers = response.headers\n else:\n # invalid response causes ServerError exception which must be handled\n status_code = 500\n response_headers = None\n trace_utils.set_http_meta(span, config.sanic, status_code=status_code, response_headers=response_headers)\n span.finish()\n\n @wrapt.function_wrapper\n def wrap_sync(wrapped, instance, args, kwargs):\n r = wrapped(*args, **kwargs)\n response = args[0]\n update_span(response)\n return r\n\n @wrapt.function_wrapper\n async def wrap_async(wrapped, instance, args, kwargs):\n r = await wrapped(*args, **kwargs)\n response = args[0]\n update_span(response)\n return r\n\n if asyncio.iscoroutinefunction(callback):\n return wrap_async(callback)\n\n return wrap_sync(callback)\n\n\ndef _get_path(request):\n \"\"\"Get path and replace path parameter values with names if route exists.\"\"\"\n path = request.path\n try:\n match_info = request.match_info\n except sanic.exceptions.SanicException:\n return path\n for key, value in match_info.items():\n path = path.replace(value, f\"<{key}>\")\n return path\n\n\ndef patch():\n \"\"\"Patch the instrumented methods.\"\"\"\n if getattr(sanic, \"__datadog_patch\", False):\n return\n setattr(sanic, \"__datadog_patch\", True)\n _w(\"sanic\", \"Sanic.handle_request\", patch_handle_request)\n\n\ndef unpatch():\n \"\"\"Unpatch the instrumented methods.\"\"\"\n _u(sanic.Sanic, \"handle_request\")\n if not getattr(sanic, \"__datadog_patch\", False):\n return\n setattr(sanic, \"__datadog_patch\", False)\n\n\nasync def patch_handle_request(wrapped, instance, args, kwargs):\n \"\"\"Wrapper for Sanic.handle_request\"\"\"\n request = kwargs.get(\"request\", args[0])\n write_callback = kwargs.get(\"write_callback\", args[1])\n stream_callback = kwargs.get(\"stream_callback\", args[2])\n\n if request.scheme not in (\"http\", \"https\"):\n return await wrapped(request, write_callback, stream_callback, **kwargs)\n\n resource = \"{} {}\".format(request.method, _get_path(request))\n\n headers = request.headers.copy()\n\n if config.sanic.distributed_tracing:\n propagator = HTTPPropagator()\n context = propagator.extract(headers)\n if context.trace_id:\n ddtrace.tracer.context_provider.activate(context)\n\n span = ddtrace.tracer.trace(\n \"sanic.request\",\n service=trace_utils.int_service(None, config.sanic),\n resource=resource,\n span_type=SpanTypes.WEB,\n )\n sample_rate = config.sanic.get_analytics_sample_rate(use_global_config=True)\n if sample_rate is not None:\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)\n\n method = request.method\n url = \"{scheme}://{host}{path}\".format(scheme=request.scheme, host=request.host, path=request.path)\n query_string = request.query_string\n if isinstance(query_string, bytes):\n query_string = query_string.decode()\n trace_utils.set_http_meta(span, 
config.sanic, method=method, url=url, query=query_string, request_headers=headers)\n\n if write_callback is not None:\n write_callback = _wrap_response_callback(span, write_callback)\n if stream_callback is not None:\n stream_callback = _wrap_response_callback(span, stream_callback)\n\n return await wrapped(request, write_callback, stream_callback, **kwargs)\n", "path": "ddtrace/contrib/sanic/patch.py"}]} | 1,540 | 259 |
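The heart of the fix in the record above is a value-to-name substitution on the request path, which can be shown framework-free; the route and parameter values below are hypothetical.

```python
def normalise_path(path, match_info):
    # Replace each path parameter *value* with its *name*, so that
    # /users/123 and /users/456 produce the same resource name.
    for key, value in match_info.items():
        path = path.replace(value, f"<{key}>")
    return path

print(normalise_path("/users/123/posts/abc",
                     {"user_id": "123", "post_id": "abc"}))
# -> /users/<user_id>/posts/<post_id>
```

As in the patch, a plain string replace is used, so a parameter value that also happens to appear elsewhere in the path would be substituted there too.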
gh_patches_debug_28758 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-1402 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PORT] Add EndOfConversationCodes to EndOfConversation activity from Skill
> Port this change from botbuilder-dotnet/master branch:
https://github.com/microsoft/botbuilder-dotnet/pull/4235
Fixes https://github.com/microsoft/botframework-sdk/issues/5852
# Changed projects
* Microsoft.Bot.Builder.Dialogs
* Microsoft.Bot.Builder.Dialogs.Tests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from botbuilder.core import BotAdapter, StatePropertyAccessor, TurnContext
5 from botbuilder.core.skills import SkillHandler, SkillConversationReference
6
7 from botbuilder.dialogs import (
8 Dialog,
9 DialogEvents,
10 DialogSet,
11 DialogTurnStatus,
12 )
13 from botbuilder.schema import Activity, ActivityTypes
14 from botframework.connector.auth import (
15 ClaimsIdentity,
16 SkillValidation,
17 AuthenticationConstants,
18 GovernmentConstants,
19 )
20
21
22 class DialogExtensions:
23 @staticmethod
24 async def run_dialog(
25 dialog: Dialog, turn_context: TurnContext, accessor: StatePropertyAccessor
26 ):
27 """
28 Creates a dialog stack and starts a dialog, pushing it onto the stack.
29 """
30
31 dialog_set = DialogSet(accessor)
32 dialog_set.add(dialog)
33
34 dialog_context = await dialog_set.create_context(turn_context)
35
36 # Handle EoC and Reprompt event from a parent bot (can be root bot to skill or skill to skill)
37 if DialogExtensions.__is_from_parent_to_skill(turn_context):
38 # Handle remote cancellation request from parent.
39 if turn_context.activity.type == ActivityTypes.end_of_conversation:
40 if not dialog_context.stack:
41 # No dialogs to cancel, just return.
42 return
43
44 remote_cancel_text = "Skill was canceled through an EndOfConversation activity from the parent."
45 await turn_context.send_trace_activity(
46 f"Extension {Dialog.__name__}.run_dialog", label=remote_cancel_text,
47 )
48
49 # Send cancellation message to the dialog to ensure all the parents are canceled
50 # in the right order.
51 await dialog_context.cancel_all_dialogs()
52 return
53
54 # Handle a reprompt event sent from the parent.
55 if (
56 turn_context.activity.type == ActivityTypes.event
57 and turn_context.activity.name == DialogEvents.reprompt_dialog
58 ):
59 if not dialog_context.stack:
60 # No dialogs to reprompt, just return.
61 return
62
63 await dialog_context.reprompt_dialog()
64 return
65
66 # Continue or start the dialog.
67 result = await dialog_context.continue_dialog()
68 if result.status == DialogTurnStatus.Empty:
69 result = await dialog_context.begin_dialog(dialog.id)
70
71 # Skills should send EoC when the dialog completes.
72 if (
73 result.status == DialogTurnStatus.Complete
74 or result.status == DialogTurnStatus.Cancelled
75 ):
76 if DialogExtensions.__send_eoc_to_parent(turn_context):
77 end_message_text = (
78 f"Dialog {dialog.id} has **completed**. Sending EndOfConversation."
79 )
80 await turn_context.send_trace_activity(
81 f"Extension {Dialog.__name__}.run_dialog",
82 label=end_message_text,
83 value=result.result,
84 )
85
86 activity = Activity(
87 type=ActivityTypes.end_of_conversation,
88 value=result.result,
89 locale=turn_context.activity.locale,
90 )
91 await turn_context.send_activity(activity)
92
93 @staticmethod
94 def __is_from_parent_to_skill(turn_context: TurnContext) -> bool:
95 if turn_context.turn_state.get(SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY):
96 return False
97
98 claims_identity = turn_context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)
99 return isinstance(
100 claims_identity, ClaimsIdentity
101 ) and SkillValidation.is_skill_claim(claims_identity.claims)
102
103 @staticmethod
104 def __send_eoc_to_parent(turn_context: TurnContext) -> bool:
105 claims_identity = turn_context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)
106 if isinstance(
107 claims_identity, ClaimsIdentity
108 ) and SkillValidation.is_skill_claim(claims_identity.claims):
109 # EoC Activities returned by skills are bounced back to the bot by SkillHandler.
110 # In those cases we will have a SkillConversationReference instance in state.
111 skill_conversation_reference: SkillConversationReference = turn_context.turn_state.get(
112 SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY
113 )
114 if skill_conversation_reference:
115 # If the skillConversationReference.OAuthScope is for one of the supported channels,
116 # we are at the root and we should not send an EoC.
117 return (
118 skill_conversation_reference.oauth_scope
119 != AuthenticationConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE
120 and skill_conversation_reference.oauth_scope
121 != GovernmentConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE
122 )
123 return True
124
125 return False
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py b/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py
--- a/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py
+++ b/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py
@@ -1,22 +1,21 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
+from botframework.connector.auth import (
+ ClaimsIdentity,
+ SkillValidation,
+ AuthenticationConstants,
+ GovernmentConstants,
+)
from botbuilder.core import BotAdapter, StatePropertyAccessor, TurnContext
from botbuilder.core.skills import SkillHandler, SkillConversationReference
-
from botbuilder.dialogs import (
Dialog,
DialogEvents,
DialogSet,
DialogTurnStatus,
)
-from botbuilder.schema import Activity, ActivityTypes
-from botframework.connector.auth import (
- ClaimsIdentity,
- SkillValidation,
- AuthenticationConstants,
- GovernmentConstants,
-)
+from botbuilder.schema import Activity, ActivityTypes, EndOfConversationCodes
class DialogExtensions:
@@ -87,6 +86,9 @@
type=ActivityTypes.end_of_conversation,
value=result.result,
locale=turn_context.activity.locale,
+ code=EndOfConversationCodes.completed_successfully
+ if result.status == DialogTurnStatus.Complete
+ else EndOfConversationCodes.user_cancelled,
)
await turn_context.send_activity(activity)
| {"golden_diff": "diff --git a/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py b/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py\n--- a/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py\n+++ b/libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py\n@@ -1,22 +1,21 @@\n # Copyright (c) Microsoft Corporation. All rights reserved.\n # Licensed under the MIT License.\n \n+from botframework.connector.auth import (\n+ ClaimsIdentity,\n+ SkillValidation,\n+ AuthenticationConstants,\n+ GovernmentConstants,\n+)\n from botbuilder.core import BotAdapter, StatePropertyAccessor, TurnContext\n from botbuilder.core.skills import SkillHandler, SkillConversationReference\n-\n from botbuilder.dialogs import (\n Dialog,\n DialogEvents,\n DialogSet,\n DialogTurnStatus,\n )\n-from botbuilder.schema import Activity, ActivityTypes\n-from botframework.connector.auth import (\n- ClaimsIdentity,\n- SkillValidation,\n- AuthenticationConstants,\n- GovernmentConstants,\n-)\n+from botbuilder.schema import Activity, ActivityTypes, EndOfConversationCodes\n \n \n class DialogExtensions:\n@@ -87,6 +86,9 @@\n type=ActivityTypes.end_of_conversation,\n value=result.result,\n locale=turn_context.activity.locale,\n+ code=EndOfConversationCodes.completed_successfully\n+ if result.status == DialogTurnStatus.Complete\n+ else EndOfConversationCodes.user_cancelled,\n )\n await turn_context.send_activity(activity)\n", "issue": "[PORT] Add EndOfConversationCodes to EndOfConversation activity from Skill\n> Port this change from botbuilder-dotnet/master branch:\nhttps://github.com/microsoft/botbuilder-dotnet/pull/4235\n\nFixes https://github.com/microsoft/botframework-sdk/issues/5852\n\n\r\n# Changed projects\r\n* Microsoft.Bot.Builder.Dialogs\r\n* Microsoft.Bot.Builder.Dialogs.Tests\r\n\r\n\r\n\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nfrom botbuilder.core import BotAdapter, StatePropertyAccessor, TurnContext\nfrom botbuilder.core.skills import SkillHandler, SkillConversationReference\n\nfrom botbuilder.dialogs import (\n Dialog,\n DialogEvents,\n DialogSet,\n DialogTurnStatus,\n)\nfrom botbuilder.schema import Activity, ActivityTypes\nfrom botframework.connector.auth import (\n ClaimsIdentity,\n SkillValidation,\n AuthenticationConstants,\n GovernmentConstants,\n)\n\n\nclass DialogExtensions:\n @staticmethod\n async def run_dialog(\n dialog: Dialog, turn_context: TurnContext, accessor: StatePropertyAccessor\n ):\n \"\"\"\n Creates a dialog stack and starts a dialog, pushing it onto the stack.\n \"\"\"\n\n dialog_set = DialogSet(accessor)\n dialog_set.add(dialog)\n\n dialog_context = await dialog_set.create_context(turn_context)\n\n # Handle EoC and Reprompt event from a parent bot (can be root bot to skill or skill to skill)\n if DialogExtensions.__is_from_parent_to_skill(turn_context):\n # Handle remote cancellation request from parent.\n if turn_context.activity.type == ActivityTypes.end_of_conversation:\n if not dialog_context.stack:\n # No dialogs to cancel, just return.\n return\n\n remote_cancel_text = \"Skill was canceled through an EndOfConversation activity from the parent.\"\n await turn_context.send_trace_activity(\n f\"Extension {Dialog.__name__}.run_dialog\", label=remote_cancel_text,\n )\n\n # Send cancellation message to the dialog to ensure all the parents are canceled\n # in the right order.\n await dialog_context.cancel_all_dialogs()\n return\n\n # Handle a reprompt event sent from the parent.\n if (\n turn_context.activity.type == ActivityTypes.event\n and turn_context.activity.name == DialogEvents.reprompt_dialog\n ):\n if not dialog_context.stack:\n # No dialogs to reprompt, just return.\n return\n\n await dialog_context.reprompt_dialog()\n return\n\n # Continue or start the dialog.\n result = await dialog_context.continue_dialog()\n if result.status == DialogTurnStatus.Empty:\n result = await dialog_context.begin_dialog(dialog.id)\n\n # Skills should send EoC when the dialog completes.\n if (\n result.status == DialogTurnStatus.Complete\n or result.status == DialogTurnStatus.Cancelled\n ):\n if DialogExtensions.__send_eoc_to_parent(turn_context):\n end_message_text = (\n f\"Dialog {dialog.id} has **completed**. 
Sending EndOfConversation.\"\n )\n await turn_context.send_trace_activity(\n f\"Extension {Dialog.__name__}.run_dialog\",\n label=end_message_text,\n value=result.result,\n )\n\n activity = Activity(\n type=ActivityTypes.end_of_conversation,\n value=result.result,\n locale=turn_context.activity.locale,\n )\n await turn_context.send_activity(activity)\n\n @staticmethod\n def __is_from_parent_to_skill(turn_context: TurnContext) -> bool:\n if turn_context.turn_state.get(SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY):\n return False\n\n claims_identity = turn_context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)\n return isinstance(\n claims_identity, ClaimsIdentity\n ) and SkillValidation.is_skill_claim(claims_identity.claims)\n\n @staticmethod\n def __send_eoc_to_parent(turn_context: TurnContext) -> bool:\n claims_identity = turn_context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)\n if isinstance(\n claims_identity, ClaimsIdentity\n ) and SkillValidation.is_skill_claim(claims_identity.claims):\n # EoC Activities returned by skills are bounced back to the bot by SkillHandler.\n # In those cases we will have a SkillConversationReference instance in state.\n skill_conversation_reference: SkillConversationReference = turn_context.turn_state.get(\n SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY\n )\n if skill_conversation_reference:\n # If the skillConversationReference.OAuthScope is for one of the supported channels,\n # we are at the root and we should not send an EoC.\n return (\n skill_conversation_reference.oauth_scope\n != AuthenticationConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE\n and skill_conversation_reference.oauth_scope\n != GovernmentConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE\n )\n return True\n\n return False\n", "path": "libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nfrom botframework.connector.auth import (\n ClaimsIdentity,\n SkillValidation,\n AuthenticationConstants,\n GovernmentConstants,\n)\nfrom botbuilder.core import BotAdapter, StatePropertyAccessor, TurnContext\nfrom botbuilder.core.skills import SkillHandler, SkillConversationReference\nfrom botbuilder.dialogs import (\n Dialog,\n DialogEvents,\n DialogSet,\n DialogTurnStatus,\n)\nfrom botbuilder.schema import Activity, ActivityTypes, EndOfConversationCodes\n\n\nclass DialogExtensions:\n @staticmethod\n async def run_dialog(\n dialog: Dialog, turn_context: TurnContext, accessor: StatePropertyAccessor\n ):\n \"\"\"\n Creates a dialog stack and starts a dialog, pushing it onto the stack.\n \"\"\"\n\n dialog_set = DialogSet(accessor)\n dialog_set.add(dialog)\n\n dialog_context = await dialog_set.create_context(turn_context)\n\n # Handle EoC and Reprompt event from a parent bot (can be root bot to skill or skill to skill)\n if DialogExtensions.__is_from_parent_to_skill(turn_context):\n # Handle remote cancellation request from parent.\n if turn_context.activity.type == ActivityTypes.end_of_conversation:\n if not dialog_context.stack:\n # No dialogs to cancel, just return.\n return\n\n remote_cancel_text = \"Skill was canceled through an EndOfConversation activity from the parent.\"\n await turn_context.send_trace_activity(\n f\"Extension {Dialog.__name__}.run_dialog\", label=remote_cancel_text,\n )\n\n # Send cancellation message to the dialog to ensure all the parents are canceled\n # in the right order.\n await dialog_context.cancel_all_dialogs()\n return\n\n # Handle a reprompt event sent from the parent.\n if (\n turn_context.activity.type == ActivityTypes.event\n and turn_context.activity.name == DialogEvents.reprompt_dialog\n ):\n if not dialog_context.stack:\n # No dialogs to reprompt, just return.\n return\n\n await dialog_context.reprompt_dialog()\n return\n\n # Continue or start the dialog.\n result = await dialog_context.continue_dialog()\n if result.status == DialogTurnStatus.Empty:\n result = await dialog_context.begin_dialog(dialog.id)\n\n # Skills should send EoC when the dialog completes.\n if (\n result.status == DialogTurnStatus.Complete\n or result.status == DialogTurnStatus.Cancelled\n ):\n if DialogExtensions.__send_eoc_to_parent(turn_context):\n end_message_text = (\n f\"Dialog {dialog.id} has **completed**. 
Sending EndOfConversation.\"\n )\n await turn_context.send_trace_activity(\n f\"Extension {Dialog.__name__}.run_dialog\",\n label=end_message_text,\n value=result.result,\n )\n\n activity = Activity(\n type=ActivityTypes.end_of_conversation,\n value=result.result,\n locale=turn_context.activity.locale,\n code=EndOfConversationCodes.completed_successfully\n if result.status == DialogTurnStatus.Complete\n else EndOfConversationCodes.user_cancelled,\n )\n await turn_context.send_activity(activity)\n\n @staticmethod\n def __is_from_parent_to_skill(turn_context: TurnContext) -> bool:\n if turn_context.turn_state.get(SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY):\n return False\n\n claims_identity = turn_context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)\n return isinstance(\n claims_identity, ClaimsIdentity\n ) and SkillValidation.is_skill_claim(claims_identity.claims)\n\n @staticmethod\n def __send_eoc_to_parent(turn_context: TurnContext) -> bool:\n claims_identity = turn_context.turn_state.get(BotAdapter.BOT_IDENTITY_KEY)\n if isinstance(\n claims_identity, ClaimsIdentity\n ) and SkillValidation.is_skill_claim(claims_identity.claims):\n # EoC Activities returned by skills are bounced back to the bot by SkillHandler.\n # In those cases we will have a SkillConversationReference instance in state.\n skill_conversation_reference: SkillConversationReference = turn_context.turn_state.get(\n SkillHandler.SKILL_CONVERSATION_REFERENCE_KEY\n )\n if skill_conversation_reference:\n # If the skillConversationReference.OAuthScope is for one of the supported channels,\n # we are at the root and we should not send an EoC.\n return (\n skill_conversation_reference.oauth_scope\n != AuthenticationConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE\n and skill_conversation_reference.oauth_scope\n != GovernmentConstants.TO_CHANNEL_FROM_BOT_OAUTH_SCOPE\n )\n return True\n\n return False\n", "path": "libraries/botbuilder-dialogs/botbuilder/dialogs/dialog_extensions.py"}]} | 1,566 | 326 |
gh_patches_debug_40887 | rasdani/github-patches | git_diff | mozilla__pontoon-2853 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pretranslate access keys using the algorithm to extract candidate keys
Fluent Rich editor has a special UI for messages with access keys, which lists access key candidates.
We should use the same logic when pretranslating accesskeys and use the first candidate as the translation.
We should also take into account #2717.
--- END ISSUE ---
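To make the requested behaviour concrete, here is a minimal sketch of extracting access key candidates from a label string, mirroring the regex adopted in the golden diff later in this record (the function name and sample label are illustrative):

```python
import re

def extract_candidates(source: str) -> list:
    # Drop Fluent placeables ({...}) and non-word characters, then keep
    # the first occurrence of each remaining character, in order.
    keys = re.sub(r"(?s){.*?}|[\W_]", "", source)
    return list(dict.fromkeys(keys))

print(extract_candidates("Save {$count} files"))
# -> ['S', 'a', 'v', 'e', 'f', 'i', 'l', 's']
```

Pretranslation would then take the first candidate ("S" here) as the translated access key.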
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pontoon/pretranslation/transformer.py`
Content:
```
1 from copy import deepcopy
2 from typing import Callable, Optional, cast
3
4 from fluent.syntax import ast as FTL
5 from fluent.syntax.serializer import serialize_expression
6 from fluent.syntax.visitor import Transformer
7
8 from pontoon.base.fluent import is_plural_expression
9 from pontoon.base.models import Locale
10
11
12 def flatten_select_expressions(pattern: FTL.Pattern):
13 """
14 If the pattern contains any select expressions,
15 flatten it to only contain select expressions.
16 Leading and trailing elements are copied into each variant,
17 and any single leading or trailing spaces are lifted out of the select expressions.
18 """
19
20 def isSelExp(el: FTL.PatternElement):
21 return isinstance(el, FTL.Placeable) and isinstance(
22 el.expression, FTL.SelectExpression
23 )
24
25 def patternStartsWithSpace(pat: list[FTL.PatternElement]):
26 return isinstance(pat[0], FTL.TextElement) and pat[0].value.startswith(" ")
27
28 def patternEndsWithSpace(pat: list[FTL.PatternElement]):
29 return isinstance(pat[-1], FTL.TextElement) and pat[-1].value.endswith(" ")
30
31 prev = -1
32 select = None
33 for idx, placeable in filter(lambda x: isSelExp(x[1]), enumerate(pattern.elements)):
34 before = pattern.elements[prev + 1 : idx]
35 if before:
36 select = cast(FTL.SelectExpression, placeable.expression)
37 for variant in select.variants:
38 variant.value.elements[0:0] = deepcopy(before)
39 prev = idx
40 if select:
41 after = pattern.elements[prev + 1 :]
42 if after:
43 for variant in select.variants:
44 variant.value.elements += deepcopy(after)
45
46 res: list[FTL.PatternElement] = []
47 for placeable in filter(isSelExp, pattern.elements):
48 patterns = tuple(
49 map(lambda var: var.value.elements, placeable.expression.variants)
50 )
51
52 # Collect leading spaces
53 if all(map(patternStartsWithSpace, patterns)):
54 res.append(FTL.Placeable(FTL.StringLiteral(" ")))
55 for pat in patterns:
56 pat[0].value = pat[0].value[1:]
57
58 res.append(placeable)
59
60 # Collect trailing spaces
61 if all(map(patternEndsWithSpace, patterns)):
62 res.append(FTL.Placeable(FTL.StringLiteral(" ")))
63 for pat in patterns:
64 pat[-1].value = pat[-1].value[:-1]
65 pattern.elements = res
66
67
68 def create_locale_plural_variants(node: FTL.SelectExpression, locale: Locale):
69 variants: list[FTL.Variant] = []
70 source_plurals: dict[str, FTL.Variant] = {}
71 default = cast(FTL.Variant, None)
72
73 for variant in node.variants:
74 key = variant.key
75 if isinstance(key, FTL.NumberLiteral):
76 variants.append(variant)
77 else:
78 source_plurals[key.name] = variant
79 if variant.default:
80 default = variant
81
82 for plural in locale.cldr_plurals_list():
83 if plural in source_plurals.keys():
84 variant = source_plurals[plural]
85 else:
86 variant = deepcopy(default)
87 variant.key.name = plural
88 variant.default = False
89 variants.append(variant)
90
91 variants[-1].default = True
92
93 node.variants = variants
94
95
96 class PreparePretranslation(Transformer):
97 """
98 Flattens the given Pattern, uplifting selectors to the highest possible level and
99 duplicating shared parts in the variants. Transforms plural variants to match the
100 locale.
101 """
102
103 def __init__(self, locale: Locale):
104 self.locale = locale
105
106 def visit_Attribute(self, node: FTL.Attribute):
107 flatten_select_expressions(node.value)
108 return self.generic_visit(node)
109
110 def visit_Message(self, node: FTL.Message):
111 if node.value:
112 flatten_select_expressions(node.value)
113 return self.generic_visit(node)
114
115 def visit_SelectExpression(self, node: FTL.SelectExpression):
116 if is_plural_expression(node):
117 create_locale_plural_variants(node, self.locale)
118 return self.generic_visit(node)
119
120
121 class ApplyPretranslation(Transformer):
122 """
123 During `visit()`, calls `callback(source, locale) -> (translation, service)` for each pattern.
124 """
125
126 def __init__(
127 self,
128 locale: Locale,
129 entry: FTL.EntryType,
130 callback: Callable[[str, str], tuple[Optional[str], str]],
131 ):
132 prep = PreparePretranslation(locale)
133 prep.visit(entry)
134 self.callback = callback
135 self.locale = locale
136 self.services: list[str] = []
137
138 def visit_Attribute(self, node):
139 if (
140 node.id.name.endswith("accesskey")
141 and not self.locale.accesskey_localization
142 ):
143 return node
144 return self.generic_visit(node)
145
146 def visit_Pattern(self, node: FTL.Pattern):
147 has_selects = False
148 source = ""
149 for el in node.elements:
150 if isinstance(el, FTL.TextElement):
151 source += el.value
152 elif isinstance(el.expression, FTL.SelectExpression):
153 self.generic_visit(el.expression)
154 has_selects = True
155 else:
156 source += serialize_expression(el)
157 if not has_selects and source != "":
158 # Machine translation treats each line as a separate sentence,
159 # hence we replace newline characters with spaces.
160 source = source.replace("\n", " ")
161
162 translation, service = self.callback(source, self.locale)
163 if translation is None:
164 raise ValueError(
165 f"Pretranslation for `{source}` to {self.locale.code} not available."
166 )
167 node.elements = [FTL.TextElement(translation)]
168 self.services.append(service)
169 return node
170
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pontoon/pretranslation/transformer.py b/pontoon/pretranslation/transformer.py
--- a/pontoon/pretranslation/transformer.py
+++ b/pontoon/pretranslation/transformer.py
@@ -1,3 +1,5 @@
+import re
+
from copy import deepcopy
from typing import Callable, Optional, cast
@@ -93,6 +95,51 @@
node.variants = variants
+def extract_accesskey_candidates(message: FTL.Message, label: str, variant_name=None):
+ def get_source(names):
+ for attribute in message.attributes:
+ if attribute.id.name in names:
+ element = attribute.value.elements[0]
+
+ if isinstance(element, FTL.TextElement):
+ return element.value
+ elif isinstance(element.expression, FTL.SelectExpression):
+ variants = element.expression.variants
+ variant = next(
+ (v for v in variants if v.key.name == variant_name), variants[0]
+ )
+ variant_element = variant.value.elements[0]
+
+ if isinstance(variant_element, FTL.TextElement):
+ return variant_element.value
+
+ return None
+
+ prefix_end = label.index("accesskey")
+ prefix = label[0:prefix_end]
+
+ # Generate access key candidates:
+ if prefix:
+ # From a prefixed "label" attribute
+ name = f"{prefix}label"
+ source = get_source([name])
+ else:
+ # From a pre-defined list of attribute names
+ source = get_source(["label", "value", "aria-label"])
+ # From a message value
+ if not source and message.value:
+ source = message.value.elements[0].value
+
+ if not source:
+ return []
+
+ # Exclude placeables (message is flat). See bug 1447103 for details.
+ keys = re.sub(r"(?s){.*?}|[\W_]", "", source)
+
+ # Extract unique candidates
+ return list(dict.fromkeys(keys))
+
+
class PreparePretranslation(Transformer):
"""
Flattens the given Pattern, uplifting selectors to the highest possible level and
@@ -132,15 +179,43 @@
prep = PreparePretranslation(locale)
prep.visit(entry)
self.callback = callback
+ self.entry = entry
self.locale = locale
self.services: list[str] = []
- def visit_Attribute(self, node):
- if (
- node.id.name.endswith("accesskey")
- and not self.locale.accesskey_localization
- ):
- return node
+ def visit_Attribute(self, node: FTL.Pattern):
+ name = node.id.name
+
+ def set_accesskey(element, variant_name=None):
+ if isinstance(element, FTL.TextElement) and len(element.value) <= 1:
+ candidates = extract_accesskey_candidates(
+ self.entry, name, variant_name
+ )
+ if candidates:
+ element.value = candidates[0]
+ return True
+
+ if name.endswith("accesskey"):
+ if self.locale.accesskey_localization:
+ element = node.value.elements[0]
+
+ if set_accesskey(element):
+ return node
+ elif isinstance(element, FTL.Placeable) and isinstance(
+ element.expression, FTL.SelectExpression
+ ):
+ variants = element.expression.variants
+ processed_variants = 0
+ for variant in variants:
+ variant_element = variant.value.elements[0]
+ if set_accesskey(variant_element, variant.key.name):
+ processed_variants += 1
+ if processed_variants == len(variants):
+ return node
+
+ else:
+ return node
+
return self.generic_visit(node)
def visit_Pattern(self, node: FTL.Pattern):
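As a quick illustration of the prefix handling introduced in `extract_accesskey_candidates` above (the attribute name is hypothetical):

```python
label = "buttonaccesskey"                     # hypothetical attribute name
prefix = label[: label.index("accesskey")]    # -> "button"
# A prefixed access key first looks for a matching "<prefix>label" attribute;
# an unprefixed one falls back to "label", "value", "aria-label", and
# finally the message value itself.
assert f"{prefix}label" == "buttonlabel"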
| {"golden_diff": "diff --git a/pontoon/pretranslation/transformer.py b/pontoon/pretranslation/transformer.py\n--- a/pontoon/pretranslation/transformer.py\n+++ b/pontoon/pretranslation/transformer.py\n@@ -1,3 +1,5 @@\n+import re\n+\n from copy import deepcopy\n from typing import Callable, Optional, cast\n \n@@ -93,6 +95,51 @@\n node.variants = variants\n \n \n+def extract_accesskey_candidates(message: FTL.Message, label: str, variant_name=None):\n+ def get_source(names):\n+ for attribute in message.attributes:\n+ if attribute.id.name in names:\n+ element = attribute.value.elements[0]\n+\n+ if isinstance(element, FTL.TextElement):\n+ return element.value\n+ elif isinstance(element.expression, FTL.SelectExpression):\n+ variants = element.expression.variants\n+ variant = next(\n+ (v for v in variants if v.key.name == variant_name), variants[0]\n+ )\n+ variant_element = variant.value.elements[0]\n+\n+ if isinstance(variant_element, FTL.TextElement):\n+ return variant_element.value\n+\n+ return None\n+\n+ prefix_end = label.index(\"accesskey\")\n+ prefix = label[0:prefix_end]\n+\n+ # Generate access key candidates:\n+ if prefix:\n+ # From a prefixed \"label\" attribute\n+ name = f\"{prefix}label\"\n+ source = get_source([name])\n+ else:\n+ # From a pre-defined list of attribute names\n+ source = get_source([\"label\", \"value\", \"aria-label\"])\n+ # From a message value\n+ if not source and message.value:\n+ source = message.value.elements[0].value\n+\n+ if not source:\n+ return []\n+\n+ # Exclude placeables (message is flat). See bug 1447103 for details.\n+ keys = re.sub(r\"(?s){.*?}|[\\W_]\", \"\", source)\n+\n+ # Extract unique candidates\n+ return list(dict.fromkeys(keys))\n+\n+\n class PreparePretranslation(Transformer):\n \"\"\"\n Flattens the given Pattern, uplifting selectors to the highest possible level and\n@@ -132,15 +179,43 @@\n prep = PreparePretranslation(locale)\n prep.visit(entry)\n self.callback = callback\n+ self.entry = entry\n self.locale = locale\n self.services: list[str] = []\n \n- def visit_Attribute(self, node):\n- if (\n- node.id.name.endswith(\"accesskey\")\n- and not self.locale.accesskey_localization\n- ):\n- return node\n+ def visit_Attribute(self, node: FTL.Pattern):\n+ name = node.id.name\n+\n+ def set_accesskey(element, variant_name=None):\n+ if isinstance(element, FTL.TextElement) and len(element.value) <= 1:\n+ candidates = extract_accesskey_candidates(\n+ self.entry, name, variant_name\n+ )\n+ if candidates:\n+ element.value = candidates[0]\n+ return True\n+\n+ if name.endswith(\"accesskey\"):\n+ if self.locale.accesskey_localization:\n+ element = node.value.elements[0]\n+\n+ if set_accesskey(element):\n+ return node\n+ elif isinstance(element, FTL.Placeable) and isinstance(\n+ element.expression, FTL.SelectExpression\n+ ):\n+ variants = element.expression.variants\n+ processed_variants = 0\n+ for variant in variants:\n+ variant_element = variant.value.elements[0]\n+ if set_accesskey(variant_element, variant.key.name):\n+ processed_variants += 1\n+ if processed_variants == len(variants):\n+ return node\n+\n+ else:\n+ return node\n+\n return self.generic_visit(node)\n \n def visit_Pattern(self, node: FTL.Pattern):\n", "issue": "Pretranslate access keys using the algorithm to extract candidate keys\nFluent Rich editor has a special UI for messages with access keys, which lists access key candidates.\r\n\r\nWe should use the same logic when pretranslating accesskeys and use the first candidate as the translation.\r\n\r\nWe should also take into account 
#2717.\n", "before_files": [{"content": "from copy import deepcopy\nfrom typing import Callable, Optional, cast\n\nfrom fluent.syntax import ast as FTL\nfrom fluent.syntax.serializer import serialize_expression\nfrom fluent.syntax.visitor import Transformer\n\nfrom pontoon.base.fluent import is_plural_expression\nfrom pontoon.base.models import Locale\n\n\ndef flatten_select_expressions(pattern: FTL.Pattern):\n \"\"\"\n If the pattern contains any select expressions,\n flatten it to only contain select expressions.\n Leading and trailing elements are copied into each variant,\n and any single leading or trailing spaces are lifted out of the select expressions.\n \"\"\"\n\n def isSelExp(el: FTL.PatternElement):\n return isinstance(el, FTL.Placeable) and isinstance(\n el.expression, FTL.SelectExpression\n )\n\n def patternStartsWithSpace(pat: list[FTL.PatternElement]):\n return isinstance(pat[0], FTL.TextElement) and pat[0].value.startswith(\" \")\n\n def patternEndsWithSpace(pat: list[FTL.PatternElement]):\n return isinstance(pat[-1], FTL.TextElement) and pat[-1].value.endswith(\" \")\n\n prev = -1\n select = None\n for idx, placeable in filter(lambda x: isSelExp(x[1]), enumerate(pattern.elements)):\n before = pattern.elements[prev + 1 : idx]\n if before:\n select = cast(FTL.SelectExpression, placeable.expression)\n for variant in select.variants:\n variant.value.elements[0:0] = deepcopy(before)\n prev = idx\n if select:\n after = pattern.elements[prev + 1 :]\n if after:\n for variant in select.variants:\n variant.value.elements += deepcopy(after)\n\n res: list[FTL.PatternElement] = []\n for placeable in filter(isSelExp, pattern.elements):\n patterns = tuple(\n map(lambda var: var.value.elements, placeable.expression.variants)\n )\n\n # Collect leading spaces\n if all(map(patternStartsWithSpace, patterns)):\n res.append(FTL.Placeable(FTL.StringLiteral(\" \")))\n for pat in patterns:\n pat[0].value = pat[0].value[1:]\n\n res.append(placeable)\n\n # Collect trailing spaces\n if all(map(patternEndsWithSpace, patterns)):\n res.append(FTL.Placeable(FTL.StringLiteral(\" \")))\n for pat in patterns:\n pat[-1].value = pat[-1].value[:-1]\n pattern.elements = res\n\n\ndef create_locale_plural_variants(node: FTL.SelectExpression, locale: Locale):\n variants: list[FTL.Variant] = []\n source_plurals: dict[str, FTL.Variant] = {}\n default = cast(FTL.Variant, None)\n\n for variant in node.variants:\n key = variant.key\n if isinstance(key, FTL.NumberLiteral):\n variants.append(variant)\n else:\n source_plurals[key.name] = variant\n if variant.default:\n default = variant\n\n for plural in locale.cldr_plurals_list():\n if plural in source_plurals.keys():\n variant = source_plurals[plural]\n else:\n variant = deepcopy(default)\n variant.key.name = plural\n variant.default = False\n variants.append(variant)\n\n variants[-1].default = True\n\n node.variants = variants\n\n\nclass PreparePretranslation(Transformer):\n \"\"\"\n Flattens the given Pattern, uplifting selectors to the highest possible level and\n duplicating shared parts in the variants. 
Transforms plural variants to match the\n locale.\n \"\"\"\n\n def __init__(self, locale: Locale):\n self.locale = locale\n\n def visit_Attribute(self, node: FTL.Attribute):\n flatten_select_expressions(node.value)\n return self.generic_visit(node)\n\n def visit_Message(self, node: FTL.Message):\n if node.value:\n flatten_select_expressions(node.value)\n return self.generic_visit(node)\n\n def visit_SelectExpression(self, node: FTL.SelectExpression):\n if is_plural_expression(node):\n create_locale_plural_variants(node, self.locale)\n return self.generic_visit(node)\n\n\nclass ApplyPretranslation(Transformer):\n \"\"\"\n During `visit()`, calls `callback(source, locale) -> (translation, service)` for each pattern.\n \"\"\"\n\n def __init__(\n self,\n locale: Locale,\n entry: FTL.EntryType,\n callback: Callable[[str, str], tuple[Optional[str], str]],\n ):\n prep = PreparePretranslation(locale)\n prep.visit(entry)\n self.callback = callback\n self.locale = locale\n self.services: list[str] = []\n\n def visit_Attribute(self, node):\n if (\n node.id.name.endswith(\"accesskey\")\n and not self.locale.accesskey_localization\n ):\n return node\n return self.generic_visit(node)\n\n def visit_Pattern(self, node: FTL.Pattern):\n has_selects = False\n source = \"\"\n for el in node.elements:\n if isinstance(el, FTL.TextElement):\n source += el.value\n elif isinstance(el.expression, FTL.SelectExpression):\n self.generic_visit(el.expression)\n has_selects = True\n else:\n source += serialize_expression(el)\n if not has_selects and source != \"\":\n # Machine translation treats each line as a separate sentence,\n # hence we replace newline characters with spaces.\n source = source.replace(\"\\n\", \" \")\n\n translation, service = self.callback(source, self.locale)\n if translation is None:\n raise ValueError(\n f\"Pretranslation for `{source}` to {self.locale.code} not available.\"\n )\n node.elements = [FTL.TextElement(translation)]\n self.services.append(service)\n return node\n", "path": "pontoon/pretranslation/transformer.py"}], "after_files": [{"content": "import re\n\nfrom copy import deepcopy\nfrom typing import Callable, Optional, cast\n\nfrom fluent.syntax import ast as FTL\nfrom fluent.syntax.serializer import serialize_expression\nfrom fluent.syntax.visitor import Transformer\n\nfrom pontoon.base.fluent import is_plural_expression\nfrom pontoon.base.models import Locale\n\n\ndef flatten_select_expressions(pattern: FTL.Pattern):\n \"\"\"\n If the pattern contains any select expressions,\n flatten it to only contain select expressions.\n Leading and trailing elements are copied into each variant,\n and any single leading or trailing spaces are lifted out of the select expressions.\n \"\"\"\n\n def isSelExp(el: FTL.PatternElement):\n return isinstance(el, FTL.Placeable) and isinstance(\n el.expression, FTL.SelectExpression\n )\n\n def patternStartsWithSpace(pat: list[FTL.PatternElement]):\n return isinstance(pat[0], FTL.TextElement) and pat[0].value.startswith(\" \")\n\n def patternEndsWithSpace(pat: list[FTL.PatternElement]):\n return isinstance(pat[-1], FTL.TextElement) and pat[-1].value.endswith(\" \")\n\n prev = -1\n select = None\n for idx, placeable in filter(lambda x: isSelExp(x[1]), enumerate(pattern.elements)):\n before = pattern.elements[prev + 1 : idx]\n if before:\n select = cast(FTL.SelectExpression, placeable.expression)\n for variant in select.variants:\n variant.value.elements[0:0] = deepcopy(before)\n prev = idx\n if select:\n after = pattern.elements[prev + 1 :]\n if after:\n 
for variant in select.variants:\n variant.value.elements += deepcopy(after)\n\n res: list[FTL.PatternElement] = []\n for placeable in filter(isSelExp, pattern.elements):\n patterns = tuple(\n map(lambda var: var.value.elements, placeable.expression.variants)\n )\n\n # Collect leading spaces\n if all(map(patternStartsWithSpace, patterns)):\n res.append(FTL.Placeable(FTL.StringLiteral(\" \")))\n for pat in patterns:\n pat[0].value = pat[0].value[1:]\n\n res.append(placeable)\n\n # Collect trailing spaces\n if all(map(patternEndsWithSpace, patterns)):\n res.append(FTL.Placeable(FTL.StringLiteral(\" \")))\n for pat in patterns:\n pat[-1].value = pat[-1].value[:-1]\n pattern.elements = res\n\n\ndef create_locale_plural_variants(node: FTL.SelectExpression, locale: Locale):\n variants: list[FTL.Variant] = []\n source_plurals: dict[str, FTL.Variant] = {}\n default = cast(FTL.Variant, None)\n\n for variant in node.variants:\n key = variant.key\n if isinstance(key, FTL.NumberLiteral):\n variants.append(variant)\n else:\n source_plurals[key.name] = variant\n if variant.default:\n default = variant\n\n for plural in locale.cldr_plurals_list():\n if plural in source_plurals.keys():\n variant = source_plurals[plural]\n else:\n variant = deepcopy(default)\n variant.key.name = plural\n variant.default = False\n variants.append(variant)\n\n variants[-1].default = True\n\n node.variants = variants\n\n\ndef extract_accesskey_candidates(message: FTL.Message, label: str, variant_name=None):\n def get_source(names):\n for attribute in message.attributes:\n if attribute.id.name in names:\n element = attribute.value.elements[0]\n\n if isinstance(element, FTL.TextElement):\n return element.value\n elif isinstance(element.expression, FTL.SelectExpression):\n variants = element.expression.variants\n variant = next(\n (v for v in variants if v.key.name == variant_name), variants[0]\n )\n variant_element = variant.value.elements[0]\n\n if isinstance(variant_element, FTL.TextElement):\n return variant_element.value\n\n return None\n\n prefix_end = label.index(\"accesskey\")\n prefix = label[0:prefix_end]\n\n # Generate access key candidates:\n if prefix:\n # From a prefixed \"label\" attribute\n name = f\"{prefix}label\"\n source = get_source([name])\n else:\n # From a pre-defined list of attribute names\n source = get_source([\"label\", \"value\", \"aria-label\"])\n # From a message value\n if not source and message.value:\n source = message.value.elements[0].value\n\n if not source:\n return []\n\n # Exclude placeables (message is flat). See bug 1447103 for details.\n keys = re.sub(r\"(?s){.*?}|[\\W_]\", \"\", source)\n\n # Extract unique candidates\n return list(dict.fromkeys(keys))\n\n\nclass PreparePretranslation(Transformer):\n \"\"\"\n Flattens the given Pattern, uplifting selectors to the highest possible level and\n duplicating shared parts in the variants. 
Transforms plural variants to match the\n locale.\n \"\"\"\n\n def __init__(self, locale: Locale):\n self.locale = locale\n\n def visit_Attribute(self, node: FTL.Attribute):\n flatten_select_expressions(node.value)\n return self.generic_visit(node)\n\n def visit_Message(self, node: FTL.Message):\n if node.value:\n flatten_select_expressions(node.value)\n return self.generic_visit(node)\n\n def visit_SelectExpression(self, node: FTL.SelectExpression):\n if is_plural_expression(node):\n create_locale_plural_variants(node, self.locale)\n return self.generic_visit(node)\n\n\nclass ApplyPretranslation(Transformer):\n \"\"\"\n During `visit()`, calls `callback(source, locale) -> (translation, service)` for each pattern.\n \"\"\"\n\n def __init__(\n self,\n locale: Locale,\n entry: FTL.EntryType,\n callback: Callable[[str, str], tuple[Optional[str], str]],\n ):\n prep = PreparePretranslation(locale)\n prep.visit(entry)\n self.callback = callback\n self.entry = entry\n self.locale = locale\n self.services: list[str] = []\n\n def visit_Attribute(self, node: FTL.Pattern):\n name = node.id.name\n\n def set_accesskey(element, variant_name=None):\n if isinstance(element, FTL.TextElement) and len(element.value) <= 1:\n candidates = extract_accesskey_candidates(\n self.entry, name, variant_name\n )\n if candidates:\n element.value = candidates[0]\n return True\n\n if name.endswith(\"accesskey\"):\n if self.locale.accesskey_localization:\n element = node.value.elements[0]\n\n if set_accesskey(element):\n return node\n elif isinstance(element, FTL.Placeable) and isinstance(\n element.expression, FTL.SelectExpression\n ):\n variants = element.expression.variants\n processed_variants = 0\n for variant in variants:\n variant_element = variant.value.elements[0]\n if set_accesskey(variant_element, variant.key.name):\n processed_variants += 1\n if processed_variants == len(variants):\n return node\n\n else:\n return node\n\n return self.generic_visit(node)\n\n def visit_Pattern(self, node: FTL.Pattern):\n has_selects = False\n source = \"\"\n for el in node.elements:\n if isinstance(el, FTL.TextElement):\n source += el.value\n elif isinstance(el.expression, FTL.SelectExpression):\n self.generic_visit(el.expression)\n has_selects = True\n else:\n source += serialize_expression(el)\n if not has_selects and source != \"\":\n # Machine translation treats each line as a separate sentence,\n # hence we replace newline characters with spaces.\n source = source.replace(\"\\n\", \" \")\n\n translation, service = self.callback(source, self.locale)\n if translation is None:\n raise ValueError(\n f\"Pretranslation for `{source}` to {self.locale.code} not available.\"\n )\n node.elements = [FTL.TextElement(translation)]\n self.services.append(service)\n return node\n", "path": "pontoon/pretranslation/transformer.py"}]} | 1,969 | 863 |
gh_patches_debug_39375 | rasdani/github-patches | git_diff | opendatacube__datacube-core-694 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
write_geotiff helper function fails if CRS is string, not object
### Expected behaviour
The write_geotiff helper function assumes that you will pass it a datacube CRS object. If you are writing out a GeoTIFF from data that was not loaded using `dc.load`, this CRS object is not present.
E.g., I read a GeoTIFF file produced by datacube-stats into a notebook, ran some analysis on it, and wanted to write it back out to GeoTIFF. I have a CRS string from the attributes of my original GeoTIFF, but no datacube CRS object, so the write_geotiff function fails.
### Error
```
AttributeError Traceback (most recent call last)
<ipython-input-41-736bab55bae5> in <module>()
3 Differenceds.attrs['crs'] = (GeotiffData.crs)
4
----> 5 write_geotiff(PercentileConfidence, Differenceds)
/g/data/v10/public/modules/dea/20180515/lib/python3.6/site-packages/datacube/helpers.py in write_geotiff(filename, dataset, profile_override, time_index)
44 profile = DEFAULT_PROFILE.copy()
45 profile.update({
---> 46 'width': dataset.dims[dataset.crs.dimensions[1]],
47 'height': dataset.dims[dataset.crs.dimensions[0]],
48 'transform': dataset.affine,
AttributeError: 'str' object has no attribute 'dimensions'
```
--- END ISSUE ---
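The fix, in essence, is to tolerate both representations. A minimal sketch of the normalisation step, mirroring the `_norm_crs` helper introduced in the patch below:

```python
from datacube.utils import geometry

def norm_crs(crs):
    """Accept a geometry.CRS instance, a CRS string, or None."""
    if crs is None or isinstance(crs, geometry.CRS):
        return crs
    if isinstance(crs, str):
        return geometry.CRS(crs)   # e.g. 'EPSG:4326'
    raise ValueError('Can not interpret {} as CRS'.format(type(crs)))
```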
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `datacube/helpers.py`
Content:
```
1 """
2 Useful functions for Datacube users
3
4 Not used internally, those should go in `utils.py`
5 """
6
7 import numpy as np
8 import rasterio
9
10 DEFAULT_PROFILE = {
11 'blockxsize': 256,
12 'blockysize': 256,
13 'compress': 'lzw',
14 'driver': 'GTiff',
15 'interleave': 'band',
16 'nodata': 0.0,
17 'tiled': True}
18
19
20 def write_geotiff(filename, dataset, profile_override=None, time_index=None):
21 """
22 Write an ODC style xarray.Dataset to a GeoTIFF file.
23
24 :param filename: Output filename
25 :param dataset: xarray dataset containing one or more bands to write to a file.
26 :param profile_override: option dict, overrides rasterio file creation options.
27 :param time_index: DEPRECATED
28 """
29 profile_override = profile_override or {}
30
31 if time_index is not None:
32 raise ValueError('''The write_geotiff function no longer supports passing in `time_index`.
33 The same function can be achieved by calling `dataset.isel(time=<time_index>)` before passing
34 in your dataset. It was removed because it made the function much less useful for more advanced cases.''')
35
36 try:
37 dtypes = {val.dtype for val in dataset.data_vars.values()}
38 assert len(dtypes) == 1 # Check for multiple dtypes
39 except AttributeError:
40 dtypes = [dataset.dtype]
41
42 profile = DEFAULT_PROFILE.copy()
43 profile.update({
44 'width': dataset.dims[dataset.crs.dimensions[1]],
45 'height': dataset.dims[dataset.crs.dimensions[0]],
46 'transform': dataset.affine,
47 'crs': dataset.crs.crs_str,
48 'count': len(dataset.data_vars),
49 'dtype': str(dtypes.pop())
50 })
51 profile.update(profile_override)
52
53 _calculate_blocksize(profile)
54
55 with rasterio.open(str(filename), 'w', **profile) as dest:
56 if hasattr(dataset, 'data_vars'):
57 for bandnum, data in enumerate(dataset.data_vars.values(), start=1):
58 dest.write(data.data, bandnum)
59
60
61 def _calculate_blocksize(profile):
62 # Block size must be smaller than the image size, and for geotiffs must be divisible by 16
63 # Fix for small images.
64 if profile['blockxsize'] > profile['width']:
65 if profile['width'] % 16 == 0 or profile['width'] < 16:
66 profile['blockxsize'] = profile['width']
67 else:
68 profile['blockxsize'] = 16
69
70 if profile['blockysize'] > profile['height']:
71 if profile['height'] % 16 == 0 or profile['height'] < 16:
72 profile['blockysize'] = profile['height']
73 else:
74 profile['blockysize'] = 16
75
76
77 def ga_pq_fuser(dest, src):
78 """
79 Fuse two Geoscience Australia Pixel Quality ndarrays
80
81 To be used as a `fuse_func` when loaded `grouped` data, for example when grouping
82 by solar day to avoid duplicate data from scene overlaps.
83 """
84 valid_bit = 8
85 valid_val = (1 << valid_bit)
86
87 no_data_dest_mask = ~(dest & valid_val).astype(bool)
88 np.copyto(dest, src, where=no_data_dest_mask)
89
90 both_data_mask = (valid_val & dest & src).astype(bool)
91 np.copyto(dest, src & dest, where=both_data_mask)
92
```
Path: `datacube/utils/xarray_geoextensions.py`
Content:
```
1 """
2 Add geometric extensions to :class:`xarray.Dataset` and :class:`xarray.DataArray` for use
3 with Data Cube by Monkey Patching those classes.
4
5 This extension is reliant on an `xarray` object having a `.crs` property of type
6 :class:`datacube.utils.geometry.CRS`. This is used to inspect the spatial dimensions of the
7 :class:`Dataset` or :class:`DataArray`, and provide new attributes for accessing a
8 :class:`datacube.utils.geometry.GeoBox`, affine transform and extent for the dataset as
9 `.geobox`, `.affine` and `.extent` respectively.
10
11 """
12
13 import xarray
14 from affine import Affine
15
16 from datacube.utils import data_resolution_and_offset, geometry
17
18
19 def _xarray_affine(obj):
20 dims = obj.crs.dimensions
21 xres, xoff = data_resolution_and_offset(obj[dims[1]].values)
22 yres, yoff = data_resolution_and_offset(obj[dims[0]].values)
23 return Affine.translation(xoff, yoff) * Affine.scale(xres, yres)
24
25
26 def _xarray_extent(obj):
27 return obj.geobox.extent
28
29
30 def _xarray_geobox(obj):
31 dims = obj.crs.dimensions
32 return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, obj.crs)
33
34
35 xarray.Dataset.geobox = property(_xarray_geobox)
36 xarray.Dataset.affine = property(_xarray_affine)
37 xarray.Dataset.extent = property(_xarray_extent)
38 xarray.DataArray.geobox = property(_xarray_geobox)
39 xarray.DataArray.affine = property(_xarray_affine)
40 xarray.DataArray.extent = property(_xarray_extent)
41
```
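For orientation, a sketch of how these monkey-patched properties are typically used; the dataset is assumed to carry a `.crs` attribute, e.g. one returned by `dc.load`:

```python
import datacube.utils.xarray_geoextensions  # noqa: F401  (installs the properties)

# ds = dc.load(product=..., ...)   # any xarray object with a `.crs` attribute
# ds.geobox   # GeoBox built from ds.crs and the spatial coordinate arrays
# ds.affine   # affine transform derived from the coordinate spacing
# ds.extent   # footprint geometry of the geobox
```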
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/datacube/helpers.py b/datacube/helpers.py
--- a/datacube/helpers.py
+++ b/datacube/helpers.py
@@ -26,12 +26,17 @@
:param profile_override: option dict, overrides rasterio file creation options.
:param time_index: DEPRECATED
"""
- profile_override = profile_override or {}
-
if time_index is not None:
raise ValueError('''The write_geotiff function no longer supports passing in `time_index`.
The same function can be achieved by calling `dataset.isel(time=<time_index>)` before passing
- in your dataset. It was removed because it made the function much less useful for more advanced cases.''')
+ in your dataset. It was removed because it made the function much less useful for more advanced cases.''')
+
+ profile_override = profile_override or {}
+
+ geobox = getattr(dataset, 'geobox', None)
+
+ if geobox is None:
+ raise ValueError('Can only write datasets with specified `crs` attribute')
try:
dtypes = {val.dtype for val in dataset.data_vars.values()}
@@ -40,11 +45,13 @@
dtypes = [dataset.dtype]
profile = DEFAULT_PROFILE.copy()
+ height, width = geobox.shape
+
profile.update({
- 'width': dataset.dims[dataset.crs.dimensions[1]],
- 'height': dataset.dims[dataset.crs.dimensions[0]],
- 'transform': dataset.affine,
- 'crs': dataset.crs.crs_str,
+ 'width': width,
+ 'height': height,
+ 'transform': geobox.affine,
+ 'crs': geobox.crs.crs_str,
'count': len(dataset.data_vars),
'dtype': str(dtypes.pop())
})
diff --git a/datacube/utils/xarray_geoextensions.py b/datacube/utils/xarray_geoextensions.py
--- a/datacube/utils/xarray_geoextensions.py
+++ b/datacube/utils/xarray_geoextensions.py
@@ -16,20 +16,38 @@
from datacube.utils import data_resolution_and_offset, geometry
+def _norm_crs(crs):
+ if crs is None or isinstance(crs, geometry.CRS):
+ return crs
+ elif isinstance(crs, str):
+ return geometry.CRS(crs)
+ else:
+ raise ValueError('Can not interpret {} as CRS'.format(type(crs)))
+
+
def _xarray_affine(obj):
- dims = obj.crs.dimensions
+ crs = _norm_crs(obj.crs)
+ if crs is None:
+ return None
+
+ dims = crs.dimensions
xres, xoff = data_resolution_and_offset(obj[dims[1]].values)
yres, yoff = data_resolution_and_offset(obj[dims[0]].values)
return Affine.translation(xoff, yoff) * Affine.scale(xres, yres)
def _xarray_extent(obj):
- return obj.geobox.extent
+ geobox = obj.geobox
+ return None if geobox is None else geobox.extent
def _xarray_geobox(obj):
- dims = obj.crs.dimensions
- return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, obj.crs)
+ crs = _norm_crs(obj.crs)
+ if crs is None:
+ return None
+
+ dims = crs.dimensions
+ return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, crs)
xarray.Dataset.geobox = property(_xarray_geobox)
| {"golden_diff": "diff --git a/datacube/helpers.py b/datacube/helpers.py\n--- a/datacube/helpers.py\n+++ b/datacube/helpers.py\n@@ -26,12 +26,17 @@\n :param profile_override: option dict, overrides rasterio file creation options.\n :param time_index: DEPRECATED\n \"\"\"\n- profile_override = profile_override or {}\n-\n if time_index is not None:\n raise ValueError('''The write_geotiff function no longer supports passing in `time_index`.\n The same function can be achieved by calling `dataset.isel(time=<time_index>)` before passing\n- in your dataset. It was removed because it made the function much less useful for more advanced cases.''')\n+ in your dataset. It was removed because it made the function much less useful for more advanced cases.''')\n+\n+ profile_override = profile_override or {}\n+\n+ geobox = getattr(dataset, 'geobox', None)\n+\n+ if geobox is None:\n+ raise ValueError('Can only write datasets with specified `crs` attribute')\n \n try:\n dtypes = {val.dtype for val in dataset.data_vars.values()}\n@@ -40,11 +45,13 @@\n dtypes = [dataset.dtype]\n \n profile = DEFAULT_PROFILE.copy()\n+ height, width = geobox.shape\n+\n profile.update({\n- 'width': dataset.dims[dataset.crs.dimensions[1]],\n- 'height': dataset.dims[dataset.crs.dimensions[0]],\n- 'transform': dataset.affine,\n- 'crs': dataset.crs.crs_str,\n+ 'width': width,\n+ 'height': height,\n+ 'transform': geobox.affine,\n+ 'crs': geobox.crs.crs_str,\n 'count': len(dataset.data_vars),\n 'dtype': str(dtypes.pop())\n })\ndiff --git a/datacube/utils/xarray_geoextensions.py b/datacube/utils/xarray_geoextensions.py\n--- a/datacube/utils/xarray_geoextensions.py\n+++ b/datacube/utils/xarray_geoextensions.py\n@@ -16,20 +16,38 @@\n from datacube.utils import data_resolution_and_offset, geometry\n \n \n+def _norm_crs(crs):\n+ if crs is None or isinstance(crs, geometry.CRS):\n+ return crs\n+ elif isinstance(crs, str):\n+ return geometry.CRS(crs)\n+ else:\n+ raise ValueError('Can not interpret {} as CRS'.format(type(crs)))\n+\n+\n def _xarray_affine(obj):\n- dims = obj.crs.dimensions\n+ crs = _norm_crs(obj.crs)\n+ if crs is None:\n+ return None\n+\n+ dims = crs.dimensions\n xres, xoff = data_resolution_and_offset(obj[dims[1]].values)\n yres, yoff = data_resolution_and_offset(obj[dims[0]].values)\n return Affine.translation(xoff, yoff) * Affine.scale(xres, yres)\n \n \n def _xarray_extent(obj):\n- return obj.geobox.extent\n+ geobox = obj.geobox\n+ return None if geobox is None else geobox.extent\n \n \n def _xarray_geobox(obj):\n- dims = obj.crs.dimensions\n- return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, obj.crs)\n+ crs = _norm_crs(obj.crs)\n+ if crs is None:\n+ return None\n+\n+ dims = crs.dimensions\n+ return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, crs)\n \n \n xarray.Dataset.geobox = property(_xarray_geobox)\n", "issue": "write_geotiff helper function fails if CRS is string, not object\n### Expected behaviour\r\nThe write_geotiff helper function assumes that you will pass the function a datacube crs object. If you are writing out a geotiff from data that was not loaded using `dc.load`, this crs object is not present. \r\nE.g. I read a Geotiff file produced by datacube-stats into a Notebook, ran some analysis on it, and wanted to write it back out to Geotiff. 
I have a crs string from the attributes of my original Geotiff, but no datacube crs object, so the write_geotiff function fails.\r\n\r\n### Error\r\n```\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-41-736bab55bae5> in <module>()\r\n 3 Differenceds.attrs['crs'] = (GeotiffData.crs)\r\n 4 \r\n----> 5 write_geotiff(PercentileConfidence, Differenceds)\r\n\r\n/g/data/v10/public/modules/dea/20180515/lib/python3.6/site-packages/datacube/helpers.py in write_geotiff(filename, dataset, profile_override, time_index)\r\n 44 profile = DEFAULT_PROFILE.copy()\r\n 45 profile.update({\r\n---> 46 'width': dataset.dims[dataset.crs.dimensions[1]],\r\n 47 'height': dataset.dims[dataset.crs.dimensions[0]],\r\n 48 'transform': dataset.affine,\r\n\r\nAttributeError: 'str' object has no attribute 'dimensions'\r\n```\n", "before_files": [{"content": "\"\"\"\nUseful functions for Datacube users\n\nNot used internally, those should go in `utils.py`\n\"\"\"\n\nimport numpy as np\nimport rasterio\n\nDEFAULT_PROFILE = {\n 'blockxsize': 256,\n 'blockysize': 256,\n 'compress': 'lzw',\n 'driver': 'GTiff',\n 'interleave': 'band',\n 'nodata': 0.0,\n 'tiled': True}\n\n\ndef write_geotiff(filename, dataset, profile_override=None, time_index=None):\n \"\"\"\n Write an ODC style xarray.Dataset to a GeoTIFF file.\n\n :param filename: Output filename\n :param dataset: xarray dataset containing one or more bands to write to a file.\n :param profile_override: option dict, overrides rasterio file creation options.\n :param time_index: DEPRECATED\n \"\"\"\n profile_override = profile_override or {}\n\n if time_index is not None:\n raise ValueError('''The write_geotiff function no longer supports passing in `time_index`.\n The same function can be achieved by calling `dataset.isel(time=<time_index>)` before passing\n in your dataset. 
It was removed because it made the function much less useful for more advanced cases.''')\n\n try:\n dtypes = {val.dtype for val in dataset.data_vars.values()}\n assert len(dtypes) == 1 # Check for multiple dtypes\n except AttributeError:\n dtypes = [dataset.dtype]\n\n profile = DEFAULT_PROFILE.copy()\n profile.update({\n 'width': dataset.dims[dataset.crs.dimensions[1]],\n 'height': dataset.dims[dataset.crs.dimensions[0]],\n 'transform': dataset.affine,\n 'crs': dataset.crs.crs_str,\n 'count': len(dataset.data_vars),\n 'dtype': str(dtypes.pop())\n })\n profile.update(profile_override)\n\n _calculate_blocksize(profile)\n\n with rasterio.open(str(filename), 'w', **profile) as dest:\n if hasattr(dataset, 'data_vars'):\n for bandnum, data in enumerate(dataset.data_vars.values(), start=1):\n dest.write(data.data, bandnum)\n\n\ndef _calculate_blocksize(profile):\n # Block size must be smaller than the image size, and for geotiffs must be divisible by 16\n # Fix for small images.\n if profile['blockxsize'] > profile['width']:\n if profile['width'] % 16 == 0 or profile['width'] < 16:\n profile['blockxsize'] = profile['width']\n else:\n profile['blockxsize'] = 16\n\n if profile['blockysize'] > profile['height']:\n if profile['height'] % 16 == 0 or profile['height'] < 16:\n profile['blockysize'] = profile['height']\n else:\n profile['blockysize'] = 16\n\n\ndef ga_pq_fuser(dest, src):\n \"\"\"\n Fuse two Geoscience Australia Pixel Quality ndarrays\n\n To be used as a `fuse_func` when loaded `grouped` data, for example when grouping\n by solar day to avoid duplicate data from scene overlaps.\n \"\"\"\n valid_bit = 8\n valid_val = (1 << valid_bit)\n\n no_data_dest_mask = ~(dest & valid_val).astype(bool)\n np.copyto(dest, src, where=no_data_dest_mask)\n\n both_data_mask = (valid_val & dest & src).astype(bool)\n np.copyto(dest, src & dest, where=both_data_mask)\n", "path": "datacube/helpers.py"}, {"content": "\"\"\"\nAdd geometric extensions to :class:`xarray.Dataset` and :class:`xarray.DataArray` for use\nwith Data Cube by Monkey Patching those classes.\n\nThis extension is reliant on an `xarray` object having a `.crs` property of type\n:class:`datacube.utils.geometry.CRS`. 
This is used to inspect the spatial dimensions of the\n:class:`Dataset` or :class:`DataArray`, and provide new attributes for accessing a\n:class:`datacube.utils.geometry.GeoBox`, affine transform and extent for the dataset as\n`.geobox`, `.affine` and `.extent` respectively.\n\n\"\"\"\n\nimport xarray\nfrom affine import Affine\n\nfrom datacube.utils import data_resolution_and_offset, geometry\n\n\ndef _xarray_affine(obj):\n dims = obj.crs.dimensions\n xres, xoff = data_resolution_and_offset(obj[dims[1]].values)\n yres, yoff = data_resolution_and_offset(obj[dims[0]].values)\n return Affine.translation(xoff, yoff) * Affine.scale(xres, yres)\n\n\ndef _xarray_extent(obj):\n return obj.geobox.extent\n\n\ndef _xarray_geobox(obj):\n dims = obj.crs.dimensions\n return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, obj.crs)\n\n\nxarray.Dataset.geobox = property(_xarray_geobox)\nxarray.Dataset.affine = property(_xarray_affine)\nxarray.Dataset.extent = property(_xarray_extent)\nxarray.DataArray.geobox = property(_xarray_geobox)\nxarray.DataArray.affine = property(_xarray_affine)\nxarray.DataArray.extent = property(_xarray_extent)\n", "path": "datacube/utils/xarray_geoextensions.py"}], "after_files": [{"content": "\"\"\"\nUseful functions for Datacube users\n\nNot used internally, those should go in `utils.py`\n\"\"\"\n\nimport numpy as np\nimport rasterio\n\nDEFAULT_PROFILE = {\n 'blockxsize': 256,\n 'blockysize': 256,\n 'compress': 'lzw',\n 'driver': 'GTiff',\n 'interleave': 'band',\n 'nodata': 0.0,\n 'tiled': True}\n\n\ndef write_geotiff(filename, dataset, profile_override=None, time_index=None):\n \"\"\"\n Write an ODC style xarray.Dataset to a GeoTIFF file.\n\n :param filename: Output filename\n :param dataset: xarray dataset containing one or more bands to write to a file.\n :param profile_override: option dict, overrides rasterio file creation options.\n :param time_index: DEPRECATED\n \"\"\"\n if time_index is not None:\n raise ValueError('''The write_geotiff function no longer supports passing in `time_index`.\n The same function can be achieved by calling `dataset.isel(time=<time_index>)` before passing\n in your dataset. 
It was removed because it made the function much less useful for more advanced cases.''')\n\n profile_override = profile_override or {}\n\n geobox = getattr(dataset, 'geobox', None)\n\n if geobox is None:\n raise ValueError('Can only write datasets with specified `crs` attribute')\n\n try:\n dtypes = {val.dtype for val in dataset.data_vars.values()}\n assert len(dtypes) == 1 # Check for multiple dtypes\n except AttributeError:\n dtypes = [dataset.dtype]\n\n profile = DEFAULT_PROFILE.copy()\n height, width = geobox.shape\n\n profile.update({\n 'width': width,\n 'height': height,\n 'transform': geobox.affine,\n 'crs': geobox.crs.crs_str,\n 'count': len(dataset.data_vars),\n 'dtype': str(dtypes.pop())\n })\n profile.update(profile_override)\n\n _calculate_blocksize(profile)\n\n with rasterio.open(str(filename), 'w', **profile) as dest:\n if hasattr(dataset, 'data_vars'):\n for bandnum, data in enumerate(dataset.data_vars.values(), start=1):\n dest.write(data.data, bandnum)\n\n\ndef _calculate_blocksize(profile):\n # Block size must be smaller than the image size, and for geotiffs must be divisible by 16\n # Fix for small images.\n if profile['blockxsize'] > profile['width']:\n if profile['width'] % 16 == 0 or profile['width'] < 16:\n profile['blockxsize'] = profile['width']\n else:\n profile['blockxsize'] = 16\n\n if profile['blockysize'] > profile['height']:\n if profile['height'] % 16 == 0 or profile['height'] < 16:\n profile['blockysize'] = profile['height']\n else:\n profile['blockysize'] = 16\n\n\ndef ga_pq_fuser(dest, src):\n \"\"\"\n Fuse two Geoscience Australia Pixel Quality ndarrays\n\n To be used as a `fuse_func` when loaded `grouped` data, for example when grouping\n by solar day to avoid duplicate data from scene overlaps.\n \"\"\"\n valid_bit = 8\n valid_val = (1 << valid_bit)\n\n no_data_dest_mask = ~(dest & valid_val).astype(bool)\n np.copyto(dest, src, where=no_data_dest_mask)\n\n both_data_mask = (valid_val & dest & src).astype(bool)\n np.copyto(dest, src & dest, where=both_data_mask)\n", "path": "datacube/helpers.py"}, {"content": "\"\"\"\nAdd geometric extensions to :class:`xarray.Dataset` and :class:`xarray.DataArray` for use\nwith Data Cube by Monkey Patching those classes.\n\nThis extension is reliant on an `xarray` object having a `.crs` property of type\n:class:`datacube.utils.geometry.CRS`. 
This is used to inspect the spatial dimensions of the\n:class:`Dataset` or :class:`DataArray`, and provide new attributes for accessing a\n:class:`datacube.utils.geometry.GeoBox`, affine transform and extent for the dataset as\n`.geobox`, `.affine` and `.extent` respectively.\n\n\"\"\"\n\nimport xarray\nfrom affine import Affine\n\nfrom datacube.utils import data_resolution_and_offset, geometry\n\n\ndef _norm_crs(crs):\n if crs is None or isinstance(crs, geometry.CRS):\n return crs\n elif isinstance(crs, str):\n return geometry.CRS(crs)\n else:\n raise ValueError('Can not interpret {} as CRS'.format(type(crs)))\n\n\ndef _xarray_affine(obj):\n crs = _norm_crs(obj.crs)\n if crs is None:\n return None\n\n dims = crs.dimensions\n xres, xoff = data_resolution_and_offset(obj[dims[1]].values)\n yres, yoff = data_resolution_and_offset(obj[dims[0]].values)\n return Affine.translation(xoff, yoff) * Affine.scale(xres, yres)\n\n\ndef _xarray_extent(obj):\n geobox = obj.geobox\n return None if geobox is None else geobox.extent\n\n\ndef _xarray_geobox(obj):\n crs = _norm_crs(obj.crs)\n if crs is None:\n return None\n\n dims = crs.dimensions\n return geometry.GeoBox(obj[dims[1]].size, obj[dims[0]].size, obj.affine, crs)\n\n\nxarray.Dataset.geobox = property(_xarray_geobox)\nxarray.Dataset.affine = property(_xarray_affine)\nxarray.Dataset.extent = property(_xarray_extent)\nxarray.DataArray.geobox = property(_xarray_geobox)\nxarray.DataArray.affine = property(_xarray_affine)\nxarray.DataArray.extent = property(_xarray_extent)\n", "path": "datacube/utils/xarray_geoextensions.py"}]} | 2,037 | 825 |
gh_patches_debug_41475 | rasdani/github-patches | git_diff | automl__auto-sklearn-1407 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
run_wrapper() got an unexpected keyword argument 'pure'
While fitting a classifier (*output[0]* with data *output[1]* to *output[4]*), I get the following error; I think it's a packaging issue:
> output[0].fit(output[1], output[2], output[3], output[4])
> File "/usr/local/lib/python3.8/dist-packages/autosklearn/estimators.py", line 1045, in fit
super().fit(
> File "/usr/local/lib/python3.8/dist-packages/autosklearn/estimators.py", line 375, in fit
self.automl_.fit(load_models=self.load_models, **kwargs)
> File "/usr/local/lib/python3.8/dist-packages/autosklearn/automl.py", line 2056, in fit
return super().fit(
> File "/usr/local/lib/python3.8/dist-packages/autosklearn/automl.py", line 931, in fit
_proc_smac.run_smbo()
> File "/usr/local/lib/python3.8/dist-packages/autosklearn/smbo.py", line 498, in run_smbo
smac.optimize()
> File "/usr/local/lib/python3.8/dist-packages/smac/facade/smac_ac_facade.py", line 720, in optimize
incumbent = self.solver.run()
> File "/usr/local/lib/python3.8/dist-packages/smac/optimizer/smbo.py", line 287, in run
self.tae_runner.submit_run(run_info=run_info)
> File "/usr/local/lib/python3.8/dist-packages/smac/tae/dask_runner.py", line 166, in submit_run
self.client.submit(
> File "/usr/local/lib/python3.8/dist-packages/autosklearn/util/single_thread_client.py", line 59, in submit
return DummyFuture(func(*args, **kwargs))
> **TypeError: run_wrapper() got an unexpected keyword argument 'pure'**
It seems like it's an error with Dask. Here are the installed packages on Ubuntu 18.04:
pandas==1.3.0
scikit-learn==0.24
dask==2021.12.0
auto-sklearn==0.14.5 #AutoML
tensorflow==2.8.0
I've tried all versions of dask from 2021.12.0 to 2022.02.0 (current) and nothing seems to work. Downgrading to auto-sklearn 0.14.4 and lower didn't solve the problem.
--- END ISSUE ---
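The root cause is visible in the last frame of the traceback: the mock client's `submit` forwards every keyword argument straight to the target function, so Dask-level options such as `pure` leak through. A minimal sketch of the failure mode and of the fix (catching Dask's own keywords so they are not forwarded):

```python
def submit(func, *args, **kwargs):
    return func(*args, **kwargs)          # `pure=...` reaches func -> TypeError

def submit_fixed(func, *args, priority=0, pure=None, **kwargs):
    return func(*args, **kwargs)          # Dask-level kwargs are swallowed

submit_fixed(lambda x: x, 1, pure=False)  # works: returns 1
```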
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `autosklearn/util/single_thread_client.py`
Content:
```
1 import typing
2 from pathlib import Path
3
4 import dask.distributed
5
6
7 class DummyFuture(dask.distributed.Future):
8 """
9 A class that mimics a distributed Future, the outcome of
10 performing submit on a distributed client.
11 """
12 def __init__(self, result: typing.Any) -> None:
13 self._result = result # type: typing.Any
14
15 def result(self, timeout: typing.Optional[int] = None) -> typing.Any:
16 return self._result
17
18 def cancel(self) -> None:
19 pass
20
21 def done(self) -> bool:
22 return True
23
24 def __repr__(self) -> str:
25 return "DummyFuture: {}".format(self._result)
26
27 def __del__(self) -> None:
28 pass
29
30
31 class SingleThreadedClient(dask.distributed.Client):
32 """
33 A class to Mock the Distributed Client class, in case
34 Auto-Sklearn is meant to run in the current Thread.
35 """
36 def __init__(self) -> None:
37
38 # Raise a not implemented error if using a method from Client
39 implemented_methods = ['submit', 'close', 'shutdown', 'write_scheduler_file',
40 '_get_scheduler_info', 'nthreads']
41 method_list = [func for func in dir(dask.distributed.Client) if callable(
42 getattr(dask.distributed.Client, func)) and not func.startswith('__')]
43 for method in method_list:
44 if method in implemented_methods:
45 continue
46 setattr(self, method, self._unsupported_method)
47 pass
48
49 def _unsupported_method(self) -> None:
50 raise NotImplementedError()
51
52 def submit(
53 self,
54 func: typing.Callable,
55 *args: typing.List,
56 priority: int = 0,
57 **kwargs: typing.Dict,
58 ) -> typing.Any:
59 return DummyFuture(func(*args, **kwargs))
60
61 def close(self) -> None:
62 pass
63
64 def shutdown(self) -> None:
65 pass
66
67 def write_scheduler_file(self, scheduler_file: str) -> None:
68 Path(scheduler_file).touch()
69 return
70
71 def _get_scheduler_info(self) -> typing.Dict:
72 return {
73 'workers': ['127.0.0.1'],
74 'type': 'Scheduler',
75 }
76
77 def nthreads(self) -> typing.Dict:
78 return {
79 '127.0.0.1': 1,
80 }
81
82 def __repr__(self) -> str:
83 return 'SingleThreadedClient()'
84
85 def __del__(self) -> None:
86 pass
87
```
Path: `autosklearn/__version__.py`
Content:
```
1 """Version information."""
2
3 # The following line *must* be the last in the module, exactly as formatted:
4 __version__ = "0.14.4"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/autosklearn/__version__.py b/autosklearn/__version__.py
--- a/autosklearn/__version__.py
+++ b/autosklearn/__version__.py
@@ -1,4 +1,4 @@
"""Version information."""
# The following line *must* be the last in the module, exactly as formatted:
-__version__ = "0.14.4"
+__version__ = "0.14.6"
diff --git a/autosklearn/util/single_thread_client.py b/autosklearn/util/single_thread_client.py
--- a/autosklearn/util/single_thread_client.py
+++ b/autosklearn/util/single_thread_client.py
@@ -1,5 +1,6 @@
import typing
from pathlib import Path
+from typing import Any
import dask.distributed
@@ -9,6 +10,7 @@
A class that mimics a distributed Future, the outcome of
performing submit on a distributed client.
"""
+
def __init__(self, result: typing.Any) -> None:
self._result = result # type: typing.Any
@@ -33,13 +35,24 @@
A class to Mock the Distributed Client class, in case
Auto-Sklearn is meant to run in the current Thread.
"""
+
def __init__(self) -> None:
# Raise a not implemented error if using a method from Client
- implemented_methods = ['submit', 'close', 'shutdown', 'write_scheduler_file',
- '_get_scheduler_info', 'nthreads']
- method_list = [func for func in dir(dask.distributed.Client) if callable(
- getattr(dask.distributed.Client, func)) and not func.startswith('__')]
+ implemented_methods = [
+ "submit",
+ "close",
+ "shutdown",
+ "write_scheduler_file",
+ "_get_scheduler_info",
+ "nthreads",
+ ]
+ method_list = [
+ func
+ for func in dir(dask.distributed.Client)
+ if callable(getattr(dask.distributed.Client, func))
+ and not func.startswith("__")
+ ]
for method in method_list:
if method in implemented_methods:
continue
@@ -54,8 +67,24 @@
func: typing.Callable,
*args: typing.List,
priority: int = 0,
- **kwargs: typing.Dict,
+ key: Any = None,
+ workers: Any = None,
+ resources: Any = None,
+ retries: Any = None,
+ fifo_timeout: Any = "100 ms",
+ allow_other_workers: Any = False,
+ actor: Any = False,
+ actors: Any = False,
+ pure: Any = None,
+ **kwargs: Any,
) -> typing.Any:
+ """
+ Note
+ ----
+ The keyword arguments caught in `dask.distributed.Client` need to
+ be specified here so they don't get passed in as ``**kwargs`` to the
+ ``func``.
+ """
return DummyFuture(func(*args, **kwargs))
def close(self) -> None:
@@ -70,17 +99,17 @@
def _get_scheduler_info(self) -> typing.Dict:
return {
- 'workers': ['127.0.0.1'],
- 'type': 'Scheduler',
+ "workers": ["127.0.0.1"],
+ "type": "Scheduler",
}
def nthreads(self) -> typing.Dict:
return {
- '127.0.0.1': 1,
+ "127.0.0.1": 1,
}
def __repr__(self) -> str:
- return 'SingleThreadedClient()'
+ return "SingleThreadedClient()"
def __del__(self) -> None:
pass
| {"golden_diff": "diff --git a/autosklearn/__version__.py b/autosklearn/__version__.py\n--- a/autosklearn/__version__.py\n+++ b/autosklearn/__version__.py\n@@ -1,4 +1,4 @@\n \"\"\"Version information.\"\"\"\n \n # The following line *must* be the last in the module, exactly as formatted:\n-__version__ = \"0.14.4\"\n+__version__ = \"0.14.6\"\ndiff --git a/autosklearn/util/single_thread_client.py b/autosklearn/util/single_thread_client.py\n--- a/autosklearn/util/single_thread_client.py\n+++ b/autosklearn/util/single_thread_client.py\n@@ -1,5 +1,6 @@\n import typing\n from pathlib import Path\n+from typing import Any\n \n import dask.distributed\n \n@@ -9,6 +10,7 @@\n A class that mimics a distributed Future, the outcome of\n performing submit on a distributed client.\n \"\"\"\n+\n def __init__(self, result: typing.Any) -> None:\n self._result = result # type: typing.Any\n \n@@ -33,13 +35,24 @@\n A class to Mock the Distributed Client class, in case\n Auto-Sklearn is meant to run in the current Thread.\n \"\"\"\n+\n def __init__(self) -> None:\n \n # Raise a not implemented error if using a method from Client\n- implemented_methods = ['submit', 'close', 'shutdown', 'write_scheduler_file',\n- '_get_scheduler_info', 'nthreads']\n- method_list = [func for func in dir(dask.distributed.Client) if callable(\n- getattr(dask.distributed.Client, func)) and not func.startswith('__')]\n+ implemented_methods = [\n+ \"submit\",\n+ \"close\",\n+ \"shutdown\",\n+ \"write_scheduler_file\",\n+ \"_get_scheduler_info\",\n+ \"nthreads\",\n+ ]\n+ method_list = [\n+ func\n+ for func in dir(dask.distributed.Client)\n+ if callable(getattr(dask.distributed.Client, func))\n+ and not func.startswith(\"__\")\n+ ]\n for method in method_list:\n if method in implemented_methods:\n continue\n@@ -54,8 +67,24 @@\n func: typing.Callable,\n *args: typing.List,\n priority: int = 0,\n- **kwargs: typing.Dict,\n+ key: Any = None,\n+ workers: Any = None,\n+ resources: Any = None,\n+ retries: Any = None,\n+ fifo_timeout: Any = \"100 ms\",\n+ allow_other_workers: Any = False,\n+ actor: Any = False,\n+ actors: Any = False,\n+ pure: Any = None,\n+ **kwargs: Any,\n ) -> typing.Any:\n+ \"\"\"\n+ Note\n+ ----\n+ The keyword arguments caught in `dask.distributed.Client` need to\n+ be specified here so they don't get passed in as ``**kwargs`` to the\n+ ``func``.\n+ \"\"\"\n return DummyFuture(func(*args, **kwargs))\n \n def close(self) -> None:\n@@ -70,17 +99,17 @@\n \n def _get_scheduler_info(self) -> typing.Dict:\n return {\n- 'workers': ['127.0.0.1'],\n- 'type': 'Scheduler',\n+ \"workers\": [\"127.0.0.1\"],\n+ \"type\": \"Scheduler\",\n }\n \n def nthreads(self) -> typing.Dict:\n return {\n- '127.0.0.1': 1,\n+ \"127.0.0.1\": 1,\n }\n \n def __repr__(self) -> str:\n- return 'SingleThreadedClient()'\n+ return \"SingleThreadedClient()\"\n \n def __del__(self) -> None:\n pass\n", "issue": "run_wrapper() got an unexpected keyword argument 'pure'\nWhile running a fitting on a classifier (*output[0]* with data *output[1]* to *output[4]*), I get the following error; I think it's packages issue;\r\n\r\n > output[0].fit(output[1], output[2], output[3], output[4])\r\n > File \"/usr/local/lib/python3.8/dist-packages/autosklearn/estimators.py\", line 1045, in fit\r\n super().fit(\r\n > File \"/usr/local/lib/python3.8/dist-packages/autosklearn/estimators.py\", line 375, in fit\r\n self.automl_.fit(load_models=self.load_models, **kwargs)\r\n > File \"/usr/local/lib/python3.8/dist-packages/autosklearn/automl.py\", line 2056, in fit\r\n return 
super().fit(\r\n > File \"/usr/local/lib/python3.8/dist-packages/autosklearn/automl.py\", line 931, in fit\r\n _proc_smac.run_smbo()\r\n > File \"/usr/local/lib/python3.8/dist-packages/autosklearn/smbo.py\", line 498, in run_smbo\r\n smac.optimize()\r\n > File \"/usr/local/lib/python3.8/dist-packages/smac/facade/smac_ac_facade.py\", line 720, in optimize\r\n incumbent = self.solver.run()\r\n > File \"/usr/local/lib/python3.8/dist-packages/smac/optimizer/smbo.py\", line 287, in run\r\n self.tae_runner.submit_run(run_info=run_info)\r\n > File \"/usr/local/lib/python3.8/dist-packages/smac/tae/dask_runner.py\", line 166, in submit_run\r\n self.client.submit(\r\n > File \"/usr/local/lib/python3.8/dist-packages/autosklearn/util/single_thread_client.py\", line 59, in submit\r\n return DummyFuture(func(*args, **kwargs))\r\n > **TypeError: run_wrapper() got an unexpected keyword argument 'pure'**\r\n\r\nIt seems like it's an error with Dask. Here are the installed packages on ubuntu 18.04\r\n\r\npandas==1.3.0\r\nscikit-learn==0.24\r\ndask==2021.12.0\r\nauto-sklearn==0.14.5 #AutoML\r\ntensorflow==2.8.0\r\n\r\nI've tried all versions of dask from 2021.12.0 to 2022.02.0 (Current) and nothing seems to work. Downgrading to auto-sklearn 0.14.4 and lower didn't solve the problem.\n", "before_files": [{"content": "import typing\nfrom pathlib import Path\n\nimport dask.distributed\n\n\nclass DummyFuture(dask.distributed.Future):\n \"\"\"\n A class that mimics a distributed Future, the outcome of\n performing submit on a distributed client.\n \"\"\"\n def __init__(self, result: typing.Any) -> None:\n self._result = result # type: typing.Any\n\n def result(self, timeout: typing.Optional[int] = None) -> typing.Any:\n return self._result\n\n def cancel(self) -> None:\n pass\n\n def done(self) -> bool:\n return True\n\n def __repr__(self) -> str:\n return \"DummyFuture: {}\".format(self._result)\n\n def __del__(self) -> None:\n pass\n\n\nclass SingleThreadedClient(dask.distributed.Client):\n \"\"\"\n A class to Mock the Distributed Client class, in case\n Auto-Sklearn is meant to run in the current Thread.\n \"\"\"\n def __init__(self) -> None:\n\n # Raise a not implemented error if using a method from Client\n implemented_methods = ['submit', 'close', 'shutdown', 'write_scheduler_file',\n '_get_scheduler_info', 'nthreads']\n method_list = [func for func in dir(dask.distributed.Client) if callable(\n getattr(dask.distributed.Client, func)) and not func.startswith('__')]\n for method in method_list:\n if method in implemented_methods:\n continue\n setattr(self, method, self._unsupported_method)\n pass\n\n def _unsupported_method(self) -> None:\n raise NotImplementedError()\n\n def submit(\n self,\n func: typing.Callable,\n *args: typing.List,\n priority: int = 0,\n **kwargs: typing.Dict,\n ) -> typing.Any:\n return DummyFuture(func(*args, **kwargs))\n\n def close(self) -> None:\n pass\n\n def shutdown(self) -> None:\n pass\n\n def write_scheduler_file(self, scheduler_file: str) -> None:\n Path(scheduler_file).touch()\n return\n\n def _get_scheduler_info(self) -> typing.Dict:\n return {\n 'workers': ['127.0.0.1'],\n 'type': 'Scheduler',\n }\n\n def nthreads(self) -> typing.Dict:\n return {\n '127.0.0.1': 1,\n }\n\n def __repr__(self) -> str:\n return 'SingleThreadedClient()'\n\n def __del__(self) -> None:\n pass\n", "path": "autosklearn/util/single_thread_client.py"}, {"content": "\"\"\"Version information.\"\"\"\n\n# The following line *must* be the last in the module, exactly as formatted:\n__version__ = 
\"0.14.4\"\n", "path": "autosklearn/__version__.py"}], "after_files": [{"content": "import typing\nfrom pathlib import Path\nfrom typing import Any\n\nimport dask.distributed\n\n\nclass DummyFuture(dask.distributed.Future):\n \"\"\"\n A class that mimics a distributed Future, the outcome of\n performing submit on a distributed client.\n \"\"\"\n\n def __init__(self, result: typing.Any) -> None:\n self._result = result # type: typing.Any\n\n def result(self, timeout: typing.Optional[int] = None) -> typing.Any:\n return self._result\n\n def cancel(self) -> None:\n pass\n\n def done(self) -> bool:\n return True\n\n def __repr__(self) -> str:\n return \"DummyFuture: {}\".format(self._result)\n\n def __del__(self) -> None:\n pass\n\n\nclass SingleThreadedClient(dask.distributed.Client):\n \"\"\"\n A class to Mock the Distributed Client class, in case\n Auto-Sklearn is meant to run in the current Thread.\n \"\"\"\n\n def __init__(self) -> None:\n\n # Raise a not implemented error if using a method from Client\n implemented_methods = [\n \"submit\",\n \"close\",\n \"shutdown\",\n \"write_scheduler_file\",\n \"_get_scheduler_info\",\n \"nthreads\",\n ]\n method_list = [\n func\n for func in dir(dask.distributed.Client)\n if callable(getattr(dask.distributed.Client, func))\n and not func.startswith(\"__\")\n ]\n for method in method_list:\n if method in implemented_methods:\n continue\n setattr(self, method, self._unsupported_method)\n pass\n\n def _unsupported_method(self) -> None:\n raise NotImplementedError()\n\n def submit(\n self,\n func: typing.Callable,\n *args: typing.List,\n priority: int = 0,\n key: Any = None,\n workers: Any = None,\n resources: Any = None,\n retries: Any = None,\n fifo_timeout: Any = \"100 ms\",\n allow_other_workers: Any = False,\n actor: Any = False,\n actors: Any = False,\n pure: Any = None,\n **kwargs: Any,\n ) -> typing.Any:\n \"\"\"\n Note\n ----\n The keyword arguments caught in `dask.distributed.Client` need to\n be specified here so they don't get passed in as ``**kwargs`` to the\n ``func``.\n \"\"\"\n return DummyFuture(func(*args, **kwargs))\n\n def close(self) -> None:\n pass\n\n def shutdown(self) -> None:\n pass\n\n def write_scheduler_file(self, scheduler_file: str) -> None:\n Path(scheduler_file).touch()\n return\n\n def _get_scheduler_info(self) -> typing.Dict:\n return {\n \"workers\": [\"127.0.0.1\"],\n \"type\": \"Scheduler\",\n }\n\n def nthreads(self) -> typing.Dict:\n return {\n \"127.0.0.1\": 1,\n }\n\n def __repr__(self) -> str:\n return \"SingleThreadedClient()\"\n\n def __del__(self) -> None:\n pass\n", "path": "autosklearn/util/single_thread_client.py"}, {"content": "\"\"\"Version information.\"\"\"\n\n# The following line *must* be the last in the module, exactly as formatted:\n__version__ = \"0.14.6\"\n", "path": "autosklearn/__version__.py"}]} | 1,634 | 873 |
gh_patches_debug_16694 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2069 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Entering Palette options crashes mitmproxy
##### Steps to reproduce the problem:
1. Press 'O' for options
2. Select 'Palette'
3. mitmproxy will crash
##### Any other comments? What have you tried so far?
```
Traceback (most recent call last):
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/master.py", line 281, in run
self.loop.run()
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py", line 278, in run
self._run()
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py", line 376, in _run
self.event_loop.run()
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py", line 682, in run
self._loop()
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py", line 719, in _loop
self._watch_files[fd]()
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/raw_display.py", line 393, in <lambda>
event_loop, callback, self.get_available_raw_input())
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/raw_display.py", line 493, in parse_input
callback(processed, processed_codes)
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py", line 403, in _update
self.process_input(keys)
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py", line 503, in process_input
k = self._topmost_widget.keypress(self.screen_size, k)
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/window.py", line 84, in keypress
k = super().keypress(size, k)
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/container.py", line 1128, in keypress
return self.body.keypress( (maxcol, remaining), key )
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/container.py", line 1128, in keypress
return self.body.keypress( (maxcol, remaining), key )
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/select.py", line 114, in keypress
self.get_focus()[0].option.activate()
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/palettepicker.py", line 46, in <lambda>
lambda: setattr(self.master.options, "palette", name)
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/optmanager.py", line 114, in __setattr__
self.update(**{attr: value})
File "/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/optmanager.py", line 141, in update
raise KeyError("No such option: %s" % k)
KeyError: 'No such option: palette'
```
The option names in mitmproxy/options.py were prefixed with 'console_', but lines 46 and 62 of mitmproxy/tools/console/palettepicker.py were not updated to include this prefix.
This appears to have been broken by commit [35aff3b](https://github.com/mitmproxy/mitmproxy/commit/35aff3b7838f8df718cc574d2643f1355849fa8e)
##### System information
Mitmproxy version: 2.0.0 (release version)
Python version: 3.6.0
Platform: Darwin-16.4.0-x86_64-i386-64bit
SSL version: OpenSSL 1.1.0e 16 Feb 2017
Mac version: 10.12.3 ('', '', '') x86_64
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/tools/console/palettepicker.py`
Content:
```
1 import urwid
2
3 from mitmproxy.tools.console import common
4 from mitmproxy.tools.console import palettes
5 from mitmproxy.tools.console import select
6
7 footer = [
8 ('heading_key', "enter/space"), ":select",
9 ]
10
11
12 def _mkhelp():
13 text = []
14 keys = [
15 ("enter/space", "select"),
16 ]
17 text.extend(common.format_keyvals(keys, key="key", val="text", indent=4))
18 return text
19
20
21 help_context = _mkhelp()
22
23
24 class PalettePicker(urwid.WidgetWrap):
25
26 def __init__(self, master):
27 self.master = master
28 low, high = [], []
29 for k, v in palettes.palettes.items():
30 if v.high:
31 high.append(k)
32 else:
33 low.append(k)
34 high.sort()
35 low.sort()
36
37 options = [
38 select.Heading("High Colour")
39 ]
40
41 def mkopt(name):
42 return select.Option(
43 i,
44 None,
45 lambda: self.master.options.console_palette == name,
46 lambda: setattr(self.master.options, "palette", name)
47 )
48
49 for i in high:
50 options.append(mkopt(i))
51 options.append(select.Heading("Low Colour"))
52 for i in low:
53 options.append(mkopt(i))
54
55 options.extend(
56 [
57 select.Heading("Options"),
58 select.Option(
59 "Transparent",
60 "T",
61 lambda: master.options.console_palette_transparent,
62 master.options.toggler("palette_transparent")
63 )
64 ]
65 )
66
67 self.lb = select.Select(options)
68 title = urwid.Text("Palettes")
69 title = urwid.Padding(title, align="left", width=("relative", 100))
70 title = urwid.AttrWrap(title, "heading")
71 self._w = urwid.Frame(
72 self.lb,
73 header = title
74 )
75 master.options.changed.connect(self.sig_options_changed)
76
77 def sig_options_changed(self, options, updated):
78 self.lb.walker._modified()
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mitmproxy/tools/console/palettepicker.py b/mitmproxy/tools/console/palettepicker.py
--- a/mitmproxy/tools/console/palettepicker.py
+++ b/mitmproxy/tools/console/palettepicker.py
@@ -43,7 +43,7 @@
i,
None,
lambda: self.master.options.console_palette == name,
- lambda: setattr(self.master.options, "palette", name)
+ lambda: setattr(self.master.options, "console_palette", name)
)
for i in high:
@@ -59,7 +59,7 @@
"Transparent",
"T",
lambda: master.options.console_palette_transparent,
- master.options.toggler("palette_transparent")
+ master.options.toggler("console_palette_transparent")
)
]
)
| {"golden_diff": "diff --git a/mitmproxy/tools/console/palettepicker.py b/mitmproxy/tools/console/palettepicker.py\n--- a/mitmproxy/tools/console/palettepicker.py\n+++ b/mitmproxy/tools/console/palettepicker.py\n@@ -43,7 +43,7 @@\n i,\n None,\n lambda: self.master.options.console_palette == name,\n- lambda: setattr(self.master.options, \"palette\", name)\n+ lambda: setattr(self.master.options, \"console_palette\", name)\n )\n \n for i in high:\n@@ -59,7 +59,7 @@\n \"Transparent\",\n \"T\",\n lambda: master.options.console_palette_transparent,\n- master.options.toggler(\"palette_transparent\")\n+ master.options.toggler(\"console_palette_transparent\")\n )\n ]\n )\n", "issue": "Entering Palette options crashes mitmproxy\n##### Steps to reproduce the problem:\r\n\r\n1. Press 'O' for options\r\n2. Select 'Palette'\r\n3. mitmproxy will crash\r\n\r\n\r\n##### Any other comments? What have you tried so far?\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/master.py\", line 281, in run\r\n self.loop.run()\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py\", line 278, in run\r\n self._run()\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py\", line 376, in _run\r\n self.event_loop.run()\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py\", line 682, in run\r\n self._loop()\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py\", line 719, in _loop\r\n self._watch_files[fd]()\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/raw_display.py\", line 393, in <lambda>\r\n event_loop, callback, self.get_available_raw_input())\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/raw_display.py\", line 493, in parse_input\r\n callback(processed, processed_codes)\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py\", line 403, in _update\r\n self.process_input(keys)\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/main_loop.py\", line 503, in process_input\r\n k = self._topmost_widget.keypress(self.screen_size, k)\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/window.py\", line 84, in keypress\r\n k = super().keypress(size, k)\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/container.py\", line 1128, in keypress\r\n return self.body.keypress( (maxcol, remaining), key )\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/urwid/container.py\", line 1128, in keypress\r\n return self.body.keypress( (maxcol, remaining), key )\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/select.py\", line 114, in keypress\r\n self.get_focus()[0].option.activate()\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/tools/console/palettepicker.py\", line 46, in <lambda>\r\n lambda: setattr(self.master.options, \"palette\", name)\r\n File \"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/optmanager.py\", line 114, in __setattr__\r\n self.update(**{attr: value})\r\n File 
\"/usr/local/Cellar/mitmproxy/2.0.0/libexec/lib/python3.6/site-packages/mitmproxy/optmanager.py\", line 141, in update\r\n raise KeyError(\"No such option: %s\" % k)\r\nKeyError: 'No such option: palette'\r\n\r\n```\r\nThe option names in mitmproxy/options.py were prefixed with 'console_', but line 46 and line 62 of mitmproxy/tools/console/palettepicker.py were not updated to include this prefix. \r\n\r\nThis appears to have been broken by commit [35aff3b](https://github.com/mitmproxy/mitmproxy/commit/35aff3b7838f8df718cc574d2643f1355849fa8e)\r\n\r\n##### System information\r\n\r\nMitmproxy version: 2.0.0 (release version) \r\nPython version: 3.6.0\r\nPlatform: Darwin-16.4.0-x86_64-i386-64bit\r\nSSL version: OpenSSL 1.1.0e 16 Feb 2017\r\nMac version: 10.12.3 ('', '', '') x86_64\r\n\n", "before_files": [{"content": "import urwid\n\nfrom mitmproxy.tools.console import common\nfrom mitmproxy.tools.console import palettes\nfrom mitmproxy.tools.console import select\n\nfooter = [\n ('heading_key', \"enter/space\"), \":select\",\n]\n\n\ndef _mkhelp():\n text = []\n keys = [\n (\"enter/space\", \"select\"),\n ]\n text.extend(common.format_keyvals(keys, key=\"key\", val=\"text\", indent=4))\n return text\n\n\nhelp_context = _mkhelp()\n\n\nclass PalettePicker(urwid.WidgetWrap):\n\n def __init__(self, master):\n self.master = master\n low, high = [], []\n for k, v in palettes.palettes.items():\n if v.high:\n high.append(k)\n else:\n low.append(k)\n high.sort()\n low.sort()\n\n options = [\n select.Heading(\"High Colour\")\n ]\n\n def mkopt(name):\n return select.Option(\n i,\n None,\n lambda: self.master.options.console_palette == name,\n lambda: setattr(self.master.options, \"palette\", name)\n )\n\n for i in high:\n options.append(mkopt(i))\n options.append(select.Heading(\"Low Colour\"))\n for i in low:\n options.append(mkopt(i))\n\n options.extend(\n [\n select.Heading(\"Options\"),\n select.Option(\n \"Transparent\",\n \"T\",\n lambda: master.options.console_palette_transparent,\n master.options.toggler(\"palette_transparent\")\n )\n ]\n )\n\n self.lb = select.Select(options)\n title = urwid.Text(\"Palettes\")\n title = urwid.Padding(title, align=\"left\", width=(\"relative\", 100))\n title = urwid.AttrWrap(title, \"heading\")\n self._w = urwid.Frame(\n self.lb,\n header = title\n )\n master.options.changed.connect(self.sig_options_changed)\n\n def sig_options_changed(self, options, updated):\n self.lb.walker._modified()\n", "path": "mitmproxy/tools/console/palettepicker.py"}], "after_files": [{"content": "import urwid\n\nfrom mitmproxy.tools.console import common\nfrom mitmproxy.tools.console import palettes\nfrom mitmproxy.tools.console import select\n\nfooter = [\n ('heading_key', \"enter/space\"), \":select\",\n]\n\n\ndef _mkhelp():\n text = []\n keys = [\n (\"enter/space\", \"select\"),\n ]\n text.extend(common.format_keyvals(keys, key=\"key\", val=\"text\", indent=4))\n return text\n\n\nhelp_context = _mkhelp()\n\n\nclass PalettePicker(urwid.WidgetWrap):\n\n def __init__(self, master):\n self.master = master\n low, high = [], []\n for k, v in palettes.palettes.items():\n if v.high:\n high.append(k)\n else:\n low.append(k)\n high.sort()\n low.sort()\n\n options = [\n select.Heading(\"High Colour\")\n ]\n\n def mkopt(name):\n return select.Option(\n i,\n None,\n lambda: self.master.options.console_palette == name,\n lambda: setattr(self.master.options, \"console_palette\", name)\n )\n\n for i in high:\n options.append(mkopt(i))\n options.append(select.Heading(\"Low Colour\"))\n for i in 
low:\n options.append(mkopt(i))\n\n options.extend(\n [\n select.Heading(\"Options\"),\n select.Option(\n \"Transparent\",\n \"T\",\n lambda: master.options.console_palette_transparent,\n master.options.toggler(\"console_palette_transparent\")\n )\n ]\n )\n\n self.lb = select.Select(options)\n title = urwid.Text(\"Palettes\")\n title = urwid.Padding(title, align=\"left\", width=(\"relative\", 100))\n title = urwid.AttrWrap(title, \"heading\")\n self._w = urwid.Frame(\n self.lb,\n header = title\n )\n master.options.changed.connect(self.sig_options_changed)\n\n def sig_options_changed(self, options, updated):\n self.lb.walker._modified()\n", "path": "mitmproxy/tools/console/palettepicker.py"}]} | 1,974 | 174 |
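The mitmproxy fix above works because the options store validates attribute names on assignment and raises `KeyError` for anything it does not know. A simplified stand-in for `OptManager` (the real class is more elaborate) shows why the unprefixed name crashed:

```python
# Simplified stand-in for mitmproxy's OptManager; names are illustrative.
class Options:
    def __init__(self, **defaults):
        object.__setattr__(self, "_opts", dict(defaults))

    def __setattr__(self, attr, value):
        if attr not in self._opts:
            raise KeyError("No such option: %s" % attr)  # as in the traceback
        self._opts[attr] = value


opts = Options(console_palette="dark", console_palette_transparent=False)
opts.console_palette = "light"  # accepted: the key exists
print(opts._opts["console_palette"])  # light
# opts.palette = "light"        # KeyError: 'No such option: palette'
```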
gh_patches_debug_10877 | rasdani/github-patches | git_diff | ivy-llc__ivy-20857 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fftshift
This subtask is to add the `fftshift` function to the JAX frontend.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/jax/numpy/fft.py`
Content:
```
1 # local
2 import ivy
3 from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back
4
5
6 @to_ivy_arrays_and_back
7 def fft(a, n=None, axis=-1, norm=None):
8 if norm is None:
9 norm = "backward"
10 return ivy.fft(a, axis, norm=norm, n=n)
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/jax/numpy/fft.py b/ivy/functional/frontends/jax/numpy/fft.py
--- a/ivy/functional/frontends/jax/numpy/fft.py
+++ b/ivy/functional/frontends/jax/numpy/fft.py
@@ -1,6 +1,25 @@
# local
import ivy
from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back
+from ivy.func_wrapper import with_unsupported_dtypes
+
+
+@to_ivy_arrays_and_back
+@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
+def fftshift(x, axes=None, name=None):
+ shape = x.shape
+
+ if axes is None:
+ axes = tuple(range(x.ndim))
+ shifts = [(dim // 2) for dim in shape]
+ elif isinstance(axes, int):
+ shifts = shape[axes] // 2
+ else:
+ shifts = [shape[ax] // 2 for ax in axes]
+
+ roll = ivy.roll(x, shifts, axis=axes)
+
+ return roll
@to_ivy_arrays_and_back
| {"golden_diff": "diff --git a/ivy/functional/frontends/jax/numpy/fft.py b/ivy/functional/frontends/jax/numpy/fft.py\n--- a/ivy/functional/frontends/jax/numpy/fft.py\n+++ b/ivy/functional/frontends/jax/numpy/fft.py\n@@ -1,6 +1,25 @@\n # local\n import ivy\n from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\n+from ivy.func_wrapper import with_unsupported_dtypes\n+\n+\n+@to_ivy_arrays_and_back\n+@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n+def fftshift(x, axes=None, name=None):\n+ shape = x.shape\n+\n+ if axes is None:\n+ axes = tuple(range(x.ndim))\n+ shifts = [(dim // 2) for dim in shape]\n+ elif isinstance(axes, int):\n+ shifts = shape[axes] // 2\n+ else:\n+ shifts = [shape[ax] // 2 for ax in axes]\n+\n+ roll = ivy.roll(x, shifts, axis=axes)\n+\n+ return roll\n \n \n @to_ivy_arrays_and_back\n", "issue": "fftshift\nThis subtask is to add the `fftshift` function to the Jax frontend\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\n\n\n@to_ivy_arrays_and_back\ndef fft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n return ivy.fft(a, axis, norm=norm, n=n)\n", "path": "ivy/functional/frontends/jax/numpy/fft.py"}], "after_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.func_wrapper import with_unsupported_dtypes\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\ndef fftshift(x, axes=None, name=None):\n shape = x.shape\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shifts = [(dim // 2) for dim in shape]\n elif isinstance(axes, int):\n shifts = shape[axes] // 2\n else:\n shifts = [shape[ax] // 2 for ax in axes]\n\n roll = ivy.roll(x, shifts, axis=axes)\n\n return roll\n\n\n@to_ivy_arrays_and_back\ndef fft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n return ivy.fft(a, axis, norm=norm, n=n)\n", "path": "ivy/functional/frontends/jax/numpy/fft.py"}]} | 377 | 280 |
gh_patches_debug_20482 | rasdani/github-patches | git_diff | crytic__slither-546 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
function-id not zero-padding function signature
```
ERC20:
+---------------------------------------+------------+
| Name | ID |
+---------------------------------------+------------+
| totalSupply() | 0x18160ddd |
| balanceOf(address) | 0x70a08231 |
| allowance(address,address) | 0xdd62ed3e |
| transfer(address,uint256) | 0xa9059cbb |
| transferFrom(address,address,uint256) | 0x23b872dd |
| approve(address,uint256) | 0x95ea7b3 |
+---------------------------------------+------------+
```
It's a minor annoyance, but for approve it outputs `0x95ea7b3` instead of `0x095ea7b3`. It is the same numerically, of course, but the function signature is more of an opaque 4-byte identifier than something numerically quantified.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/printers/summary/function_ids.py`
Content:
```
1 """
2 Module printing summary of the contract
3 """
4 from slither.printers.abstract_printer import AbstractPrinter
5 from slither.utils.function import get_function_id
6 from slither.utils.myprettytable import MyPrettyTable
7
8
9 class FunctionIds(AbstractPrinter):
10
11 ARGUMENT = 'function-id'
12 HELP = 'Print the keccack256 signature of the functions'
13
14 WIKI = 'https://github.com/trailofbits/slither/wiki/Printer-documentation#function-id'
15
16 def output(self, _filename):
17 """
18 _filename is not used
19 Args:
20 _filename(string)
21 """
22
23 txt = ''
24 all_tables = []
25 for contract in self.slither.contracts_derived:
26 txt += '\n{}:\n'.format(contract.name)
27 table = MyPrettyTable(['Name', 'ID'])
28 for function in contract.functions:
29 if function.visibility in ['public', 'external']:
30 table.add_row([function.solidity_signature, hex(get_function_id(function.solidity_signature))])
31 for variable in contract.state_variables:
32 if variable.visibility in ['public']:
33 sig = variable.function_name
34 table.add_row([sig, hex(get_function_id(sig))])
35 txt += str(table) + '\n'
36 all_tables.append((contract.name, table))
37
38 self.info(txt)
39
40 res = self.generate_output(txt)
41 for name, table in all_tables:
42 res.add_pretty_table(table, name)
43
44 return res
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/slither/printers/summary/function_ids.py b/slither/printers/summary/function_ids.py
--- a/slither/printers/summary/function_ids.py
+++ b/slither/printers/summary/function_ids.py
@@ -27,11 +27,13 @@
table = MyPrettyTable(['Name', 'ID'])
for function in contract.functions:
if function.visibility in ['public', 'external']:
- table.add_row([function.solidity_signature, hex(get_function_id(function.solidity_signature))])
+ function_id = get_function_id(function.solidity_signature)
+ table.add_row([function.solidity_signature, f"{function_id:#0{10}x}"])
for variable in contract.state_variables:
if variable.visibility in ['public']:
sig = variable.function_name
- table.add_row([sig, hex(get_function_id(sig))])
+ function_id = get_function_id(sig)
+ table.add_row([sig, f"{function_id:#0{10}x}"])
txt += str(table) + '\n'
all_tables.append((contract.name, table))
| {"golden_diff": "diff --git a/slither/printers/summary/function_ids.py b/slither/printers/summary/function_ids.py\n--- a/slither/printers/summary/function_ids.py\n+++ b/slither/printers/summary/function_ids.py\n@@ -27,11 +27,13 @@\n table = MyPrettyTable(['Name', 'ID'])\n for function in contract.functions:\n if function.visibility in ['public', 'external']:\n- table.add_row([function.solidity_signature, hex(get_function_id(function.solidity_signature))])\n+ function_id = get_function_id(function.solidity_signature)\n+ table.add_row([function.solidity_signature, f\"{function_id:#0{10}x}\"])\n for variable in contract.state_variables:\n if variable.visibility in ['public']:\n sig = variable.function_name\n- table.add_row([sig, hex(get_function_id(sig))])\n+ function_id = get_function_id(sig)\n+ table.add_row([sig, f\"{function_id:#0{10}x}\"])\n txt += str(table) + '\\n'\n all_tables.append((contract.name, table))\n", "issue": "function-id not zero-padding function signature \n```\r\nERC20:\r\n+---------------------------------------+------------+\r\n| Name | ID |\r\n+---------------------------------------+------------+\r\n| totalSupply() | 0x18160ddd |\r\n| balanceOf(address) | 0x70a08231 |\r\n| allowance(address,address) | 0xdd62ed3e |\r\n| transfer(address,uint256) | 0xa9059cbb |\r\n| transferFrom(address,address,uint256) | 0x23b872dd |\r\n| approve(address,uint256) | 0x95ea7b3 |\r\n+---------------------------------------+------------+\r\n\r\n```\r\n\r\nIt's a minor annoyance, but for approve it outputs `0x95ea7b3` instead of `0x095ea7b3`. It is the same numerically, of course, but the function signature is more of an opaque 4-byte identifier than something numerically quantified.\r\n\r\n\nfunction-id not zero-padding function signature \n```\r\nERC20:\r\n+---------------------------------------+------------+\r\n| Name | ID |\r\n+---------------------------------------+------------+\r\n| totalSupply() | 0x18160ddd |\r\n| balanceOf(address) | 0x70a08231 |\r\n| allowance(address,address) | 0xdd62ed3e |\r\n| transfer(address,uint256) | 0xa9059cbb |\r\n| transferFrom(address,address,uint256) | 0x23b872dd |\r\n| approve(address,uint256) | 0x95ea7b3 |\r\n+---------------------------------------+------------+\r\n\r\n```\r\n\r\nIt's a minor annoyance, but for approve it outputs `0x95ea7b3` instead of `0x095ea7b3`. 
It is the same numerically, of course, but the function signature is more of an opaque 4-byte identifier than something numerically quantified.\r\n\r\n\n", "before_files": [{"content": "\"\"\"\n Module printing summary of the contract\n\"\"\"\nfrom slither.printers.abstract_printer import AbstractPrinter\nfrom slither.utils.function import get_function_id\nfrom slither.utils.myprettytable import MyPrettyTable\n\n\nclass FunctionIds(AbstractPrinter):\n\n ARGUMENT = 'function-id'\n HELP = 'Print the keccack256 signature of the functions'\n\n WIKI = 'https://github.com/trailofbits/slither/wiki/Printer-documentation#function-id'\n\n def output(self, _filename):\n \"\"\"\n _filename is not used\n Args:\n _filename(string)\n \"\"\"\n\n txt = ''\n all_tables = []\n for contract in self.slither.contracts_derived:\n txt += '\\n{}:\\n'.format(contract.name)\n table = MyPrettyTable(['Name', 'ID'])\n for function in contract.functions:\n if function.visibility in ['public', 'external']:\n table.add_row([function.solidity_signature, hex(get_function_id(function.solidity_signature))])\n for variable in contract.state_variables:\n if variable.visibility in ['public']:\n sig = variable.function_name\n table.add_row([sig, hex(get_function_id(sig))])\n txt += str(table) + '\\n'\n all_tables.append((contract.name, table))\n\n self.info(txt)\n\n res = self.generate_output(txt)\n for name, table in all_tables:\n res.add_pretty_table(table, name)\n\n return res", "path": "slither/printers/summary/function_ids.py"}], "after_files": [{"content": "\"\"\"\n Module printing summary of the contract\n\"\"\"\nfrom slither.printers.abstract_printer import AbstractPrinter\nfrom slither.utils.function import get_function_id\nfrom slither.utils.myprettytable import MyPrettyTable\n\n\nclass FunctionIds(AbstractPrinter):\n\n ARGUMENT = 'function-id'\n HELP = 'Print the keccack256 signature of the functions'\n\n WIKI = 'https://github.com/trailofbits/slither/wiki/Printer-documentation#function-id'\n\n def output(self, _filename):\n \"\"\"\n _filename is not used\n Args:\n _filename(string)\n \"\"\"\n\n txt = ''\n all_tables = []\n for contract in self.slither.contracts_derived:\n txt += '\\n{}:\\n'.format(contract.name)\n table = MyPrettyTable(['Name', 'ID'])\n for function in contract.functions:\n if function.visibility in ['public', 'external']:\n function_id = get_function_id(function.solidity_signature)\n table.add_row([function.solidity_signature, f\"{function_id:#0{10}x}\"])\n for variable in contract.state_variables:\n if variable.visibility in ['public']:\n sig = variable.function_name\n function_id = get_function_id(sig)\n table.add_row([sig, f\"{function_id:#0{10}x}\"])\n txt += str(table) + '\\n'\n all_tables.append((contract.name, table))\n\n self.info(txt)\n\n res = self.generate_output(txt)\n for name, table in all_tables:\n res.add_pretty_table(table, name)\n\n return res", "path": "slither/printers/summary/function_ids.py"}]} | 1,117 | 243 |
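The slither fix above relies on Python's format-spec padding: `#` adds the `0x` prefix and a field width of 10 covers that prefix plus eight hex digits, whereas `hex()` never zero-pads. Demonstrated with the `approve(address,uint256)` selector from the issue:

```python
# Why hex() drops the leading zero and the patched f-string keeps it.
function_id = 0x095EA7B3  # selector for approve(address,uint256)

print(hex(function_id))                   # 0x95ea7b3  -- hex() never zero-pads
print(f"{function_id:#0{10}x}")           # 0x095ea7b3 -- format used in the fix
print("0x" + format(function_id, "08x"))  # 0x095ea7b3 -- equivalent spelling
```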
gh_patches_debug_34752 | rasdani/github-patches | git_diff | litestar-org__litestar-288 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Partial doesn't work with inherited fields
```python
from starlite import Partial, get
from pydantic import BaseModel
class Parent(BaseModel):
foo: int
class Child(Parent):
bar: int
@get("/test")
def example(obj: Partial[Child]) -> None:
print(obj)
```
In the above example, `Partial[Child]` only accepts the field `bar: Optional[int]` and ignores all fields from the superclass. I couldn't find this behaviour documented anywhere, so I assume this isn't intended?
```python
Python 3.10.5 (main, Jun 23 2022, 17:14:57) [Clang 13.1.6 (clang-1316.0.21.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from starlite import Partial
>>> from pydantic import BaseModel
>>> class Parent(BaseModel):
... foo: int
...
>>> class Child(Parent):
... bar: int
...
>>> PartialChild = Partial[Child]
>>> PartialChild.__annotations__
{'bar': typing.Optional[int]}
>>>
```
This behaviour can also be seen above
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlite/types.py`
Content:
```
1 from typing import (
2 TYPE_CHECKING,
3 Any,
4 Awaitable,
5 Callable,
6 Dict,
7 Generic,
8 Optional,
9 Tuple,
10 Type,
11 TypeVar,
12 Union,
13 cast,
14 )
15
16 from openapi_schema_pydantic.v3.v3_1_0.header import Header
17 from pydantic import BaseModel, create_model
18 from pydantic.typing import AnyCallable
19 from starlette.exceptions import HTTPException as StarletteHTTPException
20 from starlette.middleware import Middleware as StarletteMiddleware
21 from starlette.middleware.base import BaseHTTPMiddleware
22 from starlette.requests import HTTPConnection
23 from starlette.responses import Response as StarletteResponse
24 from typing_extensions import Literal, Protocol, runtime_checkable
25
26 from starlite.exceptions import HTTPException
27 from starlite.response import Response
28
29 try:
30 # python 3.9 changed these variable
31 from typing import _UnionGenericAlias as GenericAlias # type: ignore
32 except ImportError: # pragma: no cover
33 from typing import _GenericAlias as GenericAlias # type: ignore
34
35 if TYPE_CHECKING:
36 from starlette.types import ASGIApp, Receive, Scope, Send
37
38 from starlite.connection import Request # noqa: TC004
39 from starlite.controller import Controller # noqa: TC004
40 from starlite.datastructures import State # noqa: TC004
41 from starlite.handlers import BaseRouteHandler # noqa: TC004
42 from starlite.router import Router # noqa: TC004
43 else:
44 Request = Any
45 WebSocket = Any
46 BaseRouteHandler = Any
47 Controller = Any
48 Router = Any
49 State = Any
50
51 T = TypeVar("T", bound=BaseModel)
52 H = TypeVar("H", bound=HTTPConnection)
53
54 ExceptionHandler = Callable[
55 [Request, Union[Exception, HTTPException, StarletteHTTPException]], Union[Response, StarletteResponse]
56 ]
57 LifeCycleHandler = Union[
58 Callable[[], Any],
59 Callable[[State], Any],
60 Callable[[], Awaitable[Any]],
61 Callable[[State], Awaitable[Any]],
62 ]
63 Guard = Union[Callable[[H, BaseRouteHandler], Awaitable[None]], Callable[[H, BaseRouteHandler], None]]
64 Method = Union[Literal["GET"], Literal["POST"], Literal["DELETE"], Literal["PATCH"], Literal["PUT"], Literal["HEAD"]]
65 ReservedKwargs = Union[
66 Literal["request"],
67 Literal["socket"],
68 Literal["headers"],
69 Literal["query"],
70 Literal["cookies"],
71 Literal["state"],
72 Literal["data"],
73 ]
74 ControllerRouterHandler = Union[Type[Controller], BaseRouteHandler, Router, AnyCallable]
75
76 # connection-lifecycle hook handlers
77 BeforeRequestHandler = Union[Callable[[Request], Any], Callable[[Request], Awaitable[Any]]]
78 AfterRequestHandler = Union[
79 Callable[[Response], Response],
80 Callable[[Response], Awaitable[Response]],
81 Callable[[StarletteResponse], StarletteResponse],
82 Callable[[StarletteResponse], Awaitable[StarletteResponse]],
83 ]
84 AfterResponseHandler = Union[Callable[[Request], None], Callable[[Request], Awaitable[None]]]
85
86 AsyncAnyCallable = Callable[..., Awaitable[Any]]
87 CacheKeyBuilder = Callable[[Request], str]
88
89
90 @runtime_checkable
91 class MiddlewareProtocol(Protocol):
92 def __init__(self, app: "ASGIApp"): # pragma: no cover
93 ...
94
95 async def __call__(self, scope: "Scope", receive: "Receive", send: "Send") -> None: # pragma: no cover
96 ...
97
98
99 class Partial(Generic[T]):
100 _models: Dict[Type[T], Any] = {}
101
102 def __class_getitem__(cls, item: Type[T]) -> Type[T]:
103 """
104 Modifies a given T subclass of BaseModel to be all optional
105 """
106 if not cls._models.get(item):
107 field_definitions: Dict[str, Tuple[Any, None]] = {}
108 for field_name, field_type in item.__annotations__.items():
109 # we modify the field annotations to make it optional
110 if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:
111 field_definitions[field_name] = (Optional[field_type], None)
112 else:
113 field_definitions[field_name] = (field_type, None)
114 cls._models[item] = create_model("Partial" + item.__name__, **field_definitions) # type: ignore
115 return cast(Type[T], cls._models.get(item))
116
117
118 class ResponseHeader(Header):
119 value: Any = ...
120
121
122 Middleware = Union[StarletteMiddleware, Type[BaseHTTPMiddleware], Type[MiddlewareProtocol]]
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlite/types.py b/starlite/types.py
--- a/starlite/types.py
+++ b/starlite/types.py
@@ -11,6 +11,7 @@
TypeVar,
Union,
cast,
+ get_type_hints,
)
from openapi_schema_pydantic.v3.v3_1_0.header import Header
@@ -23,7 +24,7 @@
from starlette.responses import Response as StarletteResponse
from typing_extensions import Literal, Protocol, runtime_checkable
-from starlite.exceptions import HTTPException
+from starlite.exceptions import HTTPException, ImproperlyConfiguredException
from starlite.response import Response
try:
@@ -103,15 +104,23 @@
"""
Modifies a given T subclass of BaseModel to be all optional
"""
+ if not issubclass(item, BaseModel):
+ raise ImproperlyConfiguredException(f"Partial[{item}] must be a subclass of BaseModel")
if not cls._models.get(item):
field_definitions: Dict[str, Tuple[Any, None]] = {}
- for field_name, field_type in item.__annotations__.items():
- # we modify the field annotations to make it optional
- if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:
- field_definitions[field_name] = (Optional[field_type], None)
+ # traverse the object's mro and get all annotations
+ # until we find a BaseModel.
+ for obj in item.mro():
+ if issubclass(obj, BaseModel):
+ for field_name, field_type in get_type_hints(obj).items():
+ # we modify the field annotations to make it optional
+ if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:
+ field_definitions[field_name] = (Optional[field_type], None)
+ else:
+ field_definitions[field_name] = (field_type, None)
else:
- field_definitions[field_name] = (field_type, None)
- cls._models[item] = create_model("Partial" + item.__name__, **field_definitions) # type: ignore
+ break
+ cls._models[item] = create_model(f"Partial{item.__name__}", **field_definitions) # type: ignore
return cast(Type[T], cls._models.get(item))
| {"golden_diff": "diff --git a/starlite/types.py b/starlite/types.py\n--- a/starlite/types.py\n+++ b/starlite/types.py\n@@ -11,6 +11,7 @@\n TypeVar,\n Union,\n cast,\n+ get_type_hints,\n )\n \n from openapi_schema_pydantic.v3.v3_1_0.header import Header\n@@ -23,7 +24,7 @@\n from starlette.responses import Response as StarletteResponse\n from typing_extensions import Literal, Protocol, runtime_checkable\n \n-from starlite.exceptions import HTTPException\n+from starlite.exceptions import HTTPException, ImproperlyConfiguredException\n from starlite.response import Response\n \n try:\n@@ -103,15 +104,23 @@\n \"\"\"\n Modifies a given T subclass of BaseModel to be all optional\n \"\"\"\n+ if not issubclass(item, BaseModel):\n+ raise ImproperlyConfiguredException(f\"Partial[{item}] must be a subclass of BaseModel\")\n if not cls._models.get(item):\n field_definitions: Dict[str, Tuple[Any, None]] = {}\n- for field_name, field_type in item.__annotations__.items():\n- # we modify the field annotations to make it optional\n- if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:\n- field_definitions[field_name] = (Optional[field_type], None)\n+ # traverse the object's mro and get all annotations\n+ # until we find a BaseModel.\n+ for obj in item.mro():\n+ if issubclass(obj, BaseModel):\n+ for field_name, field_type in get_type_hints(obj).items():\n+ # we modify the field annotations to make it optional\n+ if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:\n+ field_definitions[field_name] = (Optional[field_type], None)\n+ else:\n+ field_definitions[field_name] = (field_type, None)\n else:\n- field_definitions[field_name] = (field_type, None)\n- cls._models[item] = create_model(\"Partial\" + item.__name__, **field_definitions) # type: ignore\n+ break\n+ cls._models[item] = create_model(f\"Partial{item.__name__}\", **field_definitions) # type: ignore\n return cast(Type[T], cls._models.get(item))\n", "issue": "Partial doesn't work with inherited fields\n```python\r\nfrom starlite import Partial, get\r\nfrom pydantic import BaseModel\r\n\r\nclass Parent(BaseModel):\r\n foo: int\r\n\r\nclass Child(Parent):\r\n bar: int\r\n\r\n@get(\"/test\")\r\ndef example(obj: Partial[Child]) -> None:\r\n print(obj)\r\n```\r\n\r\nIn the above example, `Partial[Child]` only accepts the field `bar: Optional[int]` and ignores all fields from the superclass. I couldn't find this behaviour documented anywhere so I assume this isn't intended?\r\n\r\n```python\r\nPython 3.10.5 (main, Jun 23 2022, 17:14:57) [Clang 13.1.6 (clang-1316.0.21.2.5)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from starlite import Partial\r\n>>> from pydantic import BaseModel\r\n>>> class Parent(BaseModel):\r\n... foo: int\r\n...\r\n>>> class Child(Parent):\r\n... 
bar: int\r\n...\r\n>>> PartialChild = Partial[Child]\r\n>>> PartialChild.__annotations__\r\n{'bar': typing.Optional[int]}\r\n>>>\r\n```\r\n\r\nThis behaviour can also be seen above\r\n\n", "before_files": [{"content": "from typing import (\n TYPE_CHECKING,\n Any,\n Awaitable,\n Callable,\n Dict,\n Generic,\n Optional,\n Tuple,\n Type,\n TypeVar,\n Union,\n cast,\n)\n\nfrom openapi_schema_pydantic.v3.v3_1_0.header import Header\nfrom pydantic import BaseModel, create_model\nfrom pydantic.typing import AnyCallable\nfrom starlette.exceptions import HTTPException as StarletteHTTPException\nfrom starlette.middleware import Middleware as StarletteMiddleware\nfrom starlette.middleware.base import BaseHTTPMiddleware\nfrom starlette.requests import HTTPConnection\nfrom starlette.responses import Response as StarletteResponse\nfrom typing_extensions import Literal, Protocol, runtime_checkable\n\nfrom starlite.exceptions import HTTPException\nfrom starlite.response import Response\n\ntry:\n # python 3.9 changed these variable\n from typing import _UnionGenericAlias as GenericAlias # type: ignore\nexcept ImportError: # pragma: no cover\n from typing import _GenericAlias as GenericAlias # type: ignore\n\nif TYPE_CHECKING:\n from starlette.types import ASGIApp, Receive, Scope, Send\n\n from starlite.connection import Request # noqa: TC004\n from starlite.controller import Controller # noqa: TC004\n from starlite.datastructures import State # noqa: TC004\n from starlite.handlers import BaseRouteHandler # noqa: TC004\n from starlite.router import Router # noqa: TC004\nelse:\n Request = Any\n WebSocket = Any\n BaseRouteHandler = Any\n Controller = Any\n Router = Any\n State = Any\n\nT = TypeVar(\"T\", bound=BaseModel)\nH = TypeVar(\"H\", bound=HTTPConnection)\n\nExceptionHandler = Callable[\n [Request, Union[Exception, HTTPException, StarletteHTTPException]], Union[Response, StarletteResponse]\n]\nLifeCycleHandler = Union[\n Callable[[], Any],\n Callable[[State], Any],\n Callable[[], Awaitable[Any]],\n Callable[[State], Awaitable[Any]],\n]\nGuard = Union[Callable[[H, BaseRouteHandler], Awaitable[None]], Callable[[H, BaseRouteHandler], None]]\nMethod = Union[Literal[\"GET\"], Literal[\"POST\"], Literal[\"DELETE\"], Literal[\"PATCH\"], Literal[\"PUT\"], Literal[\"HEAD\"]]\nReservedKwargs = Union[\n Literal[\"request\"],\n Literal[\"socket\"],\n Literal[\"headers\"],\n Literal[\"query\"],\n Literal[\"cookies\"],\n Literal[\"state\"],\n Literal[\"data\"],\n]\nControllerRouterHandler = Union[Type[Controller], BaseRouteHandler, Router, AnyCallable]\n\n# connection-lifecycle hook handlers\nBeforeRequestHandler = Union[Callable[[Request], Any], Callable[[Request], Awaitable[Any]]]\nAfterRequestHandler = Union[\n Callable[[Response], Response],\n Callable[[Response], Awaitable[Response]],\n Callable[[StarletteResponse], StarletteResponse],\n Callable[[StarletteResponse], Awaitable[StarletteResponse]],\n]\nAfterResponseHandler = Union[Callable[[Request], None], Callable[[Request], Awaitable[None]]]\n\nAsyncAnyCallable = Callable[..., Awaitable[Any]]\nCacheKeyBuilder = Callable[[Request], str]\n\n\n@runtime_checkable\nclass MiddlewareProtocol(Protocol):\n def __init__(self, app: \"ASGIApp\"): # pragma: no cover\n ...\n\n async def __call__(self, scope: \"Scope\", receive: \"Receive\", send: \"Send\") -> None: # pragma: no cover\n ...\n\n\nclass Partial(Generic[T]):\n _models: Dict[Type[T], Any] = {}\n\n def __class_getitem__(cls, item: Type[T]) -> Type[T]:\n \"\"\"\n Modifies a given T subclass of BaseModel to be all 
optional\n \"\"\"\n if not cls._models.get(item):\n field_definitions: Dict[str, Tuple[Any, None]] = {}\n for field_name, field_type in item.__annotations__.items():\n # we modify the field annotations to make it optional\n if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:\n field_definitions[field_name] = (Optional[field_type], None)\n else:\n field_definitions[field_name] = (field_type, None)\n cls._models[item] = create_model(\"Partial\" + item.__name__, **field_definitions) # type: ignore\n return cast(Type[T], cls._models.get(item))\n\n\nclass ResponseHeader(Header):\n value: Any = ...\n\n\nMiddleware = Union[StarletteMiddleware, Type[BaseHTTPMiddleware], Type[MiddlewareProtocol]]\n", "path": "starlite/types.py"}], "after_files": [{"content": "from typing import (\n TYPE_CHECKING,\n Any,\n Awaitable,\n Callable,\n Dict,\n Generic,\n Optional,\n Tuple,\n Type,\n TypeVar,\n Union,\n cast,\n get_type_hints,\n)\n\nfrom openapi_schema_pydantic.v3.v3_1_0.header import Header\nfrom pydantic import BaseModel, create_model\nfrom pydantic.typing import AnyCallable\nfrom starlette.exceptions import HTTPException as StarletteHTTPException\nfrom starlette.middleware import Middleware as StarletteMiddleware\nfrom starlette.middleware.base import BaseHTTPMiddleware\nfrom starlette.requests import HTTPConnection\nfrom starlette.responses import Response as StarletteResponse\nfrom typing_extensions import Literal, Protocol, runtime_checkable\n\nfrom starlite.exceptions import HTTPException, ImproperlyConfiguredException\nfrom starlite.response import Response\n\ntry:\n # python 3.9 changed these variable\n from typing import _UnionGenericAlias as GenericAlias # type: ignore\nexcept ImportError: # pragma: no cover\n from typing import _GenericAlias as GenericAlias # type: ignore\n\nif TYPE_CHECKING:\n from starlette.types import ASGIApp, Receive, Scope, Send\n\n from starlite.connection import Request # noqa: TC004\n from starlite.controller import Controller # noqa: TC004\n from starlite.datastructures import State # noqa: TC004\n from starlite.handlers import BaseRouteHandler # noqa: TC004\n from starlite.router import Router # noqa: TC004\nelse:\n Request = Any\n WebSocket = Any\n BaseRouteHandler = Any\n Controller = Any\n Router = Any\n State = Any\n\nT = TypeVar(\"T\", bound=BaseModel)\nH = TypeVar(\"H\", bound=HTTPConnection)\n\nExceptionHandler = Callable[\n [Request, Union[Exception, HTTPException, StarletteHTTPException]], Union[Response, StarletteResponse]\n]\nLifeCycleHandler = Union[\n Callable[[], Any],\n Callable[[State], Any],\n Callable[[], Awaitable[Any]],\n Callable[[State], Awaitable[Any]],\n]\nGuard = Union[Callable[[H, BaseRouteHandler], Awaitable[None]], Callable[[H, BaseRouteHandler], None]]\nMethod = Union[Literal[\"GET\"], Literal[\"POST\"], Literal[\"DELETE\"], Literal[\"PATCH\"], Literal[\"PUT\"], Literal[\"HEAD\"]]\nReservedKwargs = Union[\n Literal[\"request\"],\n Literal[\"socket\"],\n Literal[\"headers\"],\n Literal[\"query\"],\n Literal[\"cookies\"],\n Literal[\"state\"],\n Literal[\"data\"],\n]\nControllerRouterHandler = Union[Type[Controller], BaseRouteHandler, Router, AnyCallable]\n\n# connection-lifecycle hook handlers\nBeforeRequestHandler = Union[Callable[[Request], Any], Callable[[Request], Awaitable[Any]]]\nAfterRequestHandler = Union[\n Callable[[Response], Response],\n Callable[[Response], Awaitable[Response]],\n Callable[[StarletteResponse], StarletteResponse],\n Callable[[StarletteResponse], 
Awaitable[StarletteResponse]],\n]\nAfterResponseHandler = Union[Callable[[Request], None], Callable[[Request], Awaitable[None]]]\n\nAsyncAnyCallable = Callable[..., Awaitable[Any]]\nCacheKeyBuilder = Callable[[Request], str]\n\n\n@runtime_checkable\nclass MiddlewareProtocol(Protocol):\n def __init__(self, app: \"ASGIApp\"): # pragma: no cover\n ...\n\n async def __call__(self, scope: \"Scope\", receive: \"Receive\", send: \"Send\") -> None: # pragma: no cover\n ...\n\n\nclass Partial(Generic[T]):\n _models: Dict[Type[T], Any] = {}\n\n def __class_getitem__(cls, item: Type[T]) -> Type[T]:\n \"\"\"\n Modifies a given T subclass of BaseModel to be all optional\n \"\"\"\n if not issubclass(item, BaseModel):\n raise ImproperlyConfiguredException(f\"Partial[{item}] must be a subclass of BaseModel\")\n if not cls._models.get(item):\n field_definitions: Dict[str, Tuple[Any, None]] = {}\n # traverse the object's mro and get all annotations\n # until we find a BaseModel.\n for obj in item.mro():\n if issubclass(obj, BaseModel):\n for field_name, field_type in get_type_hints(obj).items():\n # we modify the field annotations to make it optional\n if not isinstance(field_type, GenericAlias) or type(None) not in field_type.__args__:\n field_definitions[field_name] = (Optional[field_type], None)\n else:\n field_definitions[field_name] = (field_type, None)\n else:\n break\n cls._models[item] = create_model(f\"Partial{item.__name__}\", **field_definitions) # type: ignore\n return cast(Type[T], cls._models.get(item))\n\n\nclass ResponseHeader(Header):\n value: Any = ...\n\n\nMiddleware = Union[StarletteMiddleware, Type[BaseHTTPMiddleware], Type[MiddlewareProtocol]]\n", "path": "starlite/types.py"}]} | 1,791 | 518 |
gh_patches_debug_64987 | rasdani/github-patches | git_diff | googleapis__google-auth-library-python-937 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
new cachetools version (5.0.0+) can't work with google-auth
`cachetools` has released a new version (5.0.0) that conflicts with google-auth's requirement of it being <5. This prevents updates to the `cachetools` package and poses a potential security concern (as updates to it are no longer possible).
```
The conflict is caused by:
The user requested cachetools==5.0.0
google-auth 2.3.3 depends on cachetools<5.0 and >=2.0.0
```
The issue seems to be in https://github.com/googleapis/google-auth-library-python/blob/3c3fbf40b07e090f2be7fac5b304dbf438b5cd6c/setup.py#L23
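
For illustration, a minimal sketch using the `packaging` library (the library choice is an assumption here; any PEP 440 specifier implementation behaves the same) shows why pip rejects 5.0.0 under the current pin and why a relaxed upper bound would admit it:

```python
from packaging.specifiers import SpecifierSet  # assumes 'packaging' is installed

current = SpecifierSet(">=2.0.0,<5.0")   # the pin in google-auth 2.3.3
relaxed = SpecifierSet(">=2.0.0,<6.0")   # a hypothetical loosened upper bound

print("5.0.0" in current)  # False -> pip reports the resolution conflict
print("5.0.0" in relaxed)  # True  -> the new cachetools would be accepted
```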
#### Environment details
- OS: alpine3.11
- Python version: python:3.8.6
- pip version: 20.3.3
- `google-auth` version: 2.3.3 (latest at time of writing)
#### Steps to reproduce
1. Try a `pip install` using the latest `cachetools` together with the latest `google-auth`
2. pip fails
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2014 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 from setuptools import find_packages
19 from setuptools import setup
20
21
22 DEPENDENCIES = (
23 "cachetools>=2.0.0,<5.0",
24 "pyasn1-modules>=0.2.1",
25 # rsa==4.5 is the last version to support 2.7
26 # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233
27 'rsa<4.6; python_version < "3.6"',
28 'rsa>=3.1.4,<5; python_version >= "3.6"',
29 # install enum34 to support 2.7. enum34 only works up to python version 3.3.
30 'enum34>=1.1.10; python_version < "3.4"',
31 "six>=1.9.0",
32 )
33
34 extras = {
35 "aiohttp": [
36 "aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'",
37 "requests >= 2.20.0, < 3.0.0dev",
38 ],
39 "pyopenssl": "pyopenssl>=20.0.0",
40 "reauth": "pyu2f>=0.1.5",
41 }
42
43 with io.open("README.rst", "r") as fh:
44 long_description = fh.read()
45
46 package_root = os.path.abspath(os.path.dirname(__file__))
47
48 version = {}
49 with open(os.path.join(package_root, "google/auth/version.py")) as fp:
50 exec(fp.read(), version)
51 version = version["__version__"]
52
53 setup(
54 name="google-auth",
55 version=version,
56 author="Google Cloud Platform",
57 author_email="[email protected]",
58 description="Google Authentication Library",
59 long_description=long_description,
60 url="https://github.com/googleapis/google-auth-library-python",
61 packages=find_packages(exclude=("tests*", "system_tests*")),
62 namespace_packages=("google",),
63 install_requires=DEPENDENCIES,
64 extras_require=extras,
65 python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*",
66 license="Apache 2.0",
67 keywords="google auth oauth client",
68 classifiers=[
69 "Programming Language :: Python :: 3",
70 "Programming Language :: Python :: 3.6",
71 "Programming Language :: Python :: 3.7",
72 "Programming Language :: Python :: 3.8",
73 "Programming Language :: Python :: 3.9",
74 "Programming Language :: Python :: 3.10",
75 "Development Status :: 5 - Production/Stable",
76 "Intended Audience :: Developers",
77 "License :: OSI Approved :: Apache Software License",
78 "Operating System :: POSIX",
79 "Operating System :: Microsoft :: Windows",
80 "Operating System :: MacOS :: MacOS X",
81 "Operating System :: OS Independent",
82 "Topic :: Internet :: WWW/HTTP",
83 ],
84 )
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -20,7 +20,7 @@
DEPENDENCIES = (
- "cachetools>=2.0.0,<5.0",
+ "cachetools>=2.0.0,<6.0",
"pyasn1-modules>=0.2.1",
# rsa==4.5 is the last version to support 2.7
# https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -20,7 +20,7 @@\n \n \n DEPENDENCIES = (\n- \"cachetools>=2.0.0,<5.0\",\n+ \"cachetools>=2.0.0,<6.0\",\n \"pyasn1-modules>=0.2.1\",\n # rsa==4.5 is the last version to support 2.7\n # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233\n", "issue": "new cachetools version (5.0.0+) can't work with google-auth\n`cachetools` has released a new version (5.0.0) which conflicts with google-auth requirements of it being <5, this prevents updates to the `cachetools` package and pose a potential security concern (as updates are no longer possible to it)\r\n\r\n```\r\nThe conflict is caused by:\r\n The user requested cachetools==5.0.0\r\n google-auth 2.3.3 depends on cachetools<5.0 and >=2.0.0\r\n```\r\n\r\nissue seems in https://github.com/googleapis/google-auth-library-python/blob/3c3fbf40b07e090f2be7fac5b304dbf438b5cd6c/setup.py#L23 \r\n\r\n#### Environment details\r\n\r\n - OS: alpine3.11\r\n - Python version: python:3.8.6\r\n - pip version: 20.3.3\r\n - `google-auth` version: 2.3.3 (latest at time of writing)\r\n\r\n#### Steps to reproduce\r\n\r\n 1. try pip install using latest `cachetools` with latest `google-auth`\r\n 2. pip fails\r\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\nDEPENDENCIES = (\n \"cachetools>=2.0.0,<5.0\",\n \"pyasn1-modules>=0.2.1\",\n # rsa==4.5 is the last version to support 2.7\n # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233\n 'rsa<4.6; python_version < \"3.6\"',\n 'rsa>=3.1.4,<5; python_version >= \"3.6\"',\n # install enum34 to support 2.7. 
enum34 only works up to python version 3.3.\n 'enum34>=1.1.10; python_version < \"3.4\"',\n \"six>=1.9.0\",\n)\n\nextras = {\n \"aiohttp\": [\n \"aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'\",\n \"requests >= 2.20.0, < 3.0.0dev\",\n ],\n \"pyopenssl\": \"pyopenssl>=20.0.0\",\n \"reauth\": \"pyu2f>=0.1.5\",\n}\n\nwith io.open(\"README.rst\", \"r\") as fh:\n long_description = fh.read()\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nversion = {}\nwith open(os.path.join(package_root, \"google/auth/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\nsetup(\n name=\"google-auth\",\n version=version,\n author=\"Google Cloud Platform\",\n author_email=\"[email protected]\",\n description=\"Google Authentication Library\",\n long_description=long_description,\n url=\"https://github.com/googleapis/google-auth-library-python\",\n packages=find_packages(exclude=(\"tests*\", \"system_tests*\")),\n namespace_packages=(\"google\",),\n install_requires=DEPENDENCIES,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*\",\n license=\"Apache 2.0\",\n keywords=\"google auth oauth client\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\nDEPENDENCIES = (\n \"cachetools>=2.0.0,<6.0\",\n \"pyasn1-modules>=0.2.1\",\n # rsa==4.5 is the last version to support 2.7\n # https://github.com/sybrenstuvel/python-rsa/issues/152#issuecomment-643470233\n 'rsa<4.6; python_version < \"3.6\"',\n 'rsa>=3.1.4,<5; python_version >= \"3.6\"',\n # install enum34 to support 2.7. 
enum34 only works up to python version 3.3.\n 'enum34>=1.1.10; python_version < \"3.4\"',\n \"six>=1.9.0\",\n)\n\nextras = {\n \"aiohttp\": [\n \"aiohttp >= 3.6.2, < 4.0.0dev; python_version>='3.6'\",\n \"requests >= 2.20.0, < 3.0.0dev\",\n ],\n \"pyopenssl\": \"pyopenssl>=20.0.0\",\n \"reauth\": \"pyu2f>=0.1.5\",\n}\n\nwith io.open(\"README.rst\", \"r\") as fh:\n long_description = fh.read()\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nversion = {}\nwith open(os.path.join(package_root, \"google/auth/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\nsetup(\n name=\"google-auth\",\n version=version,\n author=\"Google Cloud Platform\",\n author_email=\"[email protected]\",\n description=\"Google Authentication Library\",\n long_description=long_description,\n url=\"https://github.com/googleapis/google-auth-library-python\",\n packages=find_packages(exclude=(\"tests*\", \"system_tests*\")),\n namespace_packages=(\"google\",),\n install_requires=DEPENDENCIES,\n extras_require=extras,\n python_requires=\">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*\",\n license=\"Apache 2.0\",\n keywords=\"google auth oauth client\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}]} | 1,500 | 138 |
gh_patches_debug_20415 | rasdani/github-patches | git_diff | ansible__awx-12803 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Token and Session Expiration never run after the first time
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Bug Summary
It looks like when we implemented token and session cleanup way back in https://github.com/ansible/awx/pull/3856, we populated the recurrence rule incorrectly:
https://github.com/ansible/awx/blob/8a06ffbe15c9f8e68b1da86e5ca7daf5ecfd6da4/awx/main/migrations/_create_system_jobs.py#L39
This schedule will only ever run once due to `COUNT=1`... we should omit that so that it runs periodically.
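
For illustration, a minimal sketch with `python-dateutil` (the library AWX uses to expand schedules; the start date here is arbitrary) shows the difference:

```python
from dateutil.rrule import rrulestr  # assumes python-dateutil is installed

# dateutil expects DTSTART and RRULE on separate lines
once = rrulestr("DTSTART:20220101T000000Z\nRRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1")
weekly = rrulestr("DTSTART:20220101T000000Z\nRRULE:FREQ=WEEKLY;INTERVAL=1")

print(list(once))               # a single datetime: COUNT=1 ends the recurrence
print(weekly.after(weekly[0]))  # one week later: without COUNT it keeps firing
```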
### AWX version
latest
### Select the relevant components
- [ ] UI
- [X] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
minishift
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
install awx
### Expected results
periodic running of these management jobs
### Actual results
the jobs only run once
### Additional information
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awx/main/migrations/_create_system_jobs.py`
Content:
```
1 import logging
2
3 from django.utils.timezone import now
4
5 logger = logging.getLogger('awx.main.migrations')
6
7 __all__ = ['create_collection_jt', 'create_clearsessions_jt', 'create_cleartokens_jt']
8
9 '''
10 These methods are called by migrations to create various system job templates
11
12 Create default system job templates if not present. Create default schedules
13 only if new system job templates were created (i.e. new database).
14 '''
15
16
17 def create_clearsessions_jt(apps, schema_editor):
18
19 SystemJobTemplate = apps.get_model('main', 'SystemJobTemplate')
20 Schedule = apps.get_model('main', 'Schedule')
21 ContentType = apps.get_model('contenttypes', 'ContentType')
22 sjt_ct = ContentType.objects.get_for_model(SystemJobTemplate)
23 now_dt = now()
24 schedule_time = now_dt.strftime('%Y%m%dT%H%M%SZ')
25
26 sjt, created = SystemJobTemplate.objects.get_or_create(
27 job_type='cleanup_sessions',
28 defaults=dict(
29 name='Cleanup Expired Sessions',
30 description='Cleans out expired browser sessions',
31 polymorphic_ctype=sjt_ct,
32 created=now_dt,
33 modified=now_dt,
34 ),
35 )
36 if created:
37 sched = Schedule(
38 name='Cleanup Expired Sessions',
39 rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,
40 description='Cleans out expired browser sessions',
41 enabled=True,
42 created=now_dt,
43 modified=now_dt,
44 extra_data={},
45 )
46 sched.unified_job_template = sjt
47 sched.save()
48
49
50 def create_cleartokens_jt(apps, schema_editor):
51
52 SystemJobTemplate = apps.get_model('main', 'SystemJobTemplate')
53 Schedule = apps.get_model('main', 'Schedule')
54 ContentType = apps.get_model('contenttypes', 'ContentType')
55 sjt_ct = ContentType.objects.get_for_model(SystemJobTemplate)
56 now_dt = now()
57 schedule_time = now_dt.strftime('%Y%m%dT%H%M%SZ')
58
59 sjt, created = SystemJobTemplate.objects.get_or_create(
60 job_type='cleanup_tokens',
61 defaults=dict(
62 name='Cleanup Expired OAuth 2 Tokens',
63 description='Cleanup expired OAuth 2 access and refresh tokens',
64 polymorphic_ctype=sjt_ct,
65 created=now_dt,
66 modified=now_dt,
67 ),
68 )
69 if created:
70 sched = Schedule(
71 name='Cleanup Expired OAuth 2 Tokens',
72 rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,
73 description='Removes expired OAuth 2 access and refresh tokens',
74 enabled=True,
75 created=now_dt,
76 modified=now_dt,
77 extra_data={},
78 )
79 sched.unified_job_template = sjt
80 sched.save()
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/awx/main/migrations/_create_system_jobs.py b/awx/main/migrations/_create_system_jobs.py
--- a/awx/main/migrations/_create_system_jobs.py
+++ b/awx/main/migrations/_create_system_jobs.py
@@ -36,7 +36,7 @@
if created:
sched = Schedule(
name='Cleanup Expired Sessions',
- rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,
+ rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1' % schedule_time,
description='Cleans out expired browser sessions',
enabled=True,
created=now_dt,
@@ -69,7 +69,7 @@
if created:
sched = Schedule(
name='Cleanup Expired OAuth 2 Tokens',
- rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,
+ rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1' % schedule_time,
description='Removes expired OAuth 2 access and refresh tokens',
enabled=True,
created=now_dt,
| {"golden_diff": "diff --git a/awx/main/migrations/_create_system_jobs.py b/awx/main/migrations/_create_system_jobs.py\n--- a/awx/main/migrations/_create_system_jobs.py\n+++ b/awx/main/migrations/_create_system_jobs.py\n@@ -36,7 +36,7 @@\n if created:\n sched = Schedule(\n name='Cleanup Expired Sessions',\n- rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,\n+ rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1' % schedule_time,\n description='Cleans out expired browser sessions',\n enabled=True,\n created=now_dt,\n@@ -69,7 +69,7 @@\n if created:\n sched = Schedule(\n name='Cleanup Expired OAuth 2 Tokens',\n- rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,\n+ rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1' % schedule_time,\n description='Removes expired OAuth 2 access and refresh tokens',\n enabled=True,\n created=now_dt,\n", "issue": "Token and Session Expiration never run after the first time\n### Please confirm the following\n\n- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).\n- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.\n- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.\n\n### Bug Summary\n\nLooks like when we implemented token and session cleanup way back in https://github.com/ansible/awx/pull/3856\r\n\r\nWe populated the recurrence rule incorrectly:\r\n\r\nhttps://github.com/ansible/awx/blob/8a06ffbe15c9f8e68b1da86e5ca7daf5ecfd6da4/awx/main/migrations/_create_system_jobs.py#L39\r\n\r\nThis schedule will only ever run once due to `COUNT=1`.... we should omit that so that it will periodically run.\n\n### AWX version\n\nlatest\n\n### Select the relevant components\n\n- [ ] UI\n- [X] API\n- [ ] Docs\n- [ ] Collection\n- [ ] CLI\n- [ ] Other\n\n### Installation method\n\nminishift\n\n### Modifications\n\nno\n\n### Ansible version\n\n_No response_\n\n### Operating system\n\n_No response_\n\n### Web browser\n\n_No response_\n\n### Steps to reproduce\n\ninstall awx\n\n### Expected results\n\nperiodic running of these management jobs\n\n### Actual results\n\nthe jobs only run once\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "import logging\n\nfrom django.utils.timezone import now\n\nlogger = logging.getLogger('awx.main.migrations')\n\n__all__ = ['create_collection_jt', 'create_clearsessions_jt', 'create_cleartokens_jt']\n\n'''\nThese methods are called by migrations to create various system job templates\n\nCreate default system job templates if not present. Create default schedules\nonly if new system job templates were created (i.e. 
new database).\n'''\n\n\ndef create_clearsessions_jt(apps, schema_editor):\n\n SystemJobTemplate = apps.get_model('main', 'SystemJobTemplate')\n Schedule = apps.get_model('main', 'Schedule')\n ContentType = apps.get_model('contenttypes', 'ContentType')\n sjt_ct = ContentType.objects.get_for_model(SystemJobTemplate)\n now_dt = now()\n schedule_time = now_dt.strftime('%Y%m%dT%H%M%SZ')\n\n sjt, created = SystemJobTemplate.objects.get_or_create(\n job_type='cleanup_sessions',\n defaults=dict(\n name='Cleanup Expired Sessions',\n description='Cleans out expired browser sessions',\n polymorphic_ctype=sjt_ct,\n created=now_dt,\n modified=now_dt,\n ),\n )\n if created:\n sched = Schedule(\n name='Cleanup Expired Sessions',\n rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,\n description='Cleans out expired browser sessions',\n enabled=True,\n created=now_dt,\n modified=now_dt,\n extra_data={},\n )\n sched.unified_job_template = sjt\n sched.save()\n\n\ndef create_cleartokens_jt(apps, schema_editor):\n\n SystemJobTemplate = apps.get_model('main', 'SystemJobTemplate')\n Schedule = apps.get_model('main', 'Schedule')\n ContentType = apps.get_model('contenttypes', 'ContentType')\n sjt_ct = ContentType.objects.get_for_model(SystemJobTemplate)\n now_dt = now()\n schedule_time = now_dt.strftime('%Y%m%dT%H%M%SZ')\n\n sjt, created = SystemJobTemplate.objects.get_or_create(\n job_type='cleanup_tokens',\n defaults=dict(\n name='Cleanup Expired OAuth 2 Tokens',\n description='Cleanup expired OAuth 2 access and refresh tokens',\n polymorphic_ctype=sjt_ct,\n created=now_dt,\n modified=now_dt,\n ),\n )\n if created:\n sched = Schedule(\n name='Cleanup Expired OAuth 2 Tokens',\n rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1' % schedule_time,\n description='Removes expired OAuth 2 access and refresh tokens',\n enabled=True,\n created=now_dt,\n modified=now_dt,\n extra_data={},\n )\n sched.unified_job_template = sjt\n sched.save()\n", "path": "awx/main/migrations/_create_system_jobs.py"}], "after_files": [{"content": "import logging\n\nfrom django.utils.timezone import now\n\nlogger = logging.getLogger('awx.main.migrations')\n\n__all__ = ['create_collection_jt', 'create_clearsessions_jt', 'create_cleartokens_jt']\n\n'''\nThese methods are called by migrations to create various system job templates\n\nCreate default system job templates if not present. Create default schedules\nonly if new system job templates were created (i.e. 
new database).\n'''\n\n\ndef create_clearsessions_jt(apps, schema_editor):\n\n SystemJobTemplate = apps.get_model('main', 'SystemJobTemplate')\n Schedule = apps.get_model('main', 'Schedule')\n ContentType = apps.get_model('contenttypes', 'ContentType')\n sjt_ct = ContentType.objects.get_for_model(SystemJobTemplate)\n now_dt = now()\n schedule_time = now_dt.strftime('%Y%m%dT%H%M%SZ')\n\n sjt, created = SystemJobTemplate.objects.get_or_create(\n job_type='cleanup_sessions',\n defaults=dict(\n name='Cleanup Expired Sessions',\n description='Cleans out expired browser sessions',\n polymorphic_ctype=sjt_ct,\n created=now_dt,\n modified=now_dt,\n ),\n )\n if created:\n sched = Schedule(\n name='Cleanup Expired Sessions',\n rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1' % schedule_time,\n description='Cleans out expired browser sessions',\n enabled=True,\n created=now_dt,\n modified=now_dt,\n extra_data={},\n )\n sched.unified_job_template = sjt\n sched.save()\n\n\ndef create_cleartokens_jt(apps, schema_editor):\n\n SystemJobTemplate = apps.get_model('main', 'SystemJobTemplate')\n Schedule = apps.get_model('main', 'Schedule')\n ContentType = apps.get_model('contenttypes', 'ContentType')\n sjt_ct = ContentType.objects.get_for_model(SystemJobTemplate)\n now_dt = now()\n schedule_time = now_dt.strftime('%Y%m%dT%H%M%SZ')\n\n sjt, created = SystemJobTemplate.objects.get_or_create(\n job_type='cleanup_tokens',\n defaults=dict(\n name='Cleanup Expired OAuth 2 Tokens',\n description='Cleanup expired OAuth 2 access and refresh tokens',\n polymorphic_ctype=sjt_ct,\n created=now_dt,\n modified=now_dt,\n ),\n )\n if created:\n sched = Schedule(\n name='Cleanup Expired OAuth 2 Tokens',\n rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1' % schedule_time,\n description='Removes expired OAuth 2 access and refresh tokens',\n enabled=True,\n created=now_dt,\n modified=now_dt,\n extra_data={},\n )\n sched.unified_job_template = sjt\n sched.save()\n", "path": "awx/main/migrations/_create_system_jobs.py"}]} | 1,380 | 273 |
gh_patches_debug_20553 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-1605 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] permission error if no consume share is mapped
### Description
Starting with Paperless-ngx 1.8.0, Paperless on Docker does not start when no consume share is mapped:
```
SystemCheckError: System check identified some issues:
ERRORS:
?: PAPERLESS_CONSUMPTION_DIR is not writeable
HINT: Set the permissions of
drwxr-xr-x /usr/src/paperless/consume
to be writeable by the user running the Paperless services
```
I have some containers running without a mapped share, as I don't need a consumption folder. This was not an issue in previous versions; I assume the permissions of the folder in the Docker container have changed.
### Steps to reproduce
1. Create a docker-compose setup without a mapped consumption folder
2. Look at the error message
### Webserver logs
```bash
SystemCheckError: System check identified some issues:
ERRORS:
?: PAPERLESS_CONSUMPTION_DIR is not writeable
HINT: Set the permissions of
drwxr-xr-x /usr/src/paperless/consume
to be writeable by the user running the Paperless services
```
### Paperless-ngx version
1.8.0
### Host OS
docker
### Installation method
Docker - official image
### Browser
_No response_
### Configuration changes
_No response_
### Other
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/paperless/checks.py`
Content:
```
1 import os
2 import shutil
3 import stat
4
5 from django.conf import settings
6 from django.core.checks import Error
7 from django.core.checks import register
8 from django.core.checks import Warning
9
10 exists_message = "{} is set but doesn't exist."
11 exists_hint = "Create a directory at {}"
12 writeable_message = "{} is not writeable"
13 writeable_hint = (
14 "Set the permissions of {} to be writeable by the user running the "
15 "Paperless services"
16 )
17
18
19 def path_check(var, directory):
20 messages = []
21 if directory:
22 if not os.path.isdir(directory):
23 messages.append(
24 Error(exists_message.format(var), exists_hint.format(directory)),
25 )
26 else:
27 test_file = os.path.join(
28 directory,
29 f"__paperless_write_test_{os.getpid()}__",
30 )
31 try:
32 with open(test_file, "w"):
33 pass
34 except PermissionError:
35 messages.append(
36 Error(
37 writeable_message.format(var),
38 writeable_hint.format(
39 f"\n{stat.filemode(os.stat(directory).st_mode)} "
40 f"{directory}\n",
41 ),
42 ),
43 )
44 finally:
45 if os.path.isfile(test_file):
46 os.remove(test_file)
47
48 return messages
49
50
51 @register()
52 def paths_check(app_configs, **kwargs):
53 """
54 Check the various paths for existence, readability and writeability
55 """
56
57 return (
58 path_check("PAPERLESS_DATA_DIR", settings.DATA_DIR)
59 + path_check("PAPERLESS_TRASH_DIR", settings.TRASH_DIR)
60 + path_check("PAPERLESS_MEDIA_ROOT", settings.MEDIA_ROOT)
61 + path_check("PAPERLESS_CONSUMPTION_DIR", settings.CONSUMPTION_DIR)
62 )
63
64
65 @register()
66 def binaries_check(app_configs, **kwargs):
67 """
68 Paperless requires the existence of a few binaries, so we do some checks
69 for those here.
70 """
71
72 error = "Paperless can't find {}. Without it, consumption is impossible."
73 hint = "Either it's not in your ${PATH} or it's not installed."
74
75 binaries = (settings.CONVERT_BINARY, "tesseract")
76
77 check_messages = []
78 for binary in binaries:
79 if shutil.which(binary) is None:
80 check_messages.append(Warning(error.format(binary), hint))
81
82 return check_messages
83
84
85 @register()
86 def debug_mode_check(app_configs, **kwargs):
87 if settings.DEBUG:
88 return [
89 Warning(
90 "DEBUG mode is enabled. Disable Debug mode. This is a serious "
91 "security issue, since it puts security overides in place which "
92 "are meant to be only used during development. This "
93 "also means that paperless will tell anyone various "
94 "debugging information when something goes wrong.",
95 ),
96 ]
97 else:
98 return []
99
100
101 @register()
102 def settings_values_check(app_configs, **kwargs):
103 """
104 Validates at least some of the user provided settings
105 """
106
107 def _ocrmypdf_settings_check():
108 """
109 Validates some of the arguments which will be provided to ocrmypdf
110 against the valid options. Use "ocrmypdf --help" to see the valid
111 inputs
112 """
113 msgs = []
114 if settings.OCR_OUTPUT_TYPE not in {
115 "pdfa",
116 "pdf",
117 "pdfa-1",
118 "pdfa-2",
119 "pdfa-3",
120 }:
121 msgs.append(
122 Error(f'OCR output type "{settings.OCR_OUTPUT_TYPE}" is not valid'),
123 )
124
125 if settings.OCR_MODE not in {"force", "skip", "redo_ocr"}:
126 msgs.append(Error(f'OCR output mode "{settings.OCR_MODE}" is not valid'))
127
128 if settings.OCR_CLEAN not in {"clean", "clean_final"}:
129 msgs.append(Error(f'OCR clean mode "{settings.OCR_CLEAN}" is not valid'))
130 return msgs
131
132 def _timezone_validate():
133 """
134 Validates the user provided timezone is a valid timezone
135 """
136 try:
137 import zoneinfo
138 except ImportError: # pragma: nocover
139 import backports.zoneinfo as zoneinfo
140 msgs = []
141 if settings.TIME_ZONE not in zoneinfo.available_timezones():
142 msgs.append(
143 Error(f'Timezone "{settings.TIME_ZONE}" is not a valid timezone'),
144 )
145 return msgs
146
147 return _ocrmypdf_settings_check() + _timezone_validate()
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/paperless/checks.py b/src/paperless/checks.py
--- a/src/paperless/checks.py
+++ b/src/paperless/checks.py
@@ -1,4 +1,6 @@
+import grp
import os
+import pwd
import shutil
import stat
@@ -32,12 +34,15 @@
with open(test_file, "w"):
pass
except PermissionError:
+ dir_stat = os.stat(directory)
+ dir_mode = stat.filemode(dir_stat.st_mode)
+ dir_owner = pwd.getpwuid(dir_stat.st_uid).pw_name
+ dir_group = grp.getgrgid(dir_stat.st_gid).gr_name
messages.append(
Error(
writeable_message.format(var),
writeable_hint.format(
- f"\n{stat.filemode(os.stat(directory).st_mode)} "
- f"{directory}\n",
+ f"\n{dir_mode} {dir_owner} {dir_group} " f"{directory}\n",
),
),
)
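A minimal standalone sketch of the hint the patch now produces (POSIX-only, since `pwd` and `grp` are Unix modules; the path is simply the one from the report):

```python
import grp
import os
import pwd
import stat

directory = "/usr/src/paperless/consume"  # hypothetical path from the report
st = os.stat(directory)
print(
    stat.filemode(st.st_mode),        # e.g. drwxr-xr-x
    pwd.getpwuid(st.st_uid).pw_name,  # owning user name
    grp.getgrgid(st.st_gid).gr_name,  # owning group name
    directory,
)
```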
| {"golden_diff": "diff --git a/src/paperless/checks.py b/src/paperless/checks.py\n--- a/src/paperless/checks.py\n+++ b/src/paperless/checks.py\n@@ -1,4 +1,6 @@\n+import grp\n import os\n+import pwd\n import shutil\n import stat\n \n@@ -32,12 +34,15 @@\n with open(test_file, \"w\"):\n pass\n except PermissionError:\n+ dir_stat = os.stat(directory)\n+ dir_mode = stat.filemode(dir_stat.st_mode)\n+ dir_owner = pwd.getpwuid(dir_stat.st_uid).pw_name\n+ dir_group = grp.getgrgid(dir_stat.st_gid).gr_name\n messages.append(\n Error(\n writeable_message.format(var),\n writeable_hint.format(\n- f\"\\n{stat.filemode(os.stat(directory).st_mode)} \"\n- f\"{directory}\\n\",\n+ f\"\\n{dir_mode} {dir_owner} {dir_group} \" f\"{directory}\\n\",\n ),\n ),\n )\n", "issue": "[BUG] permission error if no consume share is mapped\n### Description\r\n\r\nstarting with Paperless-ngx 1.8.0, Paperless on docker does not start when no consume share is mapped:\r\n\r\n`\r\n SystemCheckError: System check identified some issues:\r\n ERRORS:\r\n ?: PAPERLESS_CONSUMPTION_DIR is not writeable\r\n\tHINT: Set the permissions of \r\n drwxr-xr-x /usr/src/paperless/consume\r\n to be writeable by the user running the Paperless services\r\n\r\n`\r\n\r\nI've some containers running without a mapped share, as I don't need a consumption folder. Was no issue in previous versions; I assume the permissions of the folder in the docker container has changed.\r\n\r\n### Steps to reproduce\r\n\r\n1. create docker-compose without mapped consumption folder\r\n2. look at the error message\r\n\r\n### Webserver logs\r\n\r\n```bash\r\nSystemCheckError: System check identified some issues:\r\nERRORS:\r\n?: PAPERLESS_CONSUMPTION_DIR is not writeable\r\n\tHINT: Set the permissions of \r\ndrwxr-xr-x /usr/src/paperless/consume\r\n to be writeable by the user running the Paperless services\r\n```\r\n\r\n\r\n### Paperless-ngx version\r\n\r\n1.8,0\r\n\r\n### Host OS\r\n\r\ndocker\r\n\r\n### Installation method\r\n\r\nDocker - official image\r\n\r\n### Browser\r\n\r\n_No response_\r\n\r\n### Configuration changes\r\n\r\n_No response_\r\n\r\n### Other\r\n\r\n_No response_\n", "before_files": [{"content": "import os\nimport shutil\nimport stat\n\nfrom django.conf import settings\nfrom django.core.checks import Error\nfrom django.core.checks import register\nfrom django.core.checks import Warning\n\nexists_message = \"{} is set but doesn't exist.\"\nexists_hint = \"Create a directory at {}\"\nwriteable_message = \"{} is not writeable\"\nwriteable_hint = (\n \"Set the permissions of {} to be writeable by the user running the \"\n \"Paperless services\"\n)\n\n\ndef path_check(var, directory):\n messages = []\n if directory:\n if not os.path.isdir(directory):\n messages.append(\n Error(exists_message.format(var), exists_hint.format(directory)),\n )\n else:\n test_file = os.path.join(\n directory,\n f\"__paperless_write_test_{os.getpid()}__\",\n )\n try:\n with open(test_file, \"w\"):\n pass\n except PermissionError:\n messages.append(\n Error(\n writeable_message.format(var),\n writeable_hint.format(\n f\"\\n{stat.filemode(os.stat(directory).st_mode)} \"\n f\"{directory}\\n\",\n ),\n ),\n )\n finally:\n if os.path.isfile(test_file):\n os.remove(test_file)\n\n return messages\n\n\n@register()\ndef paths_check(app_configs, **kwargs):\n \"\"\"\n Check the various paths for existence, readability and writeability\n \"\"\"\n\n return (\n path_check(\"PAPERLESS_DATA_DIR\", settings.DATA_DIR)\n + path_check(\"PAPERLESS_TRASH_DIR\", settings.TRASH_DIR)\n + 
path_check(\"PAPERLESS_MEDIA_ROOT\", settings.MEDIA_ROOT)\n + path_check(\"PAPERLESS_CONSUMPTION_DIR\", settings.CONSUMPTION_DIR)\n )\n\n\n@register()\ndef binaries_check(app_configs, **kwargs):\n \"\"\"\n Paperless requires the existence of a few binaries, so we do some checks\n for those here.\n \"\"\"\n\n error = \"Paperless can't find {}. Without it, consumption is impossible.\"\n hint = \"Either it's not in your ${PATH} or it's not installed.\"\n\n binaries = (settings.CONVERT_BINARY, \"tesseract\")\n\n check_messages = []\n for binary in binaries:\n if shutil.which(binary) is None:\n check_messages.append(Warning(error.format(binary), hint))\n\n return check_messages\n\n\n@register()\ndef debug_mode_check(app_configs, **kwargs):\n if settings.DEBUG:\n return [\n Warning(\n \"DEBUG mode is enabled. Disable Debug mode. This is a serious \"\n \"security issue, since it puts security overides in place which \"\n \"are meant to be only used during development. This \"\n \"also means that paperless will tell anyone various \"\n \"debugging information when something goes wrong.\",\n ),\n ]\n else:\n return []\n\n\n@register()\ndef settings_values_check(app_configs, **kwargs):\n \"\"\"\n Validates at least some of the user provided settings\n \"\"\"\n\n def _ocrmypdf_settings_check():\n \"\"\"\n Validates some of the arguments which will be provided to ocrmypdf\n against the valid options. Use \"ocrmypdf --help\" to see the valid\n inputs\n \"\"\"\n msgs = []\n if settings.OCR_OUTPUT_TYPE not in {\n \"pdfa\",\n \"pdf\",\n \"pdfa-1\",\n \"pdfa-2\",\n \"pdfa-3\",\n }:\n msgs.append(\n Error(f'OCR output type \"{settings.OCR_OUTPUT_TYPE}\" is not valid'),\n )\n\n if settings.OCR_MODE not in {\"force\", \"skip\", \"redo_ocr\"}:\n msgs.append(Error(f'OCR output mode \"{settings.OCR_MODE}\" is not valid'))\n\n if settings.OCR_CLEAN not in {\"clean\", \"clean_final\"}:\n msgs.append(Error(f'OCR clean mode \"{settings.OCR_CLEAN}\" is not valid'))\n return msgs\n\n def _timezone_validate():\n \"\"\"\n Validates the user provided timezone is a valid timezone\n \"\"\"\n try:\n import zoneinfo\n except ImportError: # pragma: nocover\n import backports.zoneinfo as zoneinfo\n msgs = []\n if settings.TIME_ZONE not in zoneinfo.available_timezones():\n msgs.append(\n Error(f'Timezone \"{settings.TIME_ZONE}\" is not a valid timezone'),\n )\n return msgs\n\n return _ocrmypdf_settings_check() + _timezone_validate()\n", "path": "src/paperless/checks.py"}], "after_files": [{"content": "import grp\nimport os\nimport pwd\nimport shutil\nimport stat\n\nfrom django.conf import settings\nfrom django.core.checks import Error\nfrom django.core.checks import register\nfrom django.core.checks import Warning\n\nexists_message = \"{} is set but doesn't exist.\"\nexists_hint = \"Create a directory at {}\"\nwriteable_message = \"{} is not writeable\"\nwriteable_hint = (\n \"Set the permissions of {} to be writeable by the user running the \"\n \"Paperless services\"\n)\n\n\ndef path_check(var, directory):\n messages = []\n if directory:\n if not os.path.isdir(directory):\n messages.append(\n Error(exists_message.format(var), exists_hint.format(directory)),\n )\n else:\n test_file = os.path.join(\n directory,\n f\"__paperless_write_test_{os.getpid()}__\",\n )\n try:\n with open(test_file, \"w\"):\n pass\n except PermissionError:\n dir_stat = os.stat(directory)\n dir_mode = stat.filemode(dir_stat.st_mode)\n dir_owner = pwd.getpwuid(dir_stat.st_uid).pw_name\n dir_group = grp.getgrgid(dir_stat.st_gid).gr_name\n 
messages.append(\n Error(\n writeable_message.format(var),\n writeable_hint.format(\n f\"\\n{dir_mode} {dir_owner} {dir_group} \" f\"{directory}\\n\",\n ),\n ),\n )\n finally:\n if os.path.isfile(test_file):\n os.remove(test_file)\n\n return messages\n\n\n@register()\ndef paths_check(app_configs, **kwargs):\n \"\"\"\n Check the various paths for existence, readability and writeability\n \"\"\"\n\n return (\n path_check(\"PAPERLESS_DATA_DIR\", settings.DATA_DIR)\n + path_check(\"PAPERLESS_TRASH_DIR\", settings.TRASH_DIR)\n + path_check(\"PAPERLESS_MEDIA_ROOT\", settings.MEDIA_ROOT)\n + path_check(\"PAPERLESS_CONSUMPTION_DIR\", settings.CONSUMPTION_DIR)\n )\n\n\n@register()\ndef binaries_check(app_configs, **kwargs):\n \"\"\"\n Paperless requires the existence of a few binaries, so we do some checks\n for those here.\n \"\"\"\n\n error = \"Paperless can't find {}. Without it, consumption is impossible.\"\n hint = \"Either it's not in your ${PATH} or it's not installed.\"\n\n binaries = (settings.CONVERT_BINARY, \"tesseract\")\n\n check_messages = []\n for binary in binaries:\n if shutil.which(binary) is None:\n check_messages.append(Warning(error.format(binary), hint))\n\n return check_messages\n\n\n@register()\ndef debug_mode_check(app_configs, **kwargs):\n if settings.DEBUG:\n return [\n Warning(\n \"DEBUG mode is enabled. Disable Debug mode. This is a serious \"\n \"security issue, since it puts security overides in place which \"\n \"are meant to be only used during development. This \"\n \"also means that paperless will tell anyone various \"\n \"debugging information when something goes wrong.\",\n ),\n ]\n else:\n return []\n\n\n@register()\ndef settings_values_check(app_configs, **kwargs):\n \"\"\"\n Validates at least some of the user provided settings\n \"\"\"\n\n def _ocrmypdf_settings_check():\n \"\"\"\n Validates some of the arguments which will be provided to ocrmypdf\n against the valid options. Use \"ocrmypdf --help\" to see the valid\n inputs\n \"\"\"\n msgs = []\n if settings.OCR_OUTPUT_TYPE not in {\n \"pdfa\",\n \"pdf\",\n \"pdfa-1\",\n \"pdfa-2\",\n \"pdfa-3\",\n }:\n msgs.append(\n Error(f'OCR output type \"{settings.OCR_OUTPUT_TYPE}\" is not valid'),\n )\n\n if settings.OCR_MODE not in {\"force\", \"skip\", \"redo_ocr\"}:\n msgs.append(Error(f'OCR output mode \"{settings.OCR_MODE}\" is not valid'))\n\n if settings.OCR_CLEAN not in {\"clean\", \"clean_final\"}:\n msgs.append(Error(f'OCR clean mode \"{settings.OCR_CLEAN}\" is not valid'))\n return msgs\n\n def _timezone_validate():\n \"\"\"\n Validates the user provided timezone is a valid timezone\n \"\"\"\n try:\n import zoneinfo\n except ImportError: # pragma: nocover\n import backports.zoneinfo as zoneinfo\n msgs = []\n if settings.TIME_ZONE not in zoneinfo.available_timezones():\n msgs.append(\n Error(f'Timezone \"{settings.TIME_ZONE}\" is not a valid timezone'),\n )\n return msgs\n\n return _ocrmypdf_settings_check() + _timezone_validate()\n", "path": "src/paperless/checks.py"}]} | 1,837 | 229 |
gh_patches_debug_3959 | rasdani/github-patches | git_diff | great-expectations__great_expectations-5468 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use a cleaner solution for non-truncating division in Python 2
Prefer `from __future__ import division` to `1.*x/y`
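
For illustration (a minimal sketch of Python 2 semantics; on Python 3 true division is already the default, so the import is a no-op):

```python
from __future__ import division  # future imports must sit at the top of the module

print(3 / 2)       # 1.5 with the import on Python 2 (1 without it)
print(3 // 2)      # 1 -- floor division stays explicit when truncation is wanted
print(1. * 3 / 2)  # 1.5 -- the old workaround the issue suggests retiring
```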
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py`
Content:
```
1 # Make sure to include any Expectations your want exported below!
2
3 from .expect_column_values_confidence_for_data_label_to_be_greater_than_or_equal_to_threshold import (
4 ExpectColumnValuesConfidenceForDataLabelToBeGreaterThanOrEqualToThreshold,
5 )
6 from .expect_column_values_confidence_for_data_label_to_be_less_than_or_equal_to_threshold import (
7 ExpectColumnValuesConfidenceForDataLabelToBeLessThanOrEqualToThreshold,
8 )
9 from .expect_column_values_to_be_equal_to_or_greater_than_profile_min import (
10 ExpectColumnValuesToBeEqualToOrGreaterThanProfileMin,
11 )
12 from .expect_column_values_to_be_equal_to_or_less_than_profile_max import (
13 ExpectColumnValuesToBeEqualToOrLessThanProfileMax,
14 )
15 from .expect_column_values_to_be_probabilistically_greater_than_or_equal_to_threshold import (
16 ExpectColumnValuesToBeProbabilisticallyGreaterThanOrEqualToThreshold,
17 )
18 from .expect_profile_numeric_columns_diff_between_threshold_range import (
19 ExpectProfileNumericColumnsDiffBetweenThresholdRange
20 )
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py b/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py
--- a/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py
+++ b/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py
@@ -16,5 +16,5 @@
ExpectColumnValuesToBeProbabilisticallyGreaterThanOrEqualToThreshold,
)
from .expect_profile_numeric_columns_diff_between_threshold_range import (
- ExpectProfileNumericColumnsDiffBetweenThresholdRange
+ ExpectProfileNumericColumnsDiffBetweenThresholdRange,
)
| {"golden_diff": "diff --git a/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py b/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py\n--- a/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py\n+++ b/contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py\n@@ -16,5 +16,5 @@\n ExpectColumnValuesToBeProbabilisticallyGreaterThanOrEqualToThreshold,\n )\n from .expect_profile_numeric_columns_diff_between_threshold_range import (\n- ExpectProfileNumericColumnsDiffBetweenThresholdRange\n+ ExpectProfileNumericColumnsDiffBetweenThresholdRange,\n )\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "# Make sure to include any Expectations your want exported below!\n\nfrom .expect_column_values_confidence_for_data_label_to_be_greater_than_or_equal_to_threshold import (\n ExpectColumnValuesConfidenceForDataLabelToBeGreaterThanOrEqualToThreshold,\n)\nfrom .expect_column_values_confidence_for_data_label_to_be_less_than_or_equal_to_threshold import (\n ExpectColumnValuesConfidenceForDataLabelToBeLessThanOrEqualToThreshold,\n)\nfrom .expect_column_values_to_be_equal_to_or_greater_than_profile_min import (\n ExpectColumnValuesToBeEqualToOrGreaterThanProfileMin,\n)\nfrom .expect_column_values_to_be_equal_to_or_less_than_profile_max import (\n ExpectColumnValuesToBeEqualToOrLessThanProfileMax,\n)\nfrom .expect_column_values_to_be_probabilistically_greater_than_or_equal_to_threshold import (\n ExpectColumnValuesToBeProbabilisticallyGreaterThanOrEqualToThreshold,\n)\nfrom .expect_profile_numeric_columns_diff_between_threshold_range import (\n ExpectProfileNumericColumnsDiffBetweenThresholdRange\n)\n", "path": "contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py"}], "after_files": [{"content": "# Make sure to include any Expectations your want exported below!\n\nfrom .expect_column_values_confidence_for_data_label_to_be_greater_than_or_equal_to_threshold import (\n ExpectColumnValuesConfidenceForDataLabelToBeGreaterThanOrEqualToThreshold,\n)\nfrom .expect_column_values_confidence_for_data_label_to_be_less_than_or_equal_to_threshold import (\n ExpectColumnValuesConfidenceForDataLabelToBeLessThanOrEqualToThreshold,\n)\nfrom .expect_column_values_to_be_equal_to_or_greater_than_profile_min import (\n ExpectColumnValuesToBeEqualToOrGreaterThanProfileMin,\n)\nfrom .expect_column_values_to_be_equal_to_or_less_than_profile_max import (\n ExpectColumnValuesToBeEqualToOrLessThanProfileMax,\n)\nfrom .expect_column_values_to_be_probabilistically_greater_than_or_equal_to_threshold import (\n ExpectColumnValuesToBeProbabilisticallyGreaterThanOrEqualToThreshold,\n)\nfrom .expect_profile_numeric_columns_diff_between_threshold_range import (\n ExpectProfileNumericColumnsDiffBetweenThresholdRange,\n)\n", "path": "contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations/__init__.py"}]} | 544 | 181 |
gh_patches_debug_5751 | rasdani/github-patches | git_diff | ansible__ansible-lint-1128 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[701] No 'galaxy_info' found results in meta/requirements.yml file
### Summary
ansible-lint is reporting `[701] No 'galaxy_info' found` in my `meta/requirements.yml`, a file that, unlike `meta/main.yml`, does not (to my knowledge) support a `galaxy_info` field.
##### Issue Type
- Bug Report
##### Ansible and Ansible Lint details
<!--- Paste verbatim output between triple backticks -->
```console (paste below)
$ ansible --version
ansible 2.10.1
$ ansible-lint --version
ansible-lint 4.3.5
```
- ansible installation method: pipenv (pip)
- ansible-lint installation method: pipenv (pip)
##### OS / ENVIRONMENT
MacOS 10.15.7 (Catalina Latest)
##### STEPS TO REPRODUCE
Using this `meta/requirements.yml`:
```yaml
---
# insert third party deps here. download with:
# ansible-galaxy install -r requirements.yml
# https://docs.ansible.com/ansible/galaxy.html
- name: singleplatform-eng.users
version: v1.2.6
- name: weareinteractive.sudo
version: 1.14.1
- name: geerlingguy.fluentd
version: 1.1.0
```
Note that `meta/main.yml` does include `galaxy_info`, but does not list the roles from requirements.yml as dependencies. This is purposeful: I'm choosing `meta/requirements.yml` over `meta/main.yml` because I prefer that workflow and do not want the roles to run first, as they do when listed in `meta/main.yml`. I'm following the previously linked user guide on this topic.
To reproduce, I simply run ansible-lint directly or via molecule.
##### Desired Behaviour
I would expect ansible-lint not to flag these as issues... unless I'm completely misunderstanding the finding and misreading the documentation associated with this rule.
##### Actual Behaviour
Below are the ansible-lint results when run on my role.
```bash
$ ansible-lint
[701] No 'galaxy_info' found
meta/requirements.yml:7
{'meta/main.yml': {'name': 'singleplatform-eng.users', 'version': 'v1.2.6', '__line__': 7, '__file__': '/Users/tmichael/orgs/tmb/ansible_roles/base/meta/requirements.yml', 'skipped_rules': []}}
[701] No 'galaxy_info' found
meta/requirements.yml:10
{'meta/main.yml': {'name': 'weareinteractive.sudo', 'version': '1.14.1', '__line__': 10, '__file__': '/Users/tmichael/orgs/tmb/ansible_roles/base/meta/requirements.yml'}}
[701] No 'galaxy_info' found
meta/requirements.yml:13
{'meta/main.yml': {'name': 'geerlingguy.fluentd', 'version': '1.1.0', '__line__': 13, '__file__': '/Users/tmichael/orgs/tmb/ansible_roles/base/meta/requirements.yml'}}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansiblelint/rules/MetaMainHasInfoRule.py`
Content:
```
1 # Copyright (c) 2016, Will Thames and contributors
2 # Copyright (c) 2018, Ansible Project
3
4 from ansiblelint.rules import AnsibleLintRule
5
6 META_STR_INFO = (
7 'author',
8 'description'
9 )
10 META_INFO = tuple(list(META_STR_INFO) + [
11 'license',
12 'min_ansible_version',
13 'platforms',
14 ])
15
16
17 def _platform_info_errors_itr(platforms):
18 if not isinstance(platforms, list):
19 yield 'Platforms should be a list of dictionaries'
20 return
21
22 for platform in platforms:
23 if not isinstance(platform, dict):
24 yield 'Platforms should be a list of dictionaries'
25 elif 'name' not in platform:
26 yield 'Platform should contain name'
27
28
29 def _galaxy_info_errors_itr(galaxy_info,
30 info_list=META_INFO,
31 str_info_list=META_STR_INFO):
32 for info in info_list:
33 ginfo = galaxy_info.get(info, False)
34 if ginfo:
35 if info in str_info_list and not isinstance(ginfo, str):
36 yield '{info} should be a string'.format(info=info)
37 elif info == 'platforms':
38 for err in _platform_info_errors_itr(ginfo):
39 yield err
40 else:
41 yield 'Role info should contain {info}'.format(info=info)
42
43
44 class MetaMainHasInfoRule(AnsibleLintRule):
45 id = '701'
46 shortdesc = 'meta/main.yml should contain relevant info'
47 str_info = META_STR_INFO
48 info = META_INFO
49 description = (
50 'meta/main.yml should contain: ``{}``'.format(', '.join(info))
51 )
52 severity = 'HIGH'
53 tags = ['metadata']
54 version_added = 'v4.0.0'
55
56 def matchplay(self, file, data):
57 if file['type'] != 'meta':
58 return False
59
60 meta = {'meta/main.yml': data}
61 galaxy_info = data.get('galaxy_info', False)
62 if galaxy_info:
63 return [(meta, err) for err
64 in _galaxy_info_errors_itr(galaxy_info)]
65
66 return [(meta, "No 'galaxy_info' found")]
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/ansiblelint/rules/MetaMainHasInfoRule.py b/lib/ansiblelint/rules/MetaMainHasInfoRule.py
--- a/lib/ansiblelint/rules/MetaMainHasInfoRule.py
+++ b/lib/ansiblelint/rules/MetaMainHasInfoRule.py
@@ -57,6 +57,11 @@
if file['type'] != 'meta':
return False
+ # since Ansible 2.10 we can add a meta/requirements.yml but
+ # we only want to match on meta/main.yml
+ if not file['path'].endswith('/main.yml'):
+ return False
+
meta = {'meta/main.yml': data}
galaxy_info = data.get('galaxy_info', False)
if galaxy_info:
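A minimal sketch of what the added path guard does, using dicts shaped like the `file` argument that `matchplay` receives (paths are illustrative):

```python
# Only meta/main.yml should reach the galaxy_info validation;
# meta/requirements.yml is skipped by the new endswith() guard.
files = [
    {'type': 'meta', 'path': 'roles/base/meta/main.yml'},
    {'type': 'meta', 'path': 'roles/base/meta/requirements.yml'},
]
for f in files:
    checked = f['type'] == 'meta' and f['path'].endswith('/main.yml')
    print(f['path'], '->', 'checked' if checked else 'skipped')
```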
| {"golden_diff": "diff --git a/lib/ansiblelint/rules/MetaMainHasInfoRule.py b/lib/ansiblelint/rules/MetaMainHasInfoRule.py\n--- a/lib/ansiblelint/rules/MetaMainHasInfoRule.py\n+++ b/lib/ansiblelint/rules/MetaMainHasInfoRule.py\n@@ -57,6 +57,11 @@\n if file['type'] != 'meta':\n return False\n \n+ # since Ansible 2.10 we can add a meta/requirements.yml but\n+ # we only want to match on meta/main.yml\n+ if not file['path'].endswith('/main.yml'):\n+ return False\n+\n meta = {'meta/main.yml': data}\n galaxy_info = data.get('galaxy_info', False)\n if galaxy_info:\n", "issue": "[701] No 'galaxy_info' found results in meta/requirements.yml file\n### Summary\r\n\r\nansible-lint reporting `[701] No 'galaxy_info' found` in my `meta/requirements.yml`, a file that unlike `meta/main.yml` does not (to my knowledge) support a `galaxy_info` field.\r\n\r\n##### Issue Type\r\n\r\n- Bug Report\r\n\r\n##### Ansible and Ansible Lint details\r\n<!--- Paste verbatim output between tripple backticks -->\r\n```console (paste below)\r\n$ ansible --version\r\nansible 2.10.1\r\n\r\n$ ansible-lint --version\r\nansible-lint 4.3.5\r\n```\r\n\r\n- ansible installation method: pipenv (pip)\r\n- ansible-lint installation method: pipenv (pip)\r\n\r\n##### OS / ENVIRONMENT\r\nMacOS 10.15.7 (Catalina Latest)\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nUsing this `meta/requirements.yml`\r\n```bash\r\n---\r\n\r\n# insert third party deps here. download with:\r\n# ansible-galaxy install -r requirements.yml\r\n# https://docs.ansible.com/ansible/galaxy.html\r\n\r\n- name: singleplatform-eng.users\r\n version: v1.2.6\r\n\r\n- name: weareinteractive.sudo\r\n version: 1.14.1\r\n\r\n- name: geerlingguy.fluentd\r\n version: 1.1.0\r\n```\r\n\r\nNote that `meta/main.yml` does include `galaxy_info`, but does not include as dependencies, the roles listed in requirements.yml. This is purposeful because I'm choosing `meta/requirements.yml` instead of `meta/main.yml` because I prefer the workflow and do not want the roles running first, as they do when in `meta/main.yml`. I'm following the previously linked user-guide on this topic.\r\n\r\nTo reproduce, I simply run ansible-lint directly or via molecule.\r\n\r\n##### Desired Behaviour\r\n\r\nI would expect ansible-lint not to flag these as issues... 
unless I'm completely misunderstanding the finding and misreading the documentation associated with this rule.\r\n\r\n##### Actual Behaviour\r\n\r\n\r\nBelow are the ansible-lint results when run on my role.\r\n```bash\r\n$ ansible-lint\r\n[701] No 'galaxy_info' found\r\nmeta/requirements.yml:7\r\n{'meta/main.yml': {'name': 'singleplatform-eng.users', 'version': 'v1.2.6', '__line__': 7, '__file__': '/Users/tmichael/orgs/tmb/ansible_roles/base/meta/requirements.yml', 'skipped_rules': []}}\r\n\r\n[701] No 'galaxy_info' found\r\nmeta/requirements.yml:10\r\n{'meta/main.yml': {'name': 'weareinteractive.sudo', 'version': '1.14.1', '__line__': 10, '__file__': '/Users/tmichael/orgs/tmb/ansible_roles/base/meta/requirements.yml'}}\r\n\r\n[701] No 'galaxy_info' found\r\nmeta/requirements.yml:13\r\n{'meta/main.yml': {'name': 'geerlingguy.fluentd', 'version': '1.1.0', '__line__': 13, '__file__': '/Users/tmichael/orgs/tmb/ansible_roles/base/meta/requirements.yml'}}\r\n```\n", "before_files": [{"content": "# Copyright (c) 2016, Will Thames and contributors\n# Copyright (c) 2018, Ansible Project\n\nfrom ansiblelint.rules import AnsibleLintRule\n\nMETA_STR_INFO = (\n 'author',\n 'description'\n)\nMETA_INFO = tuple(list(META_STR_INFO) + [\n 'license',\n 'min_ansible_version',\n 'platforms',\n])\n\n\ndef _platform_info_errors_itr(platforms):\n if not isinstance(platforms, list):\n yield 'Platforms should be a list of dictionaries'\n return\n\n for platform in platforms:\n if not isinstance(platform, dict):\n yield 'Platforms should be a list of dictionaries'\n elif 'name' not in platform:\n yield 'Platform should contain name'\n\n\ndef _galaxy_info_errors_itr(galaxy_info,\n info_list=META_INFO,\n str_info_list=META_STR_INFO):\n for info in info_list:\n ginfo = galaxy_info.get(info, False)\n if ginfo:\n if info in str_info_list and not isinstance(ginfo, str):\n yield '{info} should be a string'.format(info=info)\n elif info == 'platforms':\n for err in _platform_info_errors_itr(ginfo):\n yield err\n else:\n yield 'Role info should contain {info}'.format(info=info)\n\n\nclass MetaMainHasInfoRule(AnsibleLintRule):\n id = '701'\n shortdesc = 'meta/main.yml should contain relevant info'\n str_info = META_STR_INFO\n info = META_INFO\n description = (\n 'meta/main.yml should contain: ``{}``'.format(', '.join(info))\n )\n severity = 'HIGH'\n tags = ['metadata']\n version_added = 'v4.0.0'\n\n def matchplay(self, file, data):\n if file['type'] != 'meta':\n return False\n\n meta = {'meta/main.yml': data}\n galaxy_info = data.get('galaxy_info', False)\n if galaxy_info:\n return [(meta, err) for err\n in _galaxy_info_errors_itr(galaxy_info)]\n\n return [(meta, \"No 'galaxy_info' found\")]\n", "path": "lib/ansiblelint/rules/MetaMainHasInfoRule.py"}], "after_files": [{"content": "# Copyright (c) 2016, Will Thames and contributors\n# Copyright (c) 2018, Ansible Project\n\nfrom ansiblelint.rules import AnsibleLintRule\n\nMETA_STR_INFO = (\n 'author',\n 'description'\n)\nMETA_INFO = tuple(list(META_STR_INFO) + [\n 'license',\n 'min_ansible_version',\n 'platforms',\n])\n\n\ndef _platform_info_errors_itr(platforms):\n if not isinstance(platforms, list):\n yield 'Platforms should be a list of dictionaries'\n return\n\n for platform in platforms:\n if not isinstance(platform, dict):\n yield 'Platforms should be a list of dictionaries'\n elif 'name' not in platform:\n yield 'Platform should contain name'\n\n\ndef _galaxy_info_errors_itr(galaxy_info,\n info_list=META_INFO,\n str_info_list=META_STR_INFO):\n for info in info_list:\n 
ginfo = galaxy_info.get(info, False)\n if ginfo:\n if info in str_info_list and not isinstance(ginfo, str):\n yield '{info} should be a string'.format(info=info)\n elif info == 'platforms':\n for err in _platform_info_errors_itr(ginfo):\n yield err\n else:\n yield 'Role info should contain {info}'.format(info=info)\n\n\nclass MetaMainHasInfoRule(AnsibleLintRule):\n id = '701'\n shortdesc = 'meta/main.yml should contain relevant info'\n str_info = META_STR_INFO\n info = META_INFO\n description = (\n 'meta/main.yml should contain: ``{}``'.format(', '.join(info))\n )\n severity = 'HIGH'\n tags = ['metadata']\n version_added = 'v4.0.0'\n\n def matchplay(self, file, data):\n if file['type'] != 'meta':\n return False\n\n # since Ansible 2.10 we can add a meta/requirements.yml but\n # we only want to match on meta/main.yml\n if not file['path'].endswith('/main.yml'):\n return False\n\n meta = {'meta/main.yml': data}\n galaxy_info = data.get('galaxy_info', False)\n if galaxy_info:\n return [(meta, err) for err\n in _galaxy_info_errors_itr(galaxy_info)]\n\n return [(meta, \"No 'galaxy_info' found\")]\n", "path": "lib/ansiblelint/rules/MetaMainHasInfoRule.py"}]} | 1,576 | 169 |
gh_patches_debug_32180 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3459 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Demo mode results in 5XX
## Description
<!-- A clear and concise description of what the bug is. -->
Mathesar is broken (as of 0.1.4) for Demo Mode. It doesn't load, and just says "Server Error (500)" instead.
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Demo mode should work.
## To Reproduce
<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->
Change the `.env` file according to the comment to use the demo mode settings, try to build and start mathesar (dev environment is fine)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/install.py`
Content:
```
1 """
2 This script installs functions and types for Mathesar onto the configured DB.
3 """
4 import getopt
5 import os
6 import sys
7
8 import django
9 from django.core import management
10 from decouple import config as decouple_config
11 from django.conf import settings
12 from django.db.utils import IntegrityError
13 from sqlalchemy.exc import OperationalError
14 from db import install
15
16
17 def main(skip_static_collection=False):
18 # skip_confirm is temporarily enabled by default as we don't have any use
19 # for interactive prompts with docker only deployments
20 skip_confirm = True
21 (opts, _) = getopt.getopt(sys.argv[1:], ":s", ["skip-confirm"])
22 for (opt, value) in opts:
23 if (opt == "-s") or (opt == "--skip-confirm"):
24 skip_confirm = True
25 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.production")
26 django.setup()
27 management.call_command('migrate')
28 debug_mode = decouple_config('DEBUG', default=False, cast=bool)
29 #
30 if not debug_mode and not skip_static_collection:
31 management.call_command('collectstatic', '--noinput', '--clear')
32 print("------------Setting up User Databases------------")
33 django_db_key = decouple_config('DJANGO_DATABASE_KEY', default="default")
34 user_databases = [key for key in settings.DATABASES if key != django_db_key]
35 for database_key in user_databases:
36 try:
37 install_on_db_with_key(database_key, skip_confirm)
38 except IntegrityError:
39 continue
40
41
42 def install_on_db_with_key(database_key, skip_confirm):
43 from mathesar.models.base import Database
44 db_model = Database.create_from_settings_key(database_key)
45 db_model.save()
46 try:
47 install.install_mathesar(
48 database_name=db_model.db_name,
49 hostname=db_model.host,
50 username=db_model.username,
51 password=db_model.password,
52 port=db_model.port,
53 skip_confirm=skip_confirm
54 )
55 except OperationalError as e:
56 db_model.delete()
57 raise e
58
59
60 if __name__ == "__main__":
61 main()
62
```
Path: `demo/settings.py`
Content:
```
1 from config.settings.production import * # noqa
2 from config.settings import * # noqa
3 from decouple import config as decouple_config
4
5 INSTALLED_APPS += [ # noqa
6 "demo"
7 ]
8
9 MIDDLEWARE += [ # noqa
10 "demo.middleware.LiveDemoModeMiddleware",
11 ]
12
13 MATHESAR_LIVE_DEMO = True
14 MATHESAR_LIVE_DEMO_USERNAME = decouple_config('MATHESAR_LIVE_DEMO_USERNAME', default=None)
15 MATHESAR_LIVE_DEMO_PASSWORD = decouple_config('MATHESAR_LIVE_DEMO_PASSWORD', default=None)
16
17 MATHESAR_DEMO_TEMPLATE = 'mathesar_demo_template'
18 MATHESAR_DEMO_ARXIV_LOG_PATH = decouple_config(
19 'MATHESAR_DEMO_ARXIV_LOG_PATH',
20 default='/var/lib/mathesar/demo/arxiv_db_schema_log'
21 )
22 BASE_TEMPLATE_ADDITIONAL_SCRIPT_TEMPLATES += ['demo/analytics.html'] # noqa
23 ROOT_URLCONF = "demo.urls"
24
```
Path: `demo/management/commands/setup_demo_template_db.py`
Content:
```
1 from sqlalchemy import text
2 from sqlalchemy.exc import OperationalError
3 from django.conf import settings
4 from django.core.management import BaseCommand
5
6 from db.install import install_mathesar
7 from demo.install.datasets import load_datasets
8 from mathesar.database.base import create_mathesar_engine
9 from mathesar.models.base import Database
10
11
12 class Command(BaseCommand):
13 help = 'Initialize the demo template database.'
14
15 def handle(self, *args, **options):
16 _setup_demo_template_db()
17
18
19 def _setup_demo_template_db():
20 print("Initializing demo template database...")
21
22 template_db_name = settings.MATHESAR_DEMO_TEMPLATE
23 django_model = Database.current_objects.get(name=settings.DATABASES["default"]["NAME"])
24 root_engine = create_mathesar_engine(django_model)
25 with root_engine.connect() as conn:
26 conn.execution_options(isolation_level="AUTOCOMMIT")
27 conn.execute(text(f"DROP DATABASE IF EXISTS {template_db_name} WITH (FORCE)"))
28 root_engine.dispose()
29 db_model, _ = Database.current_objects.get_or_create(
30 name=template_db_name,
31 defaults={
32 'db_name': template_db_name,
33 'username': django_model.username,
34 'password': django_model.password,
35 'host': django_model.host,
36 'port': django_model.port
37 }
38 )
39 try:
40 install_mathesar(
41 database_name=template_db_name,
42 hostname=db_model.host,
43 username=db_model.username,
44 password=db_model.password,
45 port=db_model.port,
46 skip_confirm=True
47 )
48 except OperationalError as e:
49 db_model.delete()
50 raise e
51 user_engine = create_mathesar_engine(db_model)
52 load_datasets(user_engine)
53 user_engine.dispose()
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/demo/management/commands/setup_demo_template_db.py b/demo/management/commands/setup_demo_template_db.py
--- a/demo/management/commands/setup_demo_template_db.py
+++ b/demo/management/commands/setup_demo_template_db.py
@@ -20,7 +20,7 @@
print("Initializing demo template database...")
template_db_name = settings.MATHESAR_DEMO_TEMPLATE
- django_model = Database.current_objects.get(name=settings.DATABASES["default"]["NAME"])
+ django_model = Database.create_from_settings_key("default")
root_engine = create_mathesar_engine(django_model)
with root_engine.connect() as conn:
conn.execution_options(isolation_level="AUTOCOMMIT")
diff --git a/demo/settings.py b/demo/settings.py
--- a/demo/settings.py
+++ b/demo/settings.py
@@ -1,5 +1,4 @@
-from config.settings.production import * # noqa
-from config.settings import * # noqa
+from config.settings.common_settings import * # noqa
from decouple import config as decouple_config
INSTALLED_APPS += [ # noqa
@@ -10,6 +9,7 @@
"demo.middleware.LiveDemoModeMiddleware",
]
+MATHESAR_MODE = 'PRODUCTION'
MATHESAR_LIVE_DEMO = True
MATHESAR_LIVE_DEMO_USERNAME = decouple_config('MATHESAR_LIVE_DEMO_USERNAME', default=None)
MATHESAR_LIVE_DEMO_PASSWORD = decouple_config('MATHESAR_LIVE_DEMO_PASSWORD', default=None)
diff --git a/mathesar/install.py b/mathesar/install.py
--- a/mathesar/install.py
+++ b/mathesar/install.py
@@ -37,6 +37,14 @@
install_on_db_with_key(database_key, skip_confirm)
except IntegrityError:
continue
+ if getattr(settings, 'MATHESAR_LIVE_DEMO', False) is True:
+ management.call_command(
+ 'createsuperuser',
+ '--no-input',
+ '--username', 'demo',
+ '--email', '[email protected]',
+ )
+ management.call_command('setup_demo_template_db')
def install_on_db_with_key(database_key, skip_confirm):
| {"golden_diff": "diff --git a/demo/management/commands/setup_demo_template_db.py b/demo/management/commands/setup_demo_template_db.py\n--- a/demo/management/commands/setup_demo_template_db.py\n+++ b/demo/management/commands/setup_demo_template_db.py\n@@ -20,7 +20,7 @@\n print(\"Initializing demo template database...\")\n \n template_db_name = settings.MATHESAR_DEMO_TEMPLATE\n- django_model = Database.current_objects.get(name=settings.DATABASES[\"default\"][\"NAME\"])\n+ django_model = Database.create_from_settings_key(\"default\")\n root_engine = create_mathesar_engine(django_model)\n with root_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\ndiff --git a/demo/settings.py b/demo/settings.py\n--- a/demo/settings.py\n+++ b/demo/settings.py\n@@ -1,5 +1,4 @@\n-from config.settings.production import * # noqa\n-from config.settings import * # noqa\n+from config.settings.common_settings import * # noqa\n from decouple import config as decouple_config\n \n INSTALLED_APPS += [ # noqa\n@@ -10,6 +9,7 @@\n \"demo.middleware.LiveDemoModeMiddleware\",\n ]\n \n+MATHESAR_MODE = 'PRODUCTION'\n MATHESAR_LIVE_DEMO = True\n MATHESAR_LIVE_DEMO_USERNAME = decouple_config('MATHESAR_LIVE_DEMO_USERNAME', default=None)\n MATHESAR_LIVE_DEMO_PASSWORD = decouple_config('MATHESAR_LIVE_DEMO_PASSWORD', default=None)\ndiff --git a/mathesar/install.py b/mathesar/install.py\n--- a/mathesar/install.py\n+++ b/mathesar/install.py\n@@ -37,6 +37,14 @@\n install_on_db_with_key(database_key, skip_confirm)\n except IntegrityError:\n continue\n+ if getattr(settings, 'MATHESAR_LIVE_DEMO', False) is True:\n+ management.call_command(\n+ 'createsuperuser',\n+ '--no-input',\n+ '--username', 'demo',\n+ '--email', '[email protected]',\n+ )\n+ management.call_command('setup_demo_template_db')\n \n \n def install_on_db_with_key(database_key, skip_confirm):\n", "issue": "Demo mode results in 5XX\n## Description\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nMathesar is broken (as of 0.1.4) for Demo Mode. It doesn't load, and just says \"Server Error (500)\" instead.\r\n\r\n## Expected behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\nDemo mode should work.\r\n\r\n## To Reproduce\r\n<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. 
-->\r\n\r\nChange the `.env` file according to the comment to use the demo mode settings, try to build and start mathesar (dev environment is fine)\n", "before_files": [{"content": "\"\"\"\nThis script installs functions and types for Mathesar onto the configured DB.\n\"\"\"\nimport getopt\nimport os\nimport sys\n\nimport django\nfrom django.core import management\nfrom decouple import config as decouple_config\nfrom django.conf import settings\nfrom django.db.utils import IntegrityError\nfrom sqlalchemy.exc import OperationalError\nfrom db import install\n\n\ndef main(skip_static_collection=False):\n # skip_confirm is temporarily enabled by default as we don't have any use\n # for interactive prompts with docker only deployments\n skip_confirm = True\n (opts, _) = getopt.getopt(sys.argv[1:], \":s\", [\"skip-confirm\"])\n for (opt, value) in opts:\n if (opt == \"-s\") or (opt == \"--skip-confirm\"):\n skip_confirm = True\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"config.settings.production\")\n django.setup()\n management.call_command('migrate')\n debug_mode = decouple_config('DEBUG', default=False, cast=bool)\n #\n if not debug_mode and not skip_static_collection:\n management.call_command('collectstatic', '--noinput', '--clear')\n print(\"------------Setting up User Databases------------\")\n django_db_key = decouple_config('DJANGO_DATABASE_KEY', default=\"default\")\n user_databases = [key for key in settings.DATABASES if key != django_db_key]\n for database_key in user_databases:\n try:\n install_on_db_with_key(database_key, skip_confirm)\n except IntegrityError:\n continue\n\n\ndef install_on_db_with_key(database_key, skip_confirm):\n from mathesar.models.base import Database\n db_model = Database.create_from_settings_key(database_key)\n db_model.save()\n try:\n install.install_mathesar(\n database_name=db_model.db_name,\n hostname=db_model.host,\n username=db_model.username,\n password=db_model.password,\n port=db_model.port,\n skip_confirm=skip_confirm\n )\n except OperationalError as e:\n db_model.delete()\n raise e\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "mathesar/install.py"}, {"content": "from config.settings.production import * # noqa\nfrom config.settings import * # noqa\nfrom decouple import config as decouple_config\n\nINSTALLED_APPS += [ # noqa\n \"demo\"\n]\n\nMIDDLEWARE += [ # noqa\n \"demo.middleware.LiveDemoModeMiddleware\",\n]\n\nMATHESAR_LIVE_DEMO = True\nMATHESAR_LIVE_DEMO_USERNAME = decouple_config('MATHESAR_LIVE_DEMO_USERNAME', default=None)\nMATHESAR_LIVE_DEMO_PASSWORD = decouple_config('MATHESAR_LIVE_DEMO_PASSWORD', default=None)\n\nMATHESAR_DEMO_TEMPLATE = 'mathesar_demo_template'\nMATHESAR_DEMO_ARXIV_LOG_PATH = decouple_config(\n 'MATHESAR_DEMO_ARXIV_LOG_PATH',\n default='/var/lib/mathesar/demo/arxiv_db_schema_log'\n)\nBASE_TEMPLATE_ADDITIONAL_SCRIPT_TEMPLATES += ['demo/analytics.html'] # noqa\nROOT_URLCONF = \"demo.urls\"\n", "path": "demo/settings.py"}, {"content": "from sqlalchemy import text\nfrom sqlalchemy.exc import OperationalError\nfrom django.conf import settings\nfrom django.core.management import BaseCommand\n\nfrom db.install import install_mathesar\nfrom demo.install.datasets import load_datasets\nfrom mathesar.database.base import create_mathesar_engine\nfrom mathesar.models.base import Database\n\n\nclass Command(BaseCommand):\n help = 'Initialize the demo template database.'\n\n def handle(self, *args, **options):\n _setup_demo_template_db()\n\n\ndef _setup_demo_template_db():\n print(\"Initializing demo template 
database...\")\n\n template_db_name = settings.MATHESAR_DEMO_TEMPLATE\n django_model = Database.current_objects.get(name=settings.DATABASES[\"default\"][\"NAME\"])\n root_engine = create_mathesar_engine(django_model)\n with root_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\n conn.execute(text(f\"DROP DATABASE IF EXISTS {template_db_name} WITH (FORCE)\"))\n root_engine.dispose()\n db_model, _ = Database.current_objects.get_or_create(\n name=template_db_name,\n defaults={\n 'db_name': template_db_name,\n 'username': django_model.username,\n 'password': django_model.password,\n 'host': django_model.host,\n 'port': django_model.port\n }\n )\n try:\n install_mathesar(\n database_name=template_db_name,\n hostname=db_model.host,\n username=db_model.username,\n password=db_model.password,\n port=db_model.port,\n skip_confirm=True\n )\n except OperationalError as e:\n db_model.delete()\n raise e\n user_engine = create_mathesar_engine(db_model)\n load_datasets(user_engine)\n user_engine.dispose()\n", "path": "demo/management/commands/setup_demo_template_db.py"}], "after_files": [{"content": "\"\"\"\nThis script installs functions and types for Mathesar onto the configured DB.\n\"\"\"\nimport getopt\nimport os\nimport sys\n\nimport django\nfrom django.core import management\nfrom decouple import config as decouple_config\nfrom django.conf import settings\nfrom django.db.utils import IntegrityError\nfrom sqlalchemy.exc import OperationalError\nfrom db import install\n\n\ndef main(skip_static_collection=False):\n # skip_confirm is temporarily enabled by default as we don't have any use\n # for interactive prompts with docker only deployments\n skip_confirm = True\n (opts, _) = getopt.getopt(sys.argv[1:], \":s\", [\"skip-confirm\"])\n for (opt, value) in opts:\n if (opt == \"-s\") or (opt == \"--skip-confirm\"):\n skip_confirm = True\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"config.settings.production\")\n django.setup()\n management.call_command('migrate')\n debug_mode = decouple_config('DEBUG', default=False, cast=bool)\n #\n if not debug_mode and not skip_static_collection:\n management.call_command('collectstatic', '--noinput', '--clear')\n print(\"------------Setting up User Databases------------\")\n django_db_key = decouple_config('DJANGO_DATABASE_KEY', default=\"default\")\n user_databases = [key for key in settings.DATABASES if key != django_db_key]\n for database_key in user_databases:\n try:\n install_on_db_with_key(database_key, skip_confirm)\n except IntegrityError:\n continue\n if getattr(settings, 'MATHESAR_LIVE_DEMO', False) is True:\n management.call_command(\n 'createsuperuser',\n '--no-input',\n '--username', 'demo',\n '--email', '[email protected]',\n )\n management.call_command('setup_demo_template_db')\n\n\ndef install_on_db_with_key(database_key, skip_confirm):\n from mathesar.models.base import Database\n db_model = Database.create_from_settings_key(database_key)\n db_model.save()\n try:\n install.install_mathesar(\n database_name=db_model.db_name,\n hostname=db_model.host,\n username=db_model.username,\n password=db_model.password,\n port=db_model.port,\n skip_confirm=skip_confirm\n )\n except OperationalError as e:\n db_model.delete()\n raise e\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "mathesar/install.py"}, {"content": "from config.settings.common_settings import * # noqa\nfrom decouple import config as decouple_config\n\nINSTALLED_APPS += [ # noqa\n \"demo\"\n]\n\nMIDDLEWARE += [ # noqa\n 
\"demo.middleware.LiveDemoModeMiddleware\",\n]\n\nMATHESAR_MODE = 'PRODUCTION'\nMATHESAR_LIVE_DEMO = True\nMATHESAR_LIVE_DEMO_USERNAME = decouple_config('MATHESAR_LIVE_DEMO_USERNAME', default=None)\nMATHESAR_LIVE_DEMO_PASSWORD = decouple_config('MATHESAR_LIVE_DEMO_PASSWORD', default=None)\n\nMATHESAR_DEMO_TEMPLATE = 'mathesar_demo_template'\nMATHESAR_DEMO_ARXIV_LOG_PATH = decouple_config(\n 'MATHESAR_DEMO_ARXIV_LOG_PATH',\n default='/var/lib/mathesar/demo/arxiv_db_schema_log'\n)\nBASE_TEMPLATE_ADDITIONAL_SCRIPT_TEMPLATES += ['demo/analytics.html'] # noqa\nROOT_URLCONF = \"demo.urls\"\n", "path": "demo/settings.py"}, {"content": "from sqlalchemy import text\nfrom sqlalchemy.exc import OperationalError\nfrom django.conf import settings\nfrom django.core.management import BaseCommand\n\nfrom db.install import install_mathesar\nfrom demo.install.datasets import load_datasets\nfrom mathesar.database.base import create_mathesar_engine\nfrom mathesar.models.base import Database\n\n\nclass Command(BaseCommand):\n help = 'Initialize the demo template database.'\n\n def handle(self, *args, **options):\n _setup_demo_template_db()\n\n\ndef _setup_demo_template_db():\n print(\"Initializing demo template database...\")\n\n template_db_name = settings.MATHESAR_DEMO_TEMPLATE\n django_model = Database.create_from_settings_key(\"default\")\n root_engine = create_mathesar_engine(django_model)\n with root_engine.connect() as conn:\n conn.execution_options(isolation_level=\"AUTOCOMMIT\")\n conn.execute(text(f\"DROP DATABASE IF EXISTS {template_db_name} WITH (FORCE)\"))\n root_engine.dispose()\n db_model, _ = Database.current_objects.get_or_create(\n name=template_db_name,\n defaults={\n 'db_name': template_db_name,\n 'username': django_model.username,\n 'password': django_model.password,\n 'host': django_model.host,\n 'port': django_model.port\n }\n )\n try:\n install_mathesar(\n database_name=template_db_name,\n hostname=db_model.host,\n username=db_model.username,\n password=db_model.password,\n port=db_model.port,\n skip_confirm=True\n )\n except OperationalError as e:\n db_model.delete()\n raise e\n user_engine = create_mathesar_engine(db_model)\n load_datasets(user_engine)\n user_engine.dispose()\n", "path": "demo/management/commands/setup_demo_template_db.py"}]} | 1,714 | 481 |
gh_patches_debug_38209 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-445 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve documentation of cms rules
Explain the rules module and how it interacts with our permission management. Add docstrings of the following format:
```
"""
[Summary]
:param [ParamName]: [ParamDescription], defaults to [DefaultParamVal]
:type [ParamName]: [ParamType](, optional)
...
:raises [ErrorType]: [ErrorDescription]
...
:return: [ReturnDescription]
:rtype: [ReturnType]
"""
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cms/rules.py`
Content:
```
1 from rules import add_perm, predicate
2
3
4 # Predicates
5
6 @predicate
7 def is_page_editor(user, page):
8 if not page:
9 return False
10 return user in page.editors.all()
11
12 @predicate
13 def is_page_publisher(user, page):
14 if not page:
15 return False
16 return user in page.publishers.all()
17
18 @predicate
19 # pylint: disable=unused-argument
20 def can_edit_all_pages(user, page):
21 return user.has_perm('cms.edit_pages')
22
23 @predicate
24 # pylint: disable=unused-argument
25 def can_publish_all_pages(user, page):
26 return user.has_perm('cms.publish_pages')
27
28
29 # Permissions
30
31 add_perm('cms.edit_page', can_edit_all_pages | is_page_editor | can_publish_all_pages | is_page_publisher)
32 add_perm('cms.publish_page', can_publish_all_pages | is_page_publisher)
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cms/rules.py b/src/cms/rules.py
--- a/src/cms/rules.py
+++ b/src/cms/rules.py
@@ -1,3 +1,22 @@
+"""
+We use `django-rules <https://pypi.org/project/rules/>`_ to add custom permissions for specific pages.
+
+For a given user and page, the following permissions are added:
+
+* ``cms.edit_page`` if one of the following predicates return true:
+
+ * :func:`~cms.rules.can_edit_all_pages`
+ * :func:`~cms.rules.is_page_editor`
+ * :func:`~cms.rules.can_publish_all_pages`
+ * :func:`~cms.rules.is_page_publisher`
+
+* ``cms.publish_page`` if one of the following predicates return true:
+
+ * :func:`~cms.rules.can_publish_all_pages`
+ * :func:`~cms.rules.is_page_publisher`
+
+See the project's `README <https://github.com/dfunckt/django-rules/blob/master/README.rst>`_ to learn more.
+"""
from rules import add_perm, predicate
@@ -5,12 +24,36 @@
@predicate
def is_page_editor(user, page):
+ """
+ This predicate checks whether the given user is one of the editors of the given page.
+
+ :param user: The user who's permission should be checked
+ :type user: ~django.contrib.auth.models.User
+
+ :param page: The requested page
+ :type page: ~cms.models.pages.page.Page
+
+ :return: Whether or not ``user`` is an editor of ``page``
+ :rtype: bool
+ """
if not page:
return False
return user in page.editors.all()
@predicate
def is_page_publisher(user, page):
+ """
+ This predicate checks whether the given user is one of the publishers of the given page.
+
+ :param user: The user who's permission should be checked
+ :type user: ~django.contrib.auth.models.User
+
+ :param page: The requested page
+ :type page: ~cms.models.pages.page.Page
+
+ :return: Whether or not ``user`` is a publisher of ``page``
+ :rtype: bool
+ """
if not page:
return False
return user in page.publishers.all()
@@ -18,11 +61,35 @@
@predicate
# pylint: disable=unused-argument
def can_edit_all_pages(user, page):
+ """
+ This predicate checks whether the given user can edit all pages.
+
+ :param user: The user who's permission should be checked
+ :type user: ~django.contrib.auth.models.User
+
+ :param page: Unused page parameter (the function signature must match the other predicates)
+ :type page: ~cms.models.pages.page.Page
+
+ :return: Whether or not ``user`` can edit all pages
+ :rtype: bool
+ """
return user.has_perm('cms.edit_pages')
@predicate
# pylint: disable=unused-argument
def can_publish_all_pages(user, page):
+ """
+ This predicate checks whether the given user can publish all pages.
+
+ :param user: The user who's permission should be checked
+ :type user: ~django.contrib.auth.models.User
+
+ :param page: Unused page parameter (the function signature must match the other predicates)
+ :type page: ~cms.models.pages.page.Page
+
+ :return: Whether or not ``user`` can publish all pages
+ :rtype: bool
+ """
return user.has_perm('cms.publish_pages')
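For readers unfamiliar with `django-rules`, here is a self-contained sketch of the predicate/permission mechanics the patch documents, independent of Django models (the permission name and owner check are made up):

```python
import rules

@rules.predicate
def is_owner(user, obj):
    # Predicates are plain functions of (user, obj); obj may be None.
    return obj is not None and obj.get('owner') == user

rules.add_perm('demo.edit_thing', is_owner)

thing = {'owner': 'alice'}
print(rules.has_perm('demo.edit_thing', 'alice', thing))  # True
print(rules.has_perm('demo.edit_thing', 'bob', thing))    # False
```

In the CMS itself the same checks run through Django's normal `user.has_perm('cms.edit_page', page)` call once the rules authentication backend is enabled.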
| {"golden_diff": "diff --git a/src/cms/rules.py b/src/cms/rules.py\n--- a/src/cms/rules.py\n+++ b/src/cms/rules.py\n@@ -1,3 +1,22 @@\n+\"\"\"\n+We use `django-rules <https://pypi.org/project/rules/>`_ to add custom permissions for specific pages.\n+\n+For a given user and page, the following permissions are added:\n+\n+* ``cms.edit_page`` if one of the following predicates return true:\n+\n+ * :func:`~cms.rules.can_edit_all_pages`\n+ * :func:`~cms.rules.is_page_editor`\n+ * :func:`~cms.rules.can_publish_all_pages`\n+ * :func:`~cms.rules.is_page_publisher`\n+\n+* ``cms.publish_page`` if one of the following predicates return true:\n+\n+ * :func:`~cms.rules.can_publish_all_pages`\n+ * :func:`~cms.rules.is_page_publisher`\n+\n+See the project's `README <https://github.com/dfunckt/django-rules/blob/master/README.rst>`_ to learn more.\n+\"\"\"\n from rules import add_perm, predicate\n \n \n@@ -5,12 +24,36 @@\n \n @predicate\n def is_page_editor(user, page):\n+ \"\"\"\n+ This predicate checks whether the given user is one of the editors of the given page.\n+\n+ :param user: The user who's permission should be checked\n+ :type user: ~django.contrib.auth.models.User\n+\n+ :param page: The requested page\n+ :type page: ~cms.models.pages.page.Page\n+\n+ :return: Whether or not ``user`` is an editor of ``page``\n+ :rtype: bool\n+ \"\"\"\n if not page:\n return False\n return user in page.editors.all()\n \n @predicate\n def is_page_publisher(user, page):\n+ \"\"\"\n+ This predicate checks whether the given user is one of the publishers of the given page.\n+\n+ :param user: The user who's permission should be checked\n+ :type user: ~django.contrib.auth.models.User\n+\n+ :param page: The requested page\n+ :type page: ~cms.models.pages.page.Page\n+\n+ :return: Whether or not ``user`` is a publisher of ``page``\n+ :rtype: bool\n+ \"\"\"\n if not page:\n return False\n return user in page.publishers.all()\n@@ -18,11 +61,35 @@\n @predicate\n # pylint: disable=unused-argument\n def can_edit_all_pages(user, page):\n+ \"\"\"\n+ This predicate checks whether the given user can edit all pages.\n+\n+ :param user: The user who's permission should be checked\n+ :type user: ~django.contrib.auth.models.User\n+\n+ :param page: Unused page parameter (the function signature must match the other predicates)\n+ :type page: ~cms.models.pages.page.Page\n+\n+ :return: Whether or not ``user`` can edit all pages\n+ :rtype: bool\n+ \"\"\"\n return user.has_perm('cms.edit_pages')\n \n @predicate\n # pylint: disable=unused-argument\n def can_publish_all_pages(user, page):\n+ \"\"\"\n+ This predicate checks whether the given user can publish all pages.\n+\n+ :param user: The user who's permission should be checked\n+ :type user: ~django.contrib.auth.models.User\n+\n+ :param page: Unused page parameter (the function signature must match the other predicates)\n+ :type page: ~cms.models.pages.page.Page\n+\n+ :return: Whether or not ``user`` can publish all pages\n+ :rtype: bool\n+ \"\"\"\n return user.has_perm('cms.publish_pages')\n", "issue": "Improve documentation of cms rules\nExplain the rules module and how it interacts with our permission management. 
Add docstrings of the following format:\r\n```\r\n\"\"\"\r\n[Summary]\r\n\r\n:param [ParamName]: [ParamDescription], defaults to [DefaultParamVal]\r\n:type [ParamName]: [ParamType](, optional)\r\n...\r\n:raises [ErrorType]: [ErrorDescription]\r\n...\r\n:return: [ReturnDescription]\r\n:rtype: [ReturnType]\r\n\"\"\"\r\n```\nImprove documentation of cms rules\nExplain the rules module and how it interacts with our permission management. Add docstrings of the following format:\r\n```\r\n\"\"\"\r\n[Summary]\r\n\r\n:param [ParamName]: [ParamDescription], defaults to [DefaultParamVal]\r\n:type [ParamName]: [ParamType](, optional)\r\n...\r\n:raises [ErrorType]: [ErrorDescription]\r\n...\r\n:return: [ReturnDescription]\r\n:rtype: [ReturnType]\r\n\"\"\"\r\n```\n", "before_files": [{"content": "from rules import add_perm, predicate\n\n\n# Predicates\n\n@predicate\ndef is_page_editor(user, page):\n if not page:\n return False\n return user in page.editors.all()\n\n@predicate\ndef is_page_publisher(user, page):\n if not page:\n return False\n return user in page.publishers.all()\n\n@predicate\n# pylint: disable=unused-argument\ndef can_edit_all_pages(user, page):\n return user.has_perm('cms.edit_pages')\n\n@predicate\n# pylint: disable=unused-argument\ndef can_publish_all_pages(user, page):\n return user.has_perm('cms.publish_pages')\n\n\n# Permissions\n\nadd_perm('cms.edit_page', can_edit_all_pages | is_page_editor | can_publish_all_pages | is_page_publisher)\nadd_perm('cms.publish_page', can_publish_all_pages | is_page_publisher)\n", "path": "src/cms/rules.py"}], "after_files": [{"content": "\"\"\"\nWe use `django-rules <https://pypi.org/project/rules/>`_ to add custom permissions for specific pages.\n\nFor a given user and page, the following permissions are added:\n\n* ``cms.edit_page`` if one of the following predicates return true:\n\n * :func:`~cms.rules.can_edit_all_pages`\n * :func:`~cms.rules.is_page_editor`\n * :func:`~cms.rules.can_publish_all_pages`\n * :func:`~cms.rules.is_page_publisher`\n\n* ``cms.publish_page`` if one of the following predicates return true:\n\n * :func:`~cms.rules.can_publish_all_pages`\n * :func:`~cms.rules.is_page_publisher`\n\nSee the project's `README <https://github.com/dfunckt/django-rules/blob/master/README.rst>`_ to learn more.\n\"\"\"\nfrom rules import add_perm, predicate\n\n\n# Predicates\n\n@predicate\ndef is_page_editor(user, page):\n \"\"\"\n This predicate checks whether the given user is one of the editors of the given page.\n\n :param user: The user who's permission should be checked\n :type user: ~django.contrib.auth.models.User\n\n :param page: The requested page\n :type page: ~cms.models.pages.page.Page\n\n :return: Whether or not ``user`` is an editor of ``page``\n :rtype: bool\n \"\"\"\n if not page:\n return False\n return user in page.editors.all()\n\n@predicate\ndef is_page_publisher(user, page):\n \"\"\"\n This predicate checks whether the given user is one of the publishers of the given page.\n\n :param user: The user who's permission should be checked\n :type user: ~django.contrib.auth.models.User\n\n :param page: The requested page\n :type page: ~cms.models.pages.page.Page\n\n :return: Whether or not ``user`` is a publisher of ``page``\n :rtype: bool\n \"\"\"\n if not page:\n return False\n return user in page.publishers.all()\n\n@predicate\n# pylint: disable=unused-argument\ndef can_edit_all_pages(user, page):\n \"\"\"\n This predicate checks whether the given user can edit all pages.\n\n :param user: The user who's permission should be 
checked\n :type user: ~django.contrib.auth.models.User\n\n :param page: Unused page parameter (the function signature must match the other predicates)\n :type page: ~cms.models.pages.page.Page\n\n :return: Whether or not ``user`` can edit all pages\n :rtype: bool\n \"\"\"\n return user.has_perm('cms.edit_pages')\n\n@predicate\n# pylint: disable=unused-argument\ndef can_publish_all_pages(user, page):\n \"\"\"\n This predicate checks whether the given user can publish all pages.\n\n :param user: The user who's permission should be checked\n :type user: ~django.contrib.auth.models.User\n\n :param page: Unused page parameter (the function signature must match the other predicates)\n :type page: ~cms.models.pages.page.Page\n\n :return: Whether or not ``user`` can publish all pages\n :rtype: bool\n \"\"\"\n return user.has_perm('cms.publish_pages')\n\n\n# Permissions\n\nadd_perm('cms.edit_page', can_edit_all_pages | is_page_editor | can_publish_all_pages | is_page_publisher)\nadd_perm('cms.publish_page', can_publish_all_pages | is_page_publisher)\n", "path": "src/cms/rules.py"}]} | 680 | 807 |
gh_patches_debug_10614 | rasdani/github-patches | git_diff | getredash__redash-2134 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User can't download dataset before saving query
Because the query results URL contains the query id, the user can't download the dataset before the query is saved.
We need to allow addressing query results without a query id.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/handlers/api.py`
Content:
```
1 from flask_restful import Api
2 from werkzeug.wrappers import Response
3 from flask import make_response
4
5 from redash.utils import json_dumps
6 from redash.handlers.base import org_scoped_rule
7 from redash.handlers.permissions import ObjectPermissionsListResource, CheckPermissionResource
8 from redash.handlers.alerts import AlertResource, AlertListResource, AlertSubscriptionListResource, AlertSubscriptionResource
9 from redash.handlers.dashboards import DashboardListResource, RecentDashboardsResource, DashboardResource, DashboardShareResource, PublicDashboardResource
10 from redash.handlers.data_sources import DataSourceTypeListResource, DataSourceListResource, DataSourceSchemaResource, DataSourceResource, DataSourcePauseResource, DataSourceTestResource
11 from redash.handlers.events import EventResource
12 from redash.handlers.queries import QueryForkResource, QueryRefreshResource, QueryListResource, QueryRecentResource, QuerySearchResource, QueryResource, MyQueriesResource
13 from redash.handlers.query_results import QueryResultListResource, QueryResultResource, JobResource
14 from redash.handlers.users import UserResource, UserListResource, UserInviteResource, UserResetPasswordResource
15 from redash.handlers.visualizations import VisualizationListResource
16 from redash.handlers.visualizations import VisualizationResource
17 from redash.handlers.widgets import WidgetResource, WidgetListResource
18 from redash.handlers.groups import GroupListResource, GroupResource, GroupMemberListResource, GroupMemberResource, \
19 GroupDataSourceListResource, GroupDataSourceResource
20 from redash.handlers.destinations import DestinationTypeListResource, DestinationResource, DestinationListResource
21 from redash.handlers.query_snippets import QuerySnippetListResource, QuerySnippetResource
22
23
24 class ApiExt(Api):
25 def add_org_resource(self, resource, *urls, **kwargs):
26 urls = [org_scoped_rule(url) for url in urls]
27 return self.add_resource(resource, *urls, **kwargs)
28
29 api = ApiExt()
30
31
32 @api.representation('application/json')
33 def json_representation(data, code, headers=None):
34 # Flask-Restful checks only for flask.Response but flask-login uses werkzeug.wrappers.Response
35 if isinstance(data, Response):
36 return data
37 resp = make_response(json_dumps(data), code)
38 resp.headers.extend(headers or {})
39 return resp
40
41
42 api.add_org_resource(AlertResource, '/api/alerts/<alert_id>', endpoint='alert')
43 api.add_org_resource(AlertSubscriptionListResource, '/api/alerts/<alert_id>/subscriptions', endpoint='alert_subscriptions')
44 api.add_org_resource(AlertSubscriptionResource, '/api/alerts/<alert_id>/subscriptions/<subscriber_id>', endpoint='alert_subscription')
45 api.add_org_resource(AlertListResource, '/api/alerts', endpoint='alerts')
46
47 api.add_org_resource(DashboardListResource, '/api/dashboards', endpoint='dashboards')
48 api.add_org_resource(RecentDashboardsResource, '/api/dashboards/recent', endpoint='recent_dashboards')
49 api.add_org_resource(DashboardResource, '/api/dashboards/<dashboard_slug>', endpoint='dashboard')
50 api.add_org_resource(PublicDashboardResource, '/api/dashboards/public/<token>', endpoint='public_dashboard')
51 api.add_org_resource(DashboardShareResource, '/api/dashboards/<dashboard_id>/share', endpoint='dashboard_share')
52
53 api.add_org_resource(DataSourceTypeListResource, '/api/data_sources/types', endpoint='data_source_types')
54 api.add_org_resource(DataSourceListResource, '/api/data_sources', endpoint='data_sources')
55 api.add_org_resource(DataSourceSchemaResource, '/api/data_sources/<data_source_id>/schema')
56 api.add_org_resource(DataSourcePauseResource, '/api/data_sources/<data_source_id>/pause')
57 api.add_org_resource(DataSourceTestResource, '/api/data_sources/<data_source_id>/test')
58 api.add_org_resource(DataSourceResource, '/api/data_sources/<data_source_id>', endpoint='data_source')
59
60 api.add_org_resource(GroupListResource, '/api/groups', endpoint='groups')
61 api.add_org_resource(GroupResource, '/api/groups/<group_id>', endpoint='group')
62 api.add_org_resource(GroupMemberListResource, '/api/groups/<group_id>/members', endpoint='group_members')
63 api.add_org_resource(GroupMemberResource, '/api/groups/<group_id>/members/<user_id>', endpoint='group_member')
64 api.add_org_resource(GroupDataSourceListResource, '/api/groups/<group_id>/data_sources', endpoint='group_data_sources')
65 api.add_org_resource(GroupDataSourceResource, '/api/groups/<group_id>/data_sources/<data_source_id>', endpoint='group_data_source')
66
67 api.add_org_resource(EventResource, '/api/events', endpoint='events')
68
69 api.add_org_resource(QuerySearchResource, '/api/queries/search', endpoint='queries_search')
70 api.add_org_resource(QueryRecentResource, '/api/queries/recent', endpoint='recent_queries')
71 api.add_org_resource(QueryListResource, '/api/queries', endpoint='queries')
72 api.add_org_resource(MyQueriesResource, '/api/queries/my', endpoint='my_queries')
73 api.add_org_resource(QueryRefreshResource, '/api/queries/<query_id>/refresh', endpoint='query_refresh')
74 api.add_org_resource(QueryResource, '/api/queries/<query_id>', endpoint='query')
75 api.add_org_resource(QueryForkResource, '/api/queries/<query_id>/fork', endpoint='query_fork')
76
77 api.add_org_resource(ObjectPermissionsListResource, '/api/<object_type>/<object_id>/acl', endpoint='object_permissions')
78 api.add_org_resource(CheckPermissionResource, '/api/<object_type>/<object_id>/acl/<access_type>', endpoint='check_permissions')
79
80 api.add_org_resource(QueryResultListResource, '/api/query_results', endpoint='query_results')
81 api.add_org_resource(QueryResultResource,
82 '/api/query_results/<query_result_id>',
83 '/api/queries/<query_id>/results.<filetype>',
84 '/api/queries/<query_id>/results/<query_result_id>.<filetype>',
85 endpoint='query_result')
86 api.add_org_resource(JobResource, '/api/jobs/<job_id>', endpoint='job')
87
88 api.add_org_resource(UserListResource, '/api/users', endpoint='users')
89 api.add_org_resource(UserResource, '/api/users/<user_id>', endpoint='user')
90 api.add_org_resource(UserInviteResource, '/api/users/<user_id>/invite', endpoint='user_invite')
91 api.add_org_resource(UserResetPasswordResource, '/api/users/<user_id>/reset_password', endpoint='user_reset_password')
92
93 api.add_org_resource(VisualizationListResource, '/api/visualizations', endpoint='visualizations')
94 api.add_org_resource(VisualizationResource, '/api/visualizations/<visualization_id>', endpoint='visualization')
95
96 api.add_org_resource(WidgetListResource, '/api/widgets', endpoint='widgets')
97 api.add_org_resource(WidgetResource, '/api/widgets/<int:widget_id>', endpoint='widget')
98
99 api.add_org_resource(DestinationTypeListResource, '/api/destinations/types', endpoint='destination_types')
100 api.add_org_resource(DestinationResource, '/api/destinations/<destination_id>', endpoint='destination')
101 api.add_org_resource(DestinationListResource, '/api/destinations', endpoint='destinations')
102
103 api.add_org_resource(QuerySnippetResource, '/api/query_snippets/<snippet_id>', endpoint='query_snippet')
104 api.add_org_resource(QuerySnippetListResource, '/api/query_snippets', endpoint='query_snippets')
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redash/handlers/api.py b/redash/handlers/api.py
--- a/redash/handlers/api.py
+++ b/redash/handlers/api.py
@@ -79,6 +79,7 @@
api.add_org_resource(QueryResultListResource, '/api/query_results', endpoint='query_results')
api.add_org_resource(QueryResultResource,
+ '/api/query_results/<query_result_id>.<filetype>',
'/api/query_results/<query_result_id>',
'/api/queries/<query_id>/results.<filetype>',
'/api/queries/<query_id>/results/<query_result_id>.<filetype>',
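The fix relies on Flask-RESTful allowing one resource to be registered under several URL rules. A stripped-down, standalone sketch of the same pattern (routes and names illustrative, not Redash's actual handler):

```python
from flask import Flask
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

class QueryResult(Resource):
    # Only the arguments matched by the chosen rule are passed in,
    # so unmatched ones fall back to their defaults.
    def get(self, query_result_id=None, query_id=None, filetype='json'):
        return {'query_id': query_id,
                'query_result_id': query_result_id,
                'filetype': filetype}

api.add_resource(
    QueryResult,
    '/api/query_results/<int:query_result_id>',
    '/api/query_results/<int:query_result_id>.<filetype>',
    '/api/queries/<int:query_id>/results.<filetype>',
    endpoint='query_result',
)

if __name__ == '__main__':
    app.run()
```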
| {"golden_diff": "diff --git a/redash/handlers/api.py b/redash/handlers/api.py\n--- a/redash/handlers/api.py\n+++ b/redash/handlers/api.py\n@@ -79,6 +79,7 @@\n \n api.add_org_resource(QueryResultListResource, '/api/query_results', endpoint='query_results')\n api.add_org_resource(QueryResultResource,\n+ '/api/query_results/<query_result_id>.<filetype>',\n '/api/query_results/<query_result_id>',\n '/api/queries/<query_id>/results.<filetype>',\n '/api/queries/<query_id>/results/<query_result_id>.<filetype>',\n", "issue": "Use can't download dataset before saving query\nBecause the query results url contains the query id, before saving the user can't download the dataset. \n\nWe need to allow addressing query results without query id.\n\n", "before_files": [{"content": "from flask_restful import Api\nfrom werkzeug.wrappers import Response\nfrom flask import make_response\n\nfrom redash.utils import json_dumps\nfrom redash.handlers.base import org_scoped_rule\nfrom redash.handlers.permissions import ObjectPermissionsListResource, CheckPermissionResource\nfrom redash.handlers.alerts import AlertResource, AlertListResource, AlertSubscriptionListResource, AlertSubscriptionResource\nfrom redash.handlers.dashboards import DashboardListResource, RecentDashboardsResource, DashboardResource, DashboardShareResource, PublicDashboardResource \nfrom redash.handlers.data_sources import DataSourceTypeListResource, DataSourceListResource, DataSourceSchemaResource, DataSourceResource, DataSourcePauseResource, DataSourceTestResource\nfrom redash.handlers.events import EventResource\nfrom redash.handlers.queries import QueryForkResource, QueryRefreshResource, QueryListResource, QueryRecentResource, QuerySearchResource, QueryResource, MyQueriesResource\nfrom redash.handlers.query_results import QueryResultListResource, QueryResultResource, JobResource\nfrom redash.handlers.users import UserResource, UserListResource, UserInviteResource, UserResetPasswordResource\nfrom redash.handlers.visualizations import VisualizationListResource\nfrom redash.handlers.visualizations import VisualizationResource\nfrom redash.handlers.widgets import WidgetResource, WidgetListResource\nfrom redash.handlers.groups import GroupListResource, GroupResource, GroupMemberListResource, GroupMemberResource, \\\n GroupDataSourceListResource, GroupDataSourceResource\nfrom redash.handlers.destinations import DestinationTypeListResource, DestinationResource, DestinationListResource\nfrom redash.handlers.query_snippets import QuerySnippetListResource, QuerySnippetResource\n\n\nclass ApiExt(Api):\n def add_org_resource(self, resource, *urls, **kwargs):\n urls = [org_scoped_rule(url) for url in urls]\n return self.add_resource(resource, *urls, **kwargs)\n\napi = ApiExt()\n\n\[email protected]('application/json')\ndef json_representation(data, code, headers=None):\n # Flask-Restful checks only for flask.Response but flask-login uses werkzeug.wrappers.Response\n if isinstance(data, Response):\n return data\n resp = make_response(json_dumps(data), code)\n resp.headers.extend(headers or {})\n return resp\n\n\napi.add_org_resource(AlertResource, '/api/alerts/<alert_id>', endpoint='alert')\napi.add_org_resource(AlertSubscriptionListResource, '/api/alerts/<alert_id>/subscriptions', endpoint='alert_subscriptions')\napi.add_org_resource(AlertSubscriptionResource, '/api/alerts/<alert_id>/subscriptions/<subscriber_id>', endpoint='alert_subscription')\napi.add_org_resource(AlertListResource, '/api/alerts', 
endpoint='alerts')\n\napi.add_org_resource(DashboardListResource, '/api/dashboards', endpoint='dashboards')\napi.add_org_resource(RecentDashboardsResource, '/api/dashboards/recent', endpoint='recent_dashboards')\napi.add_org_resource(DashboardResource, '/api/dashboards/<dashboard_slug>', endpoint='dashboard')\napi.add_org_resource(PublicDashboardResource, '/api/dashboards/public/<token>', endpoint='public_dashboard')\napi.add_org_resource(DashboardShareResource, '/api/dashboards/<dashboard_id>/share', endpoint='dashboard_share')\n\napi.add_org_resource(DataSourceTypeListResource, '/api/data_sources/types', endpoint='data_source_types')\napi.add_org_resource(DataSourceListResource, '/api/data_sources', endpoint='data_sources')\napi.add_org_resource(DataSourceSchemaResource, '/api/data_sources/<data_source_id>/schema')\napi.add_org_resource(DataSourcePauseResource, '/api/data_sources/<data_source_id>/pause')\napi.add_org_resource(DataSourceTestResource, '/api/data_sources/<data_source_id>/test')\napi.add_org_resource(DataSourceResource, '/api/data_sources/<data_source_id>', endpoint='data_source')\n\napi.add_org_resource(GroupListResource, '/api/groups', endpoint='groups')\napi.add_org_resource(GroupResource, '/api/groups/<group_id>', endpoint='group')\napi.add_org_resource(GroupMemberListResource, '/api/groups/<group_id>/members', endpoint='group_members')\napi.add_org_resource(GroupMemberResource, '/api/groups/<group_id>/members/<user_id>', endpoint='group_member')\napi.add_org_resource(GroupDataSourceListResource, '/api/groups/<group_id>/data_sources', endpoint='group_data_sources')\napi.add_org_resource(GroupDataSourceResource, '/api/groups/<group_id>/data_sources/<data_source_id>', endpoint='group_data_source')\n\napi.add_org_resource(EventResource, '/api/events', endpoint='events')\n\napi.add_org_resource(QuerySearchResource, '/api/queries/search', endpoint='queries_search')\napi.add_org_resource(QueryRecentResource, '/api/queries/recent', endpoint='recent_queries')\napi.add_org_resource(QueryListResource, '/api/queries', endpoint='queries')\napi.add_org_resource(MyQueriesResource, '/api/queries/my', endpoint='my_queries')\napi.add_org_resource(QueryRefreshResource, '/api/queries/<query_id>/refresh', endpoint='query_refresh')\napi.add_org_resource(QueryResource, '/api/queries/<query_id>', endpoint='query')\napi.add_org_resource(QueryForkResource, '/api/queries/<query_id>/fork', endpoint='query_fork')\n\napi.add_org_resource(ObjectPermissionsListResource, '/api/<object_type>/<object_id>/acl', endpoint='object_permissions')\napi.add_org_resource(CheckPermissionResource, '/api/<object_type>/<object_id>/acl/<access_type>', endpoint='check_permissions')\n\napi.add_org_resource(QueryResultListResource, '/api/query_results', endpoint='query_results')\napi.add_org_resource(QueryResultResource,\n '/api/query_results/<query_result_id>',\n '/api/queries/<query_id>/results.<filetype>',\n '/api/queries/<query_id>/results/<query_result_id>.<filetype>',\n endpoint='query_result')\napi.add_org_resource(JobResource, '/api/jobs/<job_id>', endpoint='job')\n\napi.add_org_resource(UserListResource, '/api/users', endpoint='users')\napi.add_org_resource(UserResource, '/api/users/<user_id>', endpoint='user')\napi.add_org_resource(UserInviteResource, '/api/users/<user_id>/invite', endpoint='user_invite')\napi.add_org_resource(UserResetPasswordResource, '/api/users/<user_id>/reset_password', endpoint='user_reset_password')\n\napi.add_org_resource(VisualizationListResource, '/api/visualizations', 
endpoint='visualizations')\napi.add_org_resource(VisualizationResource, '/api/visualizations/<visualization_id>', endpoint='visualization')\n\napi.add_org_resource(WidgetListResource, '/api/widgets', endpoint='widgets')\napi.add_org_resource(WidgetResource, '/api/widgets/<int:widget_id>', endpoint='widget')\n\napi.add_org_resource(DestinationTypeListResource, '/api/destinations/types', endpoint='destination_types')\napi.add_org_resource(DestinationResource, '/api/destinations/<destination_id>', endpoint='destination')\napi.add_org_resource(DestinationListResource, '/api/destinations', endpoint='destinations')\n\napi.add_org_resource(QuerySnippetResource, '/api/query_snippets/<snippet_id>', endpoint='query_snippet')\napi.add_org_resource(QuerySnippetListResource, '/api/query_snippets', endpoint='query_snippets')\n", "path": "redash/handlers/api.py"}], "after_files": [{"content": "from flask_restful import Api\nfrom werkzeug.wrappers import Response\nfrom flask import make_response\n\nfrom redash.utils import json_dumps\nfrom redash.handlers.base import org_scoped_rule\nfrom redash.handlers.permissions import ObjectPermissionsListResource, CheckPermissionResource\nfrom redash.handlers.alerts import AlertResource, AlertListResource, AlertSubscriptionListResource, AlertSubscriptionResource\nfrom redash.handlers.dashboards import DashboardListResource, RecentDashboardsResource, DashboardResource, DashboardShareResource, PublicDashboardResource \nfrom redash.handlers.data_sources import DataSourceTypeListResource, DataSourceListResource, DataSourceSchemaResource, DataSourceResource, DataSourcePauseResource, DataSourceTestResource\nfrom redash.handlers.events import EventResource\nfrom redash.handlers.queries import QueryForkResource, QueryRefreshResource, QueryListResource, QueryRecentResource, QuerySearchResource, QueryResource, MyQueriesResource\nfrom redash.handlers.query_results import QueryResultListResource, QueryResultResource, JobResource\nfrom redash.handlers.users import UserResource, UserListResource, UserInviteResource, UserResetPasswordResource\nfrom redash.handlers.visualizations import VisualizationListResource\nfrom redash.handlers.visualizations import VisualizationResource\nfrom redash.handlers.widgets import WidgetResource, WidgetListResource\nfrom redash.handlers.groups import GroupListResource, GroupResource, GroupMemberListResource, GroupMemberResource, \\\n GroupDataSourceListResource, GroupDataSourceResource\nfrom redash.handlers.destinations import DestinationTypeListResource, DestinationResource, DestinationListResource\nfrom redash.handlers.query_snippets import QuerySnippetListResource, QuerySnippetResource\n\n\nclass ApiExt(Api):\n def add_org_resource(self, resource, *urls, **kwargs):\n urls = [org_scoped_rule(url) for url in urls]\n return self.add_resource(resource, *urls, **kwargs)\n\napi = ApiExt()\n\n\[email protected]('application/json')\ndef json_representation(data, code, headers=None):\n # Flask-Restful checks only for flask.Response but flask-login uses werkzeug.wrappers.Response\n if isinstance(data, Response):\n return data\n resp = make_response(json_dumps(data), code)\n resp.headers.extend(headers or {})\n return resp\n\n\napi.add_org_resource(AlertResource, '/api/alerts/<alert_id>', endpoint='alert')\napi.add_org_resource(AlertSubscriptionListResource, '/api/alerts/<alert_id>/subscriptions', endpoint='alert_subscriptions')\napi.add_org_resource(AlertSubscriptionResource, '/api/alerts/<alert_id>/subscriptions/<subscriber_id>', 
endpoint='alert_subscription')\napi.add_org_resource(AlertListResource, '/api/alerts', endpoint='alerts')\n\napi.add_org_resource(DashboardListResource, '/api/dashboards', endpoint='dashboards')\napi.add_org_resource(RecentDashboardsResource, '/api/dashboards/recent', endpoint='recent_dashboards')\napi.add_org_resource(DashboardResource, '/api/dashboards/<dashboard_slug>', endpoint='dashboard')\napi.add_org_resource(PublicDashboardResource, '/api/dashboards/public/<token>', endpoint='public_dashboard')\napi.add_org_resource(DashboardShareResource, '/api/dashboards/<dashboard_id>/share', endpoint='dashboard_share')\n\napi.add_org_resource(DataSourceTypeListResource, '/api/data_sources/types', endpoint='data_source_types')\napi.add_org_resource(DataSourceListResource, '/api/data_sources', endpoint='data_sources')\napi.add_org_resource(DataSourceSchemaResource, '/api/data_sources/<data_source_id>/schema')\napi.add_org_resource(DataSourcePauseResource, '/api/data_sources/<data_source_id>/pause')\napi.add_org_resource(DataSourceTestResource, '/api/data_sources/<data_source_id>/test')\napi.add_org_resource(DataSourceResource, '/api/data_sources/<data_source_id>', endpoint='data_source')\n\napi.add_org_resource(GroupListResource, '/api/groups', endpoint='groups')\napi.add_org_resource(GroupResource, '/api/groups/<group_id>', endpoint='group')\napi.add_org_resource(GroupMemberListResource, '/api/groups/<group_id>/members', endpoint='group_members')\napi.add_org_resource(GroupMemberResource, '/api/groups/<group_id>/members/<user_id>', endpoint='group_member')\napi.add_org_resource(GroupDataSourceListResource, '/api/groups/<group_id>/data_sources', endpoint='group_data_sources')\napi.add_org_resource(GroupDataSourceResource, '/api/groups/<group_id>/data_sources/<data_source_id>', endpoint='group_data_source')\n\napi.add_org_resource(EventResource, '/api/events', endpoint='events')\n\napi.add_org_resource(QuerySearchResource, '/api/queries/search', endpoint='queries_search')\napi.add_org_resource(QueryRecentResource, '/api/queries/recent', endpoint='recent_queries')\napi.add_org_resource(QueryListResource, '/api/queries', endpoint='queries')\napi.add_org_resource(MyQueriesResource, '/api/queries/my', endpoint='my_queries')\napi.add_org_resource(QueryRefreshResource, '/api/queries/<query_id>/refresh', endpoint='query_refresh')\napi.add_org_resource(QueryResource, '/api/queries/<query_id>', endpoint='query')\napi.add_org_resource(QueryForkResource, '/api/queries/<query_id>/fork', endpoint='query_fork')\n\napi.add_org_resource(ObjectPermissionsListResource, '/api/<object_type>/<object_id>/acl', endpoint='object_permissions')\napi.add_org_resource(CheckPermissionResource, '/api/<object_type>/<object_id>/acl/<access_type>', endpoint='check_permissions')\n\napi.add_org_resource(QueryResultListResource, '/api/query_results', endpoint='query_results')\napi.add_org_resource(QueryResultResource,\n '/api/query_results/<query_result_id>.<filetype>',\n '/api/query_results/<query_result_id>',\n '/api/queries/<query_id>/results.<filetype>',\n '/api/queries/<query_id>/results/<query_result_id>.<filetype>',\n endpoint='query_result')\napi.add_org_resource(JobResource, '/api/jobs/<job_id>', endpoint='job')\n\napi.add_org_resource(UserListResource, '/api/users', endpoint='users')\napi.add_org_resource(UserResource, '/api/users/<user_id>', endpoint='user')\napi.add_org_resource(UserInviteResource, '/api/users/<user_id>/invite', endpoint='user_invite')\napi.add_org_resource(UserResetPasswordResource, 
'/api/users/<user_id>/reset_password', endpoint='user_reset_password')\n\napi.add_org_resource(VisualizationListResource, '/api/visualizations', endpoint='visualizations')\napi.add_org_resource(VisualizationResource, '/api/visualizations/<visualization_id>', endpoint='visualization')\n\napi.add_org_resource(WidgetListResource, '/api/widgets', endpoint='widgets')\napi.add_org_resource(WidgetResource, '/api/widgets/<int:widget_id>', endpoint='widget')\n\napi.add_org_resource(DestinationTypeListResource, '/api/destinations/types', endpoint='destination_types')\napi.add_org_resource(DestinationResource, '/api/destinations/<destination_id>', endpoint='destination')\napi.add_org_resource(DestinationListResource, '/api/destinations', endpoint='destinations')\n\napi.add_org_resource(QuerySnippetResource, '/api/query_snippets/<snippet_id>', endpoint='query_snippet')\napi.add_org_resource(QuerySnippetListResource, '/api/query_snippets', endpoint='query_snippets')\n", "path": "redash/handlers/api.py"}]} | 2,015 | 138 |
gh_patches_debug_3675 | rasdani/github-patches | git_diff | conan-io__conan-center-index-8132 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[request] perfetto/v21.0
### Package Details
* Package Name/Version: **perfetto/v21.0**
* Changelog: **https://github.com/google/perfetto/releases/tag/v21.0**
The above-mentioned version was newly released by the upstream project and is not yet available as a recipe. A PR follows.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/perfetto/all/conanfile.py`
Content:
```
1 from conans import ConanFile, CMake, tools
2 from conans.errors import ConanInvalidConfiguration
3
4 import os
5
6 required_conan_version = ">=1.33.0"
7
8
9 class PerfettoConan(ConanFile):
10 name = "perfetto"
11 license = "Apache-2.0"
12 homepage = "https://perfetto.dev"
13 url = "https://github.com/conan-io/conan-center-index"
14 description = "Performance instrumentation and tracing for Android, Linux and Chrome"
15 topics = ("linux", "profiling", "tracing")
16 settings = "os", "compiler", "build_type", "arch"
17 options = {
18 "shared": [True, False],
19 "fPIC": [True, False]
20 }
21 default_options = {
22 "shared": False,
23 "fPIC": True
24 }
25
26 exports_sources = ["CMakeLists.txt"]
27 generators = "cmake"
28
29 _cmake = None
30
31 @property
32 def _source_subfolder(self):
33 return "source_subfolder"
34
35 def config_options(self):
36 if self.settings.os == "Windows":
37 del self.options.fPIC
38
39 def configure(self):
40 if self.options.shared:
41 del self.options.fPIC
42
43 def validate(self):
44 if self.settings.compiler == "gcc" and tools.Version(self.settings.compiler.version) < 7:
45 raise ConanInvalidConfiguration ("perfetto requires gcc >= 7")
46 if self.settings.compiler.cppstd:
47 tools.check_min_cppstd(self, 11)
48
49 def source(self):
50 tools.get(**self.conan_data["sources"][self.version],
51 strip_root=True, destination=self._source_subfolder)
52
53 def _configure_cmake(self):
54 if self._cmake:
55 return self._cmake
56 self._cmake = CMake(self)
57 self._cmake.configure()
58 return self._cmake
59
60 def build(self):
61 cmake = self._configure_cmake()
62 cmake.build()
63
64 def package(self):
65 self.copy("LICENSE", src=self._source_subfolder, dst="licenses")
66 cmake = self._configure_cmake()
67 cmake.install()
68
69 def package_info(self):
70 self.cpp_info.libs = ["perfetto"]
71 self.cpp_info.names["pkgconfig"] = "perfetto"
72 if self.settings.os == "Linux":
73 self.cpp_info.system_libs.append("pthread")
74 if self.settings.os == "Windows":
75 self.cpp_info.system_libs.append("ws2_32")
76
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/perfetto/all/conanfile.py b/recipes/perfetto/all/conanfile.py
--- a/recipes/perfetto/all/conanfile.py
+++ b/recipes/perfetto/all/conanfile.py
@@ -68,7 +68,6 @@
def package_info(self):
self.cpp_info.libs = ["perfetto"]
- self.cpp_info.names["pkgconfig"] = "perfetto"
if self.settings.os == "Linux":
self.cpp_info.system_libs.append("pthread")
if self.settings.os == "Windows":
| {"golden_diff": "diff --git a/recipes/perfetto/all/conanfile.py b/recipes/perfetto/all/conanfile.py\n--- a/recipes/perfetto/all/conanfile.py\n+++ b/recipes/perfetto/all/conanfile.py\n@@ -68,7 +68,6 @@\n \n def package_info(self):\n self.cpp_info.libs = [\"perfetto\"]\n- self.cpp_info.names[\"pkgconfig\"] = \"perfetto\"\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"pthread\")\n if self.settings.os == \"Windows\":\n", "issue": "[request] perfetto/v21.0\n### Package Details\r\n * Package Name/Version: **perfetto/v21.0**\r\n * Changelog: **https://github.com/google/perfetto/releases/tag/v21.0**\r\n\r\n\r\nThe above mentioned version is newly released by the upstream project and not yet available as a recipe. PR follows\r\n\n", "before_files": [{"content": "from conans import ConanFile, CMake, tools\nfrom conans.errors import ConanInvalidConfiguration\n\nimport os\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass PerfettoConan(ConanFile):\n name = \"perfetto\"\n license = \"Apache-2.0\"\n homepage = \"https://perfetto.dev\"\n url = \"https://github.com/conan-io/conan-center-index\"\n description = \"Performance instrumentation and tracing for Android, Linux and Chrome\"\n topics = (\"linux\", \"profiling\", \"tracing\")\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True\n }\n\n exports_sources = [\"CMakeLists.txt\"]\n generators = \"cmake\"\n\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n def validate(self):\n if self.settings.compiler == \"gcc\" and tools.Version(self.settings.compiler.version) < 7:\n raise ConanInvalidConfiguration (\"perfetto requires gcc >= 7\")\n if self.settings.compiler.cppstd:\n tools.check_min_cppstd(self, 11)\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n strip_root=True, destination=self._source_subfolder)\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.configure()\n return self._cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(\"LICENSE\", src=self._source_subfolder, dst=\"licenses\")\n cmake = self._configure_cmake()\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs = [\"perfetto\"]\n self.cpp_info.names[\"pkgconfig\"] = \"perfetto\"\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"pthread\")\n if self.settings.os == \"Windows\":\n self.cpp_info.system_libs.append(\"ws2_32\")\n\n", "path": "recipes/perfetto/all/conanfile.py"}], "after_files": [{"content": "from conans import ConanFile, CMake, tools\nfrom conans.errors import ConanInvalidConfiguration\n\nimport os\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass PerfettoConan(ConanFile):\n name = \"perfetto\"\n license = \"Apache-2.0\"\n homepage = \"https://perfetto.dev\"\n url = \"https://github.com/conan-io/conan-center-index\"\n description = \"Performance instrumentation and tracing for Android, Linux and Chrome\"\n topics = (\"linux\", \"profiling\", \"tracing\")\n settings = \"os\", \"compiler\", \"build_type\", \"arch\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False]\n }\n default_options = {\n 
\"shared\": False,\n \"fPIC\": True\n }\n\n exports_sources = [\"CMakeLists.txt\"]\n generators = \"cmake\"\n\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n def validate(self):\n if self.settings.compiler == \"gcc\" and tools.Version(self.settings.compiler.version) < 7:\n raise ConanInvalidConfiguration (\"perfetto requires gcc >= 7\")\n if self.settings.compiler.cppstd:\n tools.check_min_cppstd(self, 11)\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n strip_root=True, destination=self._source_subfolder)\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n self._cmake.configure()\n return self._cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(\"LICENSE\", src=self._source_subfolder, dst=\"licenses\")\n cmake = self._configure_cmake()\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs = [\"perfetto\"]\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"pthread\")\n if self.settings.os == \"Windows\":\n self.cpp_info.system_libs.append(\"ws2_32\")\n\n", "path": "recipes/perfetto/all/conanfile.py"}]} | 1,029 | 125 |
gh_patches_debug_15591 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-2714 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: `_smtp` is missing from SMTPMailer's patched `__init__`; the attribute was introduced in zope.sendmail 4.1.0
### What I did:
I am working on internal add-on development (adding support for Plone 5.2); sending mail notifications is one of its functionalities. FYI: somehow I forgot to activate MockMailhost.
When I ran all my existing unit tests, I got unexpected errors:
```
File "/home/nazrul/.cache/buildout/eggs/plone.testing-7.0.0-py2.7.egg/plone/testing/zope.py", line 859, in testTearDown
transaction.abort()
File "/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_manager.py", line 255, in abort
return self.manager.abort()
File "/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_manager.py", line 136, in abort
return self.get().abort()
File "/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_transaction.py", line 529, in abort
reraise(t, v, tb)
File "/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_transaction.py", line 515, in abort
rm.abort(self)
File "/home/nazrul/.cache/buildout/eggs/zope.sendmail-4.2-py2.7.egg/zope/sendmail/delivery.py", line 57, in abort
self.onAbort()
File "/home/nazrul/.cache/buildout/eggs/zope.sendmail-4.2-py2.7.egg/zope/sendmail/mailer.py", line 78, in abort
if self.connection is None:
File "/home/nazrul/.cache/buildout/eggs/zope.sendmail-4.2-py2.7.egg/zope/sendmail/mailer.py", line 48, in <lambda>
return property(lambda self: getattr(self._smtp, name),
AttributeError: 'SMTPMailer' object has no attribute '_smtp'
```
All tests pass for earlier versions of Plone.
### What is my prediction:
After a day-long investigation, I found that the [SMTPMailer `__init__` method is patched here](https://github.com/plone/Products.CMFPlone/blob/master/Products/CMFPlone/patches/sendmail.py#L39). Besides that, I also found that [as of zope.sendmail 4.1.0, SMTPMailer's `__init__`](https://github.com/zopefoundation/zope.sendmail/blob/4.1.0/src/zope/sendmail/mailer.py#L45)
introduces a new attribute `_smtp`, which the patch ignores.
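To illustrate the prediction, here is a minimal sketch of the kind of repair I would expect — assuming zope.sendmail >= 4.1.0, where the lazy `_smtp` proxy is backed by an `_SMTPState` instance (the `patched_init` name and the elided registry lines are placeholders, not the shipped code):
```python
# Hypothetical sketch: the Plone patch must also create the _SMTPState
# holder that zope.sendmail >= 4.1.0 expects on every SMTPMailer instance.
from zope.sendmail.mailer import SMTPMailer, _SMTPState

def patched_init(self, hostname='localhost', port=25, username=None,
                 password=None, no_tls=False, force_tls=False):
    # ... copy hostname/port/credentials from the Plone registry, as the
    # existing patch already does ...
    self.force_tls = force_tls
    self.no_tls = no_tls
    self._smtp = _SMTPState()  # the attribute the current patch forgets

SMTPMailer.__init__ = patched_init
```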
### How to reproduce:
This is only for Plone 5.2.x
1. Disable MockMailhost
2. Try to send email from your test code
3. Or try sending mail from your production/testing server.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Products/CMFPlone/patches/sendmail.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from plone.registry.interfaces import IRegistry
3 from Products.CMFPlone.interfaces import IMailSchema
4 from transaction._transaction import Status
5 from zope.component import getUtility
6 from zope.sendmail.mailer import SMTPMailer
7
8 import logging
9 import transaction
10
11 log = logging.getLogger("MailDataManager")
12
13
14 # BBB remove when zope.sendmail 3.8.0 is released.
15 def catchAllExceptions(func):
16 def _catch(*args, **kwargs):
17 try:
18 return func(*args, **kwargs)
19 except Exception as e:
20 txn = transaction.get()
21 if txn.status == Status.ACTIVE:
22 # sent with immediate=True
23 raise
24 else:
25 # Avoid raising errors during tpc_finish as these could lead to
26 # inconsistent state
27 log.exception(e)
28
29 return _catch
30
31
32 def applyPatches():
33 from zope.sendmail.mailer import SMTPMailer
34 old_mailer = getattr(SMTPMailer, 'vote', None) is None
35 if old_mailer:
36 SMTPMailer.send = catchAllExceptions(SMTPMailer.send)
37
38
39 def new_init(
40 self,
41 hostname='localhost',
42 port=25,
43 username=None,
44 password=None,
45 no_tls=False,
46 force_tls=False):
47
48 registry = getUtility(IRegistry)
49 mail_settings = registry.forInterface(IMailSchema, prefix='plone')
50 self.hostname = mail_settings.smtp_host
51 self.port = mail_settings.smtp_port
52 self.username = mail_settings.smtp_userid
53 self.password = mail_settings.smtp_pass
54 self.force_tls = force_tls
55 self.no_tls = no_tls
56
57 SMTPMailer.__init__ = new_init
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Products/CMFPlone/patches/sendmail.py b/Products/CMFPlone/patches/sendmail.py
--- a/Products/CMFPlone/patches/sendmail.py
+++ b/Products/CMFPlone/patches/sendmail.py
@@ -3,11 +3,13 @@
from Products.CMFPlone.interfaces import IMailSchema
from transaction._transaction import Status
from zope.component import getUtility
+from zope.sendmail.mailer import _SMTPState
from zope.sendmail.mailer import SMTPMailer
import logging
import transaction
+
log = logging.getLogger("MailDataManager")
@@ -53,5 +55,7 @@
self.password = mail_settings.smtp_pass
self.force_tls = force_tls
self.no_tls = no_tls
+ self._smtp = _SMTPState()
+
SMTPMailer.__init__ = new_init
| {"golden_diff": "diff --git a/Products/CMFPlone/patches/sendmail.py b/Products/CMFPlone/patches/sendmail.py\n--- a/Products/CMFPlone/patches/sendmail.py\n+++ b/Products/CMFPlone/patches/sendmail.py\n@@ -3,11 +3,13 @@\n from Products.CMFPlone.interfaces import IMailSchema\n from transaction._transaction import Status\n from zope.component import getUtility\n+from zope.sendmail.mailer import _SMTPState\n from zope.sendmail.mailer import SMTPMailer\n \n import logging\n import transaction\n \n+\n log = logging.getLogger(\"MailDataManager\")\n \n \n@@ -53,5 +55,7 @@\n self.password = mail_settings.smtp_pass\n self.force_tls = force_tls\n self.no_tls = no_tls\n+ self._smtp = _SMTPState()\n+\n \n SMTPMailer.__init__ = new_init\n", "issue": "Bug: `_smtp` is missing at SMTPMailer's __init__ patching what is introduced in zope.sendmail from version 4.1.0\n### What I did:\r\nI am working on internal Addon development (adding support for Plone 5.2) , sending mail notification one of the part functionalities. FYI: some how I forget to active MockMailhost.\r\nWhen I run all my existing unittests and got unexpected errors : \r\n\r\n```\r\nFile \"/home/nazrul/.cache/buildout/eggs/plone.testing-7.0.0-py2.7.egg/plone/testing/zope.py\", line 859, in testTearDown\r\n transaction.abort()\r\n File \"/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_manager.py\", line 255, in abort\r\n return self.manager.abort()\r\n File \"/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_manager.py\", line 136, in abort\r\n return self.get().abort()\r\n File \"/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_transaction.py\", line 529, in abort\r\n reraise(t, v, tb)\r\n File \"/home/nazrul/.cache/buildout/eggs/transaction-2.4.0-py2.7.egg/transaction/_transaction.py\", line 515, in abort\r\n rm.abort(self)\r\n File \"/home/nazrul/.cache/buildout/eggs/zope.sendmail-4.2-py2.7.egg/zope/sendmail/delivery.py\", line 57, in abort\r\n self.onAbort()\r\n File \"/home/nazrul/.cache/buildout/eggs/zope.sendmail-4.2-py2.7.egg/zope/sendmail/mailer.py\", line 78, in abort\r\n if self.connection is None:\r\n File \"/home/nazrul/.cache/buildout/eggs/zope.sendmail-4.2-py2.7.egg/zope/sendmail/mailer.py\", line 48, in <lambda>\r\n return property(lambda self: getattr(self._smtp, name),\r\nAttributeError: 'SMTPMailer' object has no attribute '_smtp'\r\n```\r\nAll tests are passing for earlier version of Plone.\r\n\r\n### What is my prediction: \r\nAfter day long investigation, I found [SMTPMailer __init__ method is patched here](https://github.com/plone/Products.CMFPlone/blob/master/Products/CMFPlone/patches/sendmail.py#L39) , beside Also found that [zope.sendmail from version 4.1.0 the SMTPMailer's __init__](https://github.com/zopefoundation/zope.sendmail/blob/4.1.0/src/zope/sendmail/mailer.py#L45)\r\nhas been introduced a new attribute `_smtp` what is ignored during patching.\r\n\r\n### How to reproduce:\r\n\r\nThis is only for Plone 5.2.x\r\n\r\n1. disable Mock Mail\r\n2. Try to send email from your tests code \r\n3. 
Or try send mail from your production/testing server.\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom plone.registry.interfaces import IRegistry\nfrom Products.CMFPlone.interfaces import IMailSchema\nfrom transaction._transaction import Status\nfrom zope.component import getUtility\nfrom zope.sendmail.mailer import SMTPMailer\n\nimport logging\nimport transaction\n\nlog = logging.getLogger(\"MailDataManager\")\n\n\n# BBB remove when zope.sendmail 3.8.0 is released.\ndef catchAllExceptions(func):\n def _catch(*args, **kwargs):\n try:\n return func(*args, **kwargs)\n except Exception as e:\n txn = transaction.get()\n if txn.status == Status.ACTIVE:\n # sent with immediate=True\n raise\n else:\n # Avoid raising errors during tpc_finish as these could lead to\n # inconsistent state\n log.exception(e)\n\n return _catch\n\n\ndef applyPatches():\n from zope.sendmail.mailer import SMTPMailer\n old_mailer = getattr(SMTPMailer, 'vote', None) is None\n if old_mailer:\n SMTPMailer.send = catchAllExceptions(SMTPMailer.send)\n\n\ndef new_init(\n self,\n hostname='localhost',\n port=25,\n username=None,\n password=None,\n no_tls=False,\n force_tls=False):\n\n registry = getUtility(IRegistry)\n mail_settings = registry.forInterface(IMailSchema, prefix='plone')\n self.hostname = mail_settings.smtp_host\n self.port = mail_settings.smtp_port\n self.username = mail_settings.smtp_userid\n self.password = mail_settings.smtp_pass\n self.force_tls = force_tls\n self.no_tls = no_tls\n\nSMTPMailer.__init__ = new_init\n", "path": "Products/CMFPlone/patches/sendmail.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom plone.registry.interfaces import IRegistry\nfrom Products.CMFPlone.interfaces import IMailSchema\nfrom transaction._transaction import Status\nfrom zope.component import getUtility\nfrom zope.sendmail.mailer import _SMTPState\nfrom zope.sendmail.mailer import SMTPMailer\n\nimport logging\nimport transaction\n\n\nlog = logging.getLogger(\"MailDataManager\")\n\n\n# BBB remove when zope.sendmail 3.8.0 is released.\ndef catchAllExceptions(func):\n def _catch(*args, **kwargs):\n try:\n return func(*args, **kwargs)\n except Exception as e:\n txn = transaction.get()\n if txn.status == Status.ACTIVE:\n # sent with immediate=True\n raise\n else:\n # Avoid raising errors during tpc_finish as these could lead to\n # inconsistent state\n log.exception(e)\n\n return _catch\n\n\ndef applyPatches():\n from zope.sendmail.mailer import SMTPMailer\n old_mailer = getattr(SMTPMailer, 'vote', None) is None\n if old_mailer:\n SMTPMailer.send = catchAllExceptions(SMTPMailer.send)\n\n\ndef new_init(\n self,\n hostname='localhost',\n port=25,\n username=None,\n password=None,\n no_tls=False,\n force_tls=False):\n\n registry = getUtility(IRegistry)\n mail_settings = registry.forInterface(IMailSchema, prefix='plone')\n self.hostname = mail_settings.smtp_host\n self.port = mail_settings.smtp_port\n self.username = mail_settings.smtp_userid\n self.password = mail_settings.smtp_pass\n self.force_tls = force_tls\n self.no_tls = no_tls\n self._smtp = _SMTPState()\n\n\nSMTPMailer.__init__ = new_init\n", "path": "Products/CMFPlone/patches/sendmail.py"}]} | 1,465 | 197 |
gh_patches_debug_28476 | rasdani/github-patches | git_diff | pantsbuild__pants-13669 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docker environment not passed to docker publish command
**Describe the bug**
The configured `[docker].env_vars` are not passed to `docker publish`.
**Pants version**
2.9.0.dev1
**OS**
Any
**Additional info**
As reported by chenkai036 on [Slack](https://pantsbuild.slack.com/archives/C046T6T9U/p1637248172462800?thread_ts=1637136003.393600&cid=C046T6T9U)
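For reference, a minimal sketch of the plumbing one would expect — mirroring how `build_image` already threads `env` into its `Process`; the `env` parameter on the push path is my assumption of the fix, not the current code:
```python
# Hypothetical sketch: forward the resolved environment to `docker push`,
# exactly as build_image already does for `docker build`.
from pants.engine.process import Process, ProcessCacheScope

def push_image(self, tags, env=None):
    if not tags:
        return None
    return Process(
        argv=(self.path, "push", *tags),
        cache_scope=ProcessCacheScope.PER_SESSION,
        description=f"Pushing docker image {tags[0]}",
        env=env,  # currently omitted, so [docker].env_vars never reach the push
    )
```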
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/backend/docker/util_rules/docker_binary.py`
Content:
```
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 from dataclasses import dataclass
7 from typing import Mapping
8
9 from pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs
10 from pants.engine.fs import Digest
11 from pants.engine.process import (
12 BinaryNotFoundError,
13 BinaryPath,
14 BinaryPathRequest,
15 BinaryPaths,
16 BinaryPathTest,
17 Process,
18 ProcessCacheScope,
19 SearchPath,
20 )
21 from pants.engine.rules import Get, collect_rules, rule
22 from pants.util.logging import LogLevel
23 from pants.util.strutil import pluralize
24
25
26 class DockerBinary(BinaryPath):
27 """The `docker` binary."""
28
29 DEFAULT_SEARCH_PATH = SearchPath(("/usr/bin", "/bin", "/usr/local/bin"))
30
31 def build_image(
32 self,
33 tags: tuple[str, ...],
34 digest: Digest,
35 dockerfile: str | None = None,
36 build_args: DockerBuildArgs | None = None,
37 env: Mapping[str, str] | None = None,
38 ) -> Process:
39 args = [self.path, "build"]
40
41 for tag in tags:
42 args.extend(["-t", tag])
43
44 if build_args:
45 for build_arg in build_args:
46 args.extend(["--build-arg", build_arg])
47
48 if dockerfile:
49 args.extend(["-f", dockerfile])
50
51 # Add build context root.
52 args.append(".")
53
54 return Process(
55 argv=tuple(args),
56 description=(
57 f"Building docker image {tags[0]}"
58 + (f" +{pluralize(len(tags)-1, 'additional tag')}." if len(tags) > 1 else ".")
59 ),
60 env=env,
61 input_digest=digest,
62 cache_scope=ProcessCacheScope.PER_SESSION,
63 )
64
65 def push_image(self, tags: tuple[str, ...]) -> Process | None:
66 if not tags:
67 return None
68
69 return Process(
70 argv=(self.path, "push", *tags),
71 cache_scope=ProcessCacheScope.PER_SESSION,
72 description=f"Pushing docker image {tags[0]}",
73 )
74
75
76 @dataclass(frozen=True)
77 class DockerBinaryRequest:
78 search_path: SearchPath = DockerBinary.DEFAULT_SEARCH_PATH
79
80
81 @rule(desc="Finding the `docker` binary", level=LogLevel.DEBUG)
82 async def find_docker(docker_request: DockerBinaryRequest) -> DockerBinary:
83 request = BinaryPathRequest(
84 binary_name="docker",
85 search_path=docker_request.search_path,
86 test=BinaryPathTest(args=["-v"]),
87 )
88 paths = await Get(BinaryPaths, BinaryPathRequest, request)
89 first_path = paths.first_path
90 if not first_path:
91 raise BinaryNotFoundError.from_request(request, rationale="interact with the docker daemon")
92 return DockerBinary(first_path.path, first_path.fingerprint)
93
94
95 @rule
96 async def get_docker() -> DockerBinary:
97 return await Get(DockerBinary, DockerBinaryRequest())
98
99
100 def rules():
101 return collect_rules()
102
```
Path: `src/python/pants/backend/docker/goals/publish.py`
Content:
```
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import logging
7 from dataclasses import dataclass
8 from itertools import chain
9 from typing import cast
10
11 from pants.backend.docker.goals.package_image import BuiltDockerImage
12 from pants.backend.docker.subsystems.docker_options import DockerOptions
13 from pants.backend.docker.target_types import DockerRegistriesField, DockerSkipPushField
14 from pants.backend.docker.util_rules.docker_binary import DockerBinary
15 from pants.core.goals.publish import (
16 PublishFieldSet,
17 PublishOutputData,
18 PublishPackages,
19 PublishProcesses,
20 PublishRequest,
21 )
22 from pants.engine.process import InteractiveProcess
23 from pants.engine.rules import collect_rules, rule
24
25 logger = logging.getLogger(__name__)
26
27
28 class PublishDockerImageRequest(PublishRequest):
29 pass
30
31
32 @dataclass(frozen=True)
33 class PublishDockerImageFieldSet(PublishFieldSet):
34 publish_request_type = PublishDockerImageRequest
35 required_fields = (DockerRegistriesField,)
36
37 registries: DockerRegistriesField
38 skip_push: DockerSkipPushField
39
40 def get_output_data(self) -> PublishOutputData:
41 return PublishOutputData(
42 {
43 "publisher": "docker",
44 "registries": self.registries.value or (),
45 **super().get_output_data(),
46 }
47 )
48
49
50 @rule
51 async def push_docker_images(
52 request: PublishDockerImageRequest, docker: DockerBinary, options: DockerOptions
53 ) -> PublishProcesses:
54 tags = tuple(
55 chain.from_iterable(
56 cast(BuiltDockerImage, image).tags
57 for pkg in request.packages
58 for image in pkg.artifacts
59 )
60 )
61
62 if request.field_set.skip_push.value:
63 return PublishProcesses(
64 [
65 PublishPackages(
66 names=tags,
67 description=f"(by `{request.field_set.skip_push.alias}` on {request.field_set.address})",
68 ),
69 ]
70 )
71
72 process = docker.push_image(tags)
73 return PublishProcesses(
74 [
75 PublishPackages(
76 names=tags,
77 process=InteractiveProcess.from_process(process) if process else None,
78 ),
79 ]
80 )
81
82
83 def rules():
84 return (
85 *collect_rules(),
86 *PublishDockerImageFieldSet.rules(),
87 )
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/python/pants/backend/docker/goals/publish.py b/src/python/pants/backend/docker/goals/publish.py
--- a/src/python/pants/backend/docker/goals/publish.py
+++ b/src/python/pants/backend/docker/goals/publish.py
@@ -19,8 +19,9 @@
PublishProcesses,
PublishRequest,
)
+from pants.engine.environment import Environment, EnvironmentRequest
from pants.engine.process import InteractiveProcess
-from pants.engine.rules import collect_rules, rule
+from pants.engine.rules import Get, collect_rules, rule
logger = logging.getLogger(__name__)
@@ -69,7 +70,8 @@
]
)
- process = docker.push_image(tags)
+ env = await Get(Environment, EnvironmentRequest(options.env_vars))
+ process = docker.push_image(tags, env)
return PublishProcesses(
[
PublishPackages(
diff --git a/src/python/pants/backend/docker/util_rules/docker_binary.py b/src/python/pants/backend/docker/util_rules/docker_binary.py
--- a/src/python/pants/backend/docker/util_rules/docker_binary.py
+++ b/src/python/pants/backend/docker/util_rules/docker_binary.py
@@ -62,7 +62,9 @@
cache_scope=ProcessCacheScope.PER_SESSION,
)
- def push_image(self, tags: tuple[str, ...]) -> Process | None:
+ def push_image(
+ self, tags: tuple[str, ...], env: Mapping[str, str] | None = None
+ ) -> Process | None:
if not tags:
return None
@@ -70,6 +72,7 @@
argv=(self.path, "push", *tags),
cache_scope=ProcessCacheScope.PER_SESSION,
description=f"Pushing docker image {tags[0]}",
+ env=env,
)
| {"golden_diff": "diff --git a/src/python/pants/backend/docker/goals/publish.py b/src/python/pants/backend/docker/goals/publish.py\n--- a/src/python/pants/backend/docker/goals/publish.py\n+++ b/src/python/pants/backend/docker/goals/publish.py\n@@ -19,8 +19,9 @@\n PublishProcesses,\n PublishRequest,\n )\n+from pants.engine.environment import Environment, EnvironmentRequest\n from pants.engine.process import InteractiveProcess\n-from pants.engine.rules import collect_rules, rule\n+from pants.engine.rules import Get, collect_rules, rule\n \n logger = logging.getLogger(__name__)\n \n@@ -69,7 +70,8 @@\n ]\n )\n \n- process = docker.push_image(tags)\n+ env = await Get(Environment, EnvironmentRequest(options.env_vars))\n+ process = docker.push_image(tags, env)\n return PublishProcesses(\n [\n PublishPackages(\ndiff --git a/src/python/pants/backend/docker/util_rules/docker_binary.py b/src/python/pants/backend/docker/util_rules/docker_binary.py\n--- a/src/python/pants/backend/docker/util_rules/docker_binary.py\n+++ b/src/python/pants/backend/docker/util_rules/docker_binary.py\n@@ -62,7 +62,9 @@\n cache_scope=ProcessCacheScope.PER_SESSION,\n )\n \n- def push_image(self, tags: tuple[str, ...]) -> Process | None:\n+ def push_image(\n+ self, tags: tuple[str, ...], env: Mapping[str, str] | None = None\n+ ) -> Process | None:\n if not tags:\n return None\n \n@@ -70,6 +72,7 @@\n argv=(self.path, \"push\", *tags),\n cache_scope=ProcessCacheScope.PER_SESSION,\n description=f\"Pushing docker image {tags[0]}\",\n+ env=env,\n )\n", "issue": "Docker environment not passed to docker publish command\n**Describe the bug**\r\nThe configured `[docker].env_vars` are not passed to `docker publish`.\r\n\r\n**Pants version**\r\n2.9.0.dev1\r\n\r\n**OS**\r\nAny\r\n\r\n**Additional info**\r\nAs reported by chenkai036 on [Slack](https://pantsbuild.slack.com/archives/C046T6T9U/p1637248172462800?thread_ts=1637136003.393600&cid=C046T6T9U)\r\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import Mapping\n\nfrom pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs\nfrom pants.engine.fs import Digest\nfrom pants.engine.process import (\n BinaryNotFoundError,\n BinaryPath,\n BinaryPathRequest,\n BinaryPaths,\n BinaryPathTest,\n Process,\n ProcessCacheScope,\n SearchPath,\n)\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.util.logging import LogLevel\nfrom pants.util.strutil import pluralize\n\n\nclass DockerBinary(BinaryPath):\n \"\"\"The `docker` binary.\"\"\"\n\n DEFAULT_SEARCH_PATH = SearchPath((\"/usr/bin\", \"/bin\", \"/usr/local/bin\"))\n\n def build_image(\n self,\n tags: tuple[str, ...],\n digest: Digest,\n dockerfile: str | None = None,\n build_args: DockerBuildArgs | None = None,\n env: Mapping[str, str] | None = None,\n ) -> Process:\n args = [self.path, \"build\"]\n\n for tag in tags:\n args.extend([\"-t\", tag])\n\n if build_args:\n for build_arg in build_args:\n args.extend([\"--build-arg\", build_arg])\n\n if dockerfile:\n args.extend([\"-f\", dockerfile])\n\n # Add build context root.\n args.append(\".\")\n\n return Process(\n argv=tuple(args),\n description=(\n f\"Building docker image {tags[0]}\"\n + (f\" +{pluralize(len(tags)-1, 'additional tag')}.\" if len(tags) > 1 else \".\")\n ),\n env=env,\n input_digest=digest,\n cache_scope=ProcessCacheScope.PER_SESSION,\n 
)\n\n def push_image(self, tags: tuple[str, ...]) -> Process | None:\n if not tags:\n return None\n\n return Process(\n argv=(self.path, \"push\", *tags),\n cache_scope=ProcessCacheScope.PER_SESSION,\n description=f\"Pushing docker image {tags[0]}\",\n )\n\n\n@dataclass(frozen=True)\nclass DockerBinaryRequest:\n search_path: SearchPath = DockerBinary.DEFAULT_SEARCH_PATH\n\n\n@rule(desc=\"Finding the `docker` binary\", level=LogLevel.DEBUG)\nasync def find_docker(docker_request: DockerBinaryRequest) -> DockerBinary:\n request = BinaryPathRequest(\n binary_name=\"docker\",\n search_path=docker_request.search_path,\n test=BinaryPathTest(args=[\"-v\"]),\n )\n paths = await Get(BinaryPaths, BinaryPathRequest, request)\n first_path = paths.first_path\n if not first_path:\n raise BinaryNotFoundError.from_request(request, rationale=\"interact with the docker daemon\")\n return DockerBinary(first_path.path, first_path.fingerprint)\n\n\n@rule\nasync def get_docker() -> DockerBinary:\n return await Get(DockerBinary, DockerBinaryRequest())\n\n\ndef rules():\n return collect_rules()\n", "path": "src/python/pants/backend/docker/util_rules/docker_binary.py"}, {"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport logging\nfrom dataclasses import dataclass\nfrom itertools import chain\nfrom typing import cast\n\nfrom pants.backend.docker.goals.package_image import BuiltDockerImage\nfrom pants.backend.docker.subsystems.docker_options import DockerOptions\nfrom pants.backend.docker.target_types import DockerRegistriesField, DockerSkipPushField\nfrom pants.backend.docker.util_rules.docker_binary import DockerBinary\nfrom pants.core.goals.publish import (\n PublishFieldSet,\n PublishOutputData,\n PublishPackages,\n PublishProcesses,\n PublishRequest,\n)\nfrom pants.engine.process import InteractiveProcess\nfrom pants.engine.rules import collect_rules, rule\n\nlogger = logging.getLogger(__name__)\n\n\nclass PublishDockerImageRequest(PublishRequest):\n pass\n\n\n@dataclass(frozen=True)\nclass PublishDockerImageFieldSet(PublishFieldSet):\n publish_request_type = PublishDockerImageRequest\n required_fields = (DockerRegistriesField,)\n\n registries: DockerRegistriesField\n skip_push: DockerSkipPushField\n\n def get_output_data(self) -> PublishOutputData:\n return PublishOutputData(\n {\n \"publisher\": \"docker\",\n \"registries\": self.registries.value or (),\n **super().get_output_data(),\n }\n )\n\n\n@rule\nasync def push_docker_images(\n request: PublishDockerImageRequest, docker: DockerBinary, options: DockerOptions\n) -> PublishProcesses:\n tags = tuple(\n chain.from_iterable(\n cast(BuiltDockerImage, image).tags\n for pkg in request.packages\n for image in pkg.artifacts\n )\n )\n\n if request.field_set.skip_push.value:\n return PublishProcesses(\n [\n PublishPackages(\n names=tags,\n description=f\"(by `{request.field_set.skip_push.alias}` on {request.field_set.address})\",\n ),\n ]\n )\n\n process = docker.push_image(tags)\n return PublishProcesses(\n [\n PublishPackages(\n names=tags,\n process=InteractiveProcess.from_process(process) if process else None,\n ),\n ]\n )\n\n\ndef rules():\n return (\n *collect_rules(),\n *PublishDockerImageFieldSet.rules(),\n )\n", "path": "src/python/pants/backend/docker/goals/publish.py"}], "after_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 
(see LICENSE).\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import Mapping\n\nfrom pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs\nfrom pants.engine.fs import Digest\nfrom pants.engine.process import (\n BinaryNotFoundError,\n BinaryPath,\n BinaryPathRequest,\n BinaryPaths,\n BinaryPathTest,\n Process,\n ProcessCacheScope,\n SearchPath,\n)\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.util.logging import LogLevel\nfrom pants.util.strutil import pluralize\n\n\nclass DockerBinary(BinaryPath):\n \"\"\"The `docker` binary.\"\"\"\n\n DEFAULT_SEARCH_PATH = SearchPath((\"/usr/bin\", \"/bin\", \"/usr/local/bin\"))\n\n def build_image(\n self,\n tags: tuple[str, ...],\n digest: Digest,\n dockerfile: str | None = None,\n build_args: DockerBuildArgs | None = None,\n env: Mapping[str, str] | None = None,\n ) -> Process:\n args = [self.path, \"build\"]\n\n for tag in tags:\n args.extend([\"-t\", tag])\n\n if build_args:\n for build_arg in build_args:\n args.extend([\"--build-arg\", build_arg])\n\n if dockerfile:\n args.extend([\"-f\", dockerfile])\n\n # Add build context root.\n args.append(\".\")\n\n return Process(\n argv=tuple(args),\n description=(\n f\"Building docker image {tags[0]}\"\n + (f\" +{pluralize(len(tags)-1, 'additional tag')}.\" if len(tags) > 1 else \".\")\n ),\n env=env,\n input_digest=digest,\n cache_scope=ProcessCacheScope.PER_SESSION,\n )\n\n def push_image(\n self, tags: tuple[str, ...], env: Mapping[str, str] | None = None\n ) -> Process | None:\n if not tags:\n return None\n\n return Process(\n argv=(self.path, \"push\", *tags),\n cache_scope=ProcessCacheScope.PER_SESSION,\n description=f\"Pushing docker image {tags[0]}\",\n env=env,\n )\n\n\n@dataclass(frozen=True)\nclass DockerBinaryRequest:\n search_path: SearchPath = DockerBinary.DEFAULT_SEARCH_PATH\n\n\n@rule(desc=\"Finding the `docker` binary\", level=LogLevel.DEBUG)\nasync def find_docker(docker_request: DockerBinaryRequest) -> DockerBinary:\n request = BinaryPathRequest(\n binary_name=\"docker\",\n search_path=docker_request.search_path,\n test=BinaryPathTest(args=[\"-v\"]),\n )\n paths = await Get(BinaryPaths, BinaryPathRequest, request)\n first_path = paths.first_path\n if not first_path:\n raise BinaryNotFoundError.from_request(request, rationale=\"interact with the docker daemon\")\n return DockerBinary(first_path.path, first_path.fingerprint)\n\n\n@rule\nasync def get_docker() -> DockerBinary:\n return await Get(DockerBinary, DockerBinaryRequest())\n\n\ndef rules():\n return collect_rules()\n", "path": "src/python/pants/backend/docker/util_rules/docker_binary.py"}, {"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport logging\nfrom dataclasses import dataclass\nfrom itertools import chain\nfrom typing import cast\n\nfrom pants.backend.docker.goals.package_image import BuiltDockerImage\nfrom pants.backend.docker.subsystems.docker_options import DockerOptions\nfrom pants.backend.docker.target_types import DockerRegistriesField, DockerSkipPushField\nfrom pants.backend.docker.util_rules.docker_binary import DockerBinary\nfrom pants.core.goals.publish import (\n PublishFieldSet,\n PublishOutputData,\n PublishPackages,\n PublishProcesses,\n PublishRequest,\n)\nfrom pants.engine.environment import Environment, EnvironmentRequest\nfrom pants.engine.process import InteractiveProcess\nfrom 
pants.engine.rules import Get, collect_rules, rule\n\nlogger = logging.getLogger(__name__)\n\n\nclass PublishDockerImageRequest(PublishRequest):\n pass\n\n\n@dataclass(frozen=True)\nclass PublishDockerImageFieldSet(PublishFieldSet):\n publish_request_type = PublishDockerImageRequest\n required_fields = (DockerRegistriesField,)\n\n registries: DockerRegistriesField\n skip_push: DockerSkipPushField\n\n def get_output_data(self) -> PublishOutputData:\n return PublishOutputData(\n {\n \"publisher\": \"docker\",\n \"registries\": self.registries.value or (),\n **super().get_output_data(),\n }\n )\n\n\n@rule\nasync def push_docker_images(\n request: PublishDockerImageRequest, docker: DockerBinary, options: DockerOptions\n) -> PublishProcesses:\n tags = tuple(\n chain.from_iterable(\n cast(BuiltDockerImage, image).tags\n for pkg in request.packages\n for image in pkg.artifacts\n )\n )\n\n if request.field_set.skip_push.value:\n return PublishProcesses(\n [\n PublishPackages(\n names=tags,\n description=f\"(by `{request.field_set.skip_push.alias}` on {request.field_set.address})\",\n ),\n ]\n )\n\n env = await Get(Environment, EnvironmentRequest(options.env_vars))\n process = docker.push_image(tags, env)\n return PublishProcesses(\n [\n PublishPackages(\n names=tags,\n process=InteractiveProcess.from_process(process) if process else None,\n ),\n ]\n )\n\n\ndef rules():\n return (\n *collect_rules(),\n *PublishDockerImageFieldSet.rules(),\n )\n", "path": "src/python/pants/backend/docker/goals/publish.py"}]} | 1,962 | 391 |
gh_patches_debug_10806 | rasdani/github-patches | git_diff | Kinto__kinto-850 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Return 400 if a group contains system.Everyone or a group URL
Unless I'm mistaken:
- We don't support groups for anonymous requests
- We don't support recursion in group definitions
So we should reject with `400` if such group definitions are created.
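A minimal sketch of the validation I have in mind — a colander validator applied to each member entry; the exact rejection rules (`system.Everyone` and anything under `/buckets/`) are my reading of the constraints above:
```python
import colander

def validate_member(node, member):
    # Anonymous principals and nested group/bucket URLs are not
    # supported as group members, so fail validation with a 400.
    if member.startswith('/buckets/') or member == 'system.Everyone':
        raise colander.Invalid(node, "%r is not a valid user ID." % member)

members = colander.SchemaNode(colander.Sequence(),
                              colander.SchemaNode(colander.String(),
                                                  validator=validate_member))
```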
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/views/groups.py`
Content:
```
1 import colander
2
3 from kinto.core import resource, utils
4 from kinto.core.events import ResourceChanged, ACTIONS
5 from pyramid.events import subscriber
6
7
8 class GroupSchema(resource.ResourceSchema):
9 members = colander.SchemaNode(colander.Sequence(),
10 colander.SchemaNode(colander.String()))
11
12
13 @resource.register(name='group',
14 collection_path='/buckets/{{bucket_id}}/groups',
15 record_path='/buckets/{{bucket_id}}/groups/{{id}}')
16 class Group(resource.ShareableResource):
17 mapping = GroupSchema()
18
19 def get_parent_id(self, request):
20 bucket_id = request.matchdict['bucket_id']
21 parent_id = utils.instance_uri(request, 'bucket', id=bucket_id)
22 return parent_id
23
24
25 @subscriber(ResourceChanged,
26 for_resources=('group',),
27 for_actions=(ACTIONS.DELETE,))
28 def on_groups_deleted(event):
29 """Some groups were deleted, remove them from users principals.
30 """
31 permission_backend = event.request.registry.permission
32
33 for change in event.impacted_records:
34 group = change['old']
35 bucket_id = event.payload['bucket_id']
36 group_uri = utils.instance_uri(event.request, 'group',
37 bucket_id=bucket_id,
38 id=group['id'])
39
40 permission_backend.remove_principal(group_uri)
41
42
43 @subscriber(ResourceChanged,
44 for_resources=('group',),
45 for_actions=(ACTIONS.CREATE, ACTIONS.UPDATE))
46 def on_groups_changed(event):
47 """Some groups were changed, update users principals.
48 """
49 permission_backend = event.request.registry.permission
50
51 for change in event.impacted_records:
52 if 'old' in change:
53 existing_record_members = set(change['old'].get('members', []))
54 else:
55 existing_record_members = set()
56
57 group = change['new']
58 group_uri = '/buckets/{bucket_id}/groups/{id}'.format(id=group['id'],
59 **event.payload)
60 new_record_members = set(group.get('members', []))
61 new_members = new_record_members - existing_record_members
62 removed_members = existing_record_members - new_record_members
63
64 for member in new_members:
65 # Add the group to the member principal.
66 permission_backend.add_user_principal(member, group_uri)
67
68 for member in removed_members:
69 # Remove the group from the member principal.
70 permission_backend.remove_user_principal(member, group_uri)
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kinto/views/groups.py b/kinto/views/groups.py
--- a/kinto/views/groups.py
+++ b/kinto/views/groups.py
@@ -5,9 +5,15 @@
from pyramid.events import subscriber
+def validate_member(node, member):
+ if member.startswith('/buckets/') or member == 'system.Everyone':
+ raise colander.Invalid(node, "%r is not a valid user ID." % member)
+
+
class GroupSchema(resource.ResourceSchema):
members = colander.SchemaNode(colander.Sequence(),
- colander.SchemaNode(colander.String()))
+ colander.SchemaNode(colander.String(),
+ validator=validate_member))
@resource.register(name='group',
| {"golden_diff": "diff --git a/kinto/views/groups.py b/kinto/views/groups.py\n--- a/kinto/views/groups.py\n+++ b/kinto/views/groups.py\n@@ -5,9 +5,15 @@\n from pyramid.events import subscriber\n \n \n+def validate_member(node, member):\n+ if member.startswith('/buckets/') or member == 'system.Everyone':\n+ raise colander.Invalid(node, \"%r is not a valid user ID.\" % member)\n+\n+\n class GroupSchema(resource.ResourceSchema):\n members = colander.SchemaNode(colander.Sequence(),\n- colander.SchemaNode(colander.String()))\n+ colander.SchemaNode(colander.String(),\n+ validator=validate_member))\n \n \n @resource.register(name='group',\n", "issue": "Return 400 if a group contains system.Everyone or a group URL\nUnless I'm mistaken:\n- We don't support groups for anonymous requests\n- We don't support recursivity in groups definitions\n\nSo we should reject with `400` if such groups definitons are created\n\n", "before_files": [{"content": "import colander\n\nfrom kinto.core import resource, utils\nfrom kinto.core.events import ResourceChanged, ACTIONS\nfrom pyramid.events import subscriber\n\n\nclass GroupSchema(resource.ResourceSchema):\n members = colander.SchemaNode(colander.Sequence(),\n colander.SchemaNode(colander.String()))\n\n\[email protected](name='group',\n collection_path='/buckets/{{bucket_id}}/groups',\n record_path='/buckets/{{bucket_id}}/groups/{{id}}')\nclass Group(resource.ShareableResource):\n mapping = GroupSchema()\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = utils.instance_uri(request, 'bucket', id=bucket_id)\n return parent_id\n\n\n@subscriber(ResourceChanged,\n for_resources=('group',),\n for_actions=(ACTIONS.DELETE,))\ndef on_groups_deleted(event):\n \"\"\"Some groups were deleted, remove them from users principals.\n \"\"\"\n permission_backend = event.request.registry.permission\n\n for change in event.impacted_records:\n group = change['old']\n bucket_id = event.payload['bucket_id']\n group_uri = utils.instance_uri(event.request, 'group',\n bucket_id=bucket_id,\n id=group['id'])\n\n permission_backend.remove_principal(group_uri)\n\n\n@subscriber(ResourceChanged,\n for_resources=('group',),\n for_actions=(ACTIONS.CREATE, ACTIONS.UPDATE))\ndef on_groups_changed(event):\n \"\"\"Some groups were changed, update users principals.\n \"\"\"\n permission_backend = event.request.registry.permission\n\n for change in event.impacted_records:\n if 'old' in change:\n existing_record_members = set(change['old'].get('members', []))\n else:\n existing_record_members = set()\n\n group = change['new']\n group_uri = '/buckets/{bucket_id}/groups/{id}'.format(id=group['id'],\n **event.payload)\n new_record_members = set(group.get('members', []))\n new_members = new_record_members - existing_record_members\n removed_members = existing_record_members - new_record_members\n\n for member in new_members:\n # Add the group to the member principal.\n permission_backend.add_user_principal(member, group_uri)\n\n for member in removed_members:\n # Remove the group from the member principal.\n permission_backend.remove_user_principal(member, group_uri)\n", "path": "kinto/views/groups.py"}], "after_files": [{"content": "import colander\n\nfrom kinto.core import resource, utils\nfrom kinto.core.events import ResourceChanged, ACTIONS\nfrom pyramid.events import subscriber\n\n\ndef validate_member(node, member):\n if member.startswith('/buckets/') or member == 'system.Everyone':\n raise colander.Invalid(node, \"%r is not a valid user ID.\" % 
member)\n\n\nclass GroupSchema(resource.ResourceSchema):\n members = colander.SchemaNode(colander.Sequence(),\n colander.SchemaNode(colander.String(),\n validator=validate_member))\n\n\[email protected](name='group',\n collection_path='/buckets/{{bucket_id}}/groups',\n record_path='/buckets/{{bucket_id}}/groups/{{id}}')\nclass Group(resource.ShareableResource):\n mapping = GroupSchema()\n\n def get_parent_id(self, request):\n bucket_id = request.matchdict['bucket_id']\n parent_id = utils.instance_uri(request, 'bucket', id=bucket_id)\n return parent_id\n\n\n@subscriber(ResourceChanged,\n for_resources=('group',),\n for_actions=(ACTIONS.DELETE,))\ndef on_groups_deleted(event):\n \"\"\"Some groups were deleted, remove them from users principals.\n \"\"\"\n permission_backend = event.request.registry.permission\n\n for change in event.impacted_records:\n group = change['old']\n bucket_id = event.payload['bucket_id']\n group_uri = utils.instance_uri(event.request, 'group',\n bucket_id=bucket_id,\n id=group['id'])\n\n permission_backend.remove_principal(group_uri)\n\n\n@subscriber(ResourceChanged,\n for_resources=('group',),\n for_actions=(ACTIONS.CREATE, ACTIONS.UPDATE))\ndef on_groups_changed(event):\n \"\"\"Some groups were changed, update users principals.\n \"\"\"\n permission_backend = event.request.registry.permission\n\n for change in event.impacted_records:\n if 'old' in change:\n existing_record_members = set(change['old'].get('members', []))\n else:\n existing_record_members = set()\n\n group = change['new']\n group_uri = '/buckets/{bucket_id}/groups/{id}'.format(id=group['id'],\n **event.payload)\n new_record_members = set(group.get('members', []))\n new_members = new_record_members - existing_record_members\n removed_members = existing_record_members - new_record_members\n\n for member in new_members:\n # Add the group to the member principal.\n permission_backend.add_user_principal(member, group_uri)\n\n for member in removed_members:\n # Remove the group from the member principal.\n permission_backend.remove_user_principal(member, group_uri)\n", "path": "kinto/views/groups.py"}]} | 948 | 150 |
gh_patches_debug_16119 | rasdani/github-patches | git_diff | conan-io__conan-center-index-549 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] make/4.2.1: not building for Linux Clang 9
The recipe for `make/4.2.1` does not build under Linux Clang 9.
After generating all the index packages for Clang 9, the ones for this library failed to compile. In this case it doesn't matter that much as this is packaging a tool to be used as a build requirement.
Related to #211
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **make/4.2.1**
* Operating System+version: **Linux Ubuntu 18.04**
* Compiler+version: **Clang 9**
* Conan version: **conan 1.21.0**
* Python version: **Python 3.7.4**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/make/all/conanfile.py`
Content:
```
1 from conans import ConanFile, tools, AutoToolsBuildEnvironment
2 import os
3
4
5 class MakeConan(ConanFile):
6 name = "make"
7 description = "GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files"
8 topics = ("conan", "make", "build", "makefile")
9 url = "https://github.com/conan-io/conan-center-index"
10 homepage = "https://www.gnu.org/software/make/"
11 license = "GPL-3.0-or-later"
12 settings = "os_build", "arch_build", "compiler"
13 _source_subfolder = "source_subfolder"
14
15 def source(self):
16 tools.get(**self.conan_data["sources"][self.version])
17 extracted_dir = "make-" + self.version
18 os.rename(extracted_dir, self._source_subfolder)
19
20 def configure(self):
21 del self.settings.compiler.libcxx
22 del self.settings.compiler.cppstd
23
24 def build(self):
25 with tools.chdir(self._source_subfolder):
26 # README.W32
27 if self.settings.os_build == "Windows":
28 if self.settings.compiler == "Visual Studio":
29 command = "build_w32.bat --without-guile"
30 else:
31 command = "build_w32.bat --without-guile gcc"
32 else:
33 env_build = AutoToolsBuildEnvironment(self)
34 env_build.configure()
35 command = "./build.sh"
36 with tools.vcvars(self.settings) if self.settings.compiler == "Visual Studio" else tools.no_op():
37 self.run(command)
38
39 def package(self):
40 self.copy(pattern="COPYING", dst="licenses", src=self._source_subfolder)
41 self.copy(pattern="make", dst="bin", src=self._source_subfolder, keep_path=False)
42 self.copy(pattern="*gnumake.exe", dst="bin", src=self._source_subfolder, keep_path=False)
43
44 def package_info(self):
45 make = "gnumake.exe" if self.settings.os_build == "Windows" else "make"
46 make = os.path.join(self.package_folder, "bin", make)
47 self.output.info('Creating CONAN_MAKE_PROGRAM environment variable: %s' % make)
48 self.env_info.CONAN_MAKE_PROGRAM = make
49
50 def package_id(self):
51 del self.info.settings.compiler
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/make/all/conanfile.py b/recipes/make/all/conanfile.py
--- a/recipes/make/all/conanfile.py
+++ b/recipes/make/all/conanfile.py
@@ -10,6 +10,7 @@
homepage = "https://www.gnu.org/software/make/"
license = "GPL-3.0-or-later"
settings = "os_build", "arch_build", "compiler"
+ exports_sources = ["patches/*"]
_source_subfolder = "source_subfolder"
def source(self):
@@ -22,6 +23,9 @@
del self.settings.compiler.cppstd
def build(self):
+ for patch in self.conan_data["patches"][self.version]:
+ tools.patch(**patch)
+
with tools.chdir(self._source_subfolder):
# README.W32
if self.settings.os_build == "Windows":
| {"golden_diff": "diff --git a/recipes/make/all/conanfile.py b/recipes/make/all/conanfile.py\n--- a/recipes/make/all/conanfile.py\n+++ b/recipes/make/all/conanfile.py\n@@ -10,6 +10,7 @@\n homepage = \"https://www.gnu.org/software/make/\"\n license = \"GPL-3.0-or-later\"\n settings = \"os_build\", \"arch_build\", \"compiler\"\n+ exports_sources = [\"patches/*\"]\n _source_subfolder = \"source_subfolder\"\n \n def source(self):\n@@ -22,6 +23,9 @@\n del self.settings.compiler.cppstd\n \n def build(self):\n+ for patch in self.conan_data[\"patches\"][self.version]:\n+ tools.patch(**patch)\n+\n with tools.chdir(self._source_subfolder):\n # README.W32\n if self.settings.os_build == \"Windows\":\n", "issue": "[package] make/4.2.1: not building for Linux Clang 9\nThe recipe for `make/4.2.1` does not build under Linux Clang 9.\r\n\r\nAfter generating all the index packages for Clang 9, the ones for this library failed to compile. In this case it doesn't matter that much as this is packaging a tool to be used as a build requirement.\r\n\r\nRelated to #211 \r\n\r\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **make/4.2.1**\r\n * Operating System+version: **Linux Ubuntu 18.04**\r\n * Compiler+version: **Clang 9**\r\n * Conan version: **conan 1.21.0**\r\n * Python version: **Python 3.7.4**\n", "before_files": [{"content": "from conans import ConanFile, tools, AutoToolsBuildEnvironment\nimport os\n\n\nclass MakeConan(ConanFile):\n name = \"make\"\n description = \"GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files\"\n topics = (\"conan\", \"make\", \"build\", \"makefile\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://www.gnu.org/software/make/\"\n license = \"GPL-3.0-or-later\"\n settings = \"os_build\", \"arch_build\", \"compiler\"\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"make-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def build(self):\n with tools.chdir(self._source_subfolder):\n # README.W32\n if self.settings.os_build == \"Windows\":\n if self.settings.compiler == \"Visual Studio\":\n command = \"build_w32.bat --without-guile\"\n else:\n command = \"build_w32.bat --without-guile gcc\"\n else:\n env_build = AutoToolsBuildEnvironment(self)\n env_build.configure()\n command = \"./build.sh\"\n with tools.vcvars(self.settings) if self.settings.compiler == \"Visual Studio\" else tools.no_op():\n self.run(command)\n\n def package(self):\n self.copy(pattern=\"COPYING\", dst=\"licenses\", src=self._source_subfolder)\n self.copy(pattern=\"make\", dst=\"bin\", src=self._source_subfolder, keep_path=False)\n self.copy(pattern=\"*gnumake.exe\", dst=\"bin\", src=self._source_subfolder, keep_path=False)\n\n def package_info(self):\n make = \"gnumake.exe\" if self.settings.os_build == \"Windows\" else \"make\"\n make = os.path.join(self.package_folder, \"bin\", make)\n self.output.info('Creating CONAN_MAKE_PROGRAM environment variable: %s' % make)\n self.env_info.CONAN_MAKE_PROGRAM = make\n\n def package_id(self):\n del self.info.settings.compiler\n", "path": "recipes/make/all/conanfile.py"}], "after_files": [{"content": "from conans import ConanFile, tools, AutoToolsBuildEnvironment\nimport os\n\n\nclass 
MakeConan(ConanFile):\n name = \"make\"\n description = \"GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files\"\n topics = (\"conan\", \"make\", \"build\", \"makefile\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://www.gnu.org/software/make/\"\n license = \"GPL-3.0-or-later\"\n settings = \"os_build\", \"arch_build\", \"compiler\"\n exports_sources = [\"patches/*\"]\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"make-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def build(self):\n for patch in self.conan_data[\"patches\"][self.version]:\n tools.patch(**patch)\n\n with tools.chdir(self._source_subfolder):\n # README.W32\n if self.settings.os_build == \"Windows\":\n if self.settings.compiler == \"Visual Studio\":\n command = \"build_w32.bat --without-guile\"\n else:\n command = \"build_w32.bat --without-guile gcc\"\n else:\n env_build = AutoToolsBuildEnvironment(self)\n env_build.configure()\n command = \"./build.sh\"\n with tools.vcvars(self.settings) if self.settings.compiler == \"Visual Studio\" else tools.no_op():\n self.run(command)\n\n def package(self):\n self.copy(pattern=\"COPYING\", dst=\"licenses\", src=self._source_subfolder)\n self.copy(pattern=\"make\", dst=\"bin\", src=self._source_subfolder, keep_path=False)\n self.copy(pattern=\"*gnumake.exe\", dst=\"bin\", src=self._source_subfolder, keep_path=False)\n\n def package_info(self):\n make = \"gnumake.exe\" if self.settings.os_build == \"Windows\" else \"make\"\n make = os.path.join(self.package_folder, \"bin\", make)\n self.output.info('Creating CONAN_MAKE_PROGRAM environment variable: %s' % make)\n self.env_info.CONAN_MAKE_PROGRAM = make\n\n def package_id(self):\n del self.info.settings.compiler\n", "path": "recipes/make/all/conanfile.py"}]} | 1,042 | 201 |
gh_patches_debug_25779 | rasdani/github-patches | git_diff | weecology__retriever-1004 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update FIA links
It's that time of year again where FIA puts out a new release and moves things around. See https://github.com/weecology/retriever/issues/865#issuecomment-325588307
We need to track down the most recent links and update the script.
Thanks to @nestorperez for reporting this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/forest_inventory_analysis.py`
Content:
```
1 #retriever
2 """Retriever script for Forest Inventory and Analysis
3
4 """
5 from __future__ import print_function
6 from future import standard_library
7 standard_library.install_aliases()
8
9 import os
10
11 from retriever.lib.templates import Script
12 from retriever.lib.models import Table
13 from retriever import open_fr, open_fw, VERSION
14 from pkg_resources import parse_version
15
16
17 class main(Script):
18 def __init__(self, **kwargs):
19 Script.__init__(self, **kwargs)
20 self.title = "Forest Inventory and Analysis"
21 self.name = "forest-inventory-analysis"
22 self.retriever_minimum_version = '2.0.dev'
23 self.version = '1.4.0'
24 self.ref = "http://fia.fs.fed.us/"
25 self.urls = {"main": "https://apps.fs.usda.gov/fiadb-downloads/CSV/",
26 'species': 'https://apps.fs.usda.gov/fiadb-downloads/CSV/REF_SPECIES.csv'}
27 self.keywords = ["plants", "continental-scale", "observational"]
28 self.citation = "DATEOFDOWNLOAD. Forest Inventory and Analysis Database, St. Paul, MN: U.S. Department of Agriculture, Forest Service, Northern Research Station. [Available only on internet: http://apps.fs.fed.us/fiadb-downloads/datamart.html]"
29 self.description = """WARNING: This dataset requires downloading many large files and will probably take several hours to finish installing."""
30 self.addendum = """This dataset requires downloading many large files - please be patient."""
31
32 if parse_version(VERSION) <= parse_version("2.0.0"):
33 self.shortname = self.name
34 self.name = self.title
35 self.tags = self.keywords
36
37 def download(self, engine=None, debug=False):
38 Script.download(self, engine, debug)
39 engine = self.engine
40
41 # download and create species table
42 table = Table('species')
43 self.engine.auto_create_table(table, url=self.urls['species'])
44 self.engine.insert_data_from_url(self.urls['species'])
45
46 # State abbreviations with the year annual inventory began for that state
47 stateslist = [('AL', 2001), ('AK', 2004), ('AZ', 2001), ('AR', 2000),
48 ('CA', 2001), ('CO', 2002), ('CT', 2003), ('DE', 2004),
49 ('FL', 2003), ('GA', 1998), ('ID', 2004), ('IL', 2001),
50 ('IN', 1999), ('IA', 1999), ('KS', 2001), ('KY', 1999),
51 ('LA', 2001), ('ME', 1999), ('MD', 2004), ('MA', 2003),
52 ('MI', 2000), ('MN', 1999), ('MO', 1999), ('MS', 2006),
53 ('MT', 2003), ('NE', 2001), ('NV', 2004), ('NH', 2002),
54 ('NJ', 2004), ('NM', 1999), ('NY', 2002), ('NC', 2003),
55 ('ND', 2001), ('OH', 2001), ('OK', 2008), ('OR', 2001),
56 ('PA', 2000), ('RI', 2003), ('SC', 1999), ('SD', 2001),
57 ('TN', 2000), ('TX', 2001), ('UT', 2000), ('VT', 2003),
58 ('VA', 1998), ('WA', 2002), ('WV', 2004), ('WI', 2000),
59 ('WY', 2000), ('PR', 2001)]
60
61 tablelist = ["SURVEY", "PLOT", "COND", "SUBPLOT", "SUBP_COND", "TREE", "SEEDLING"]
62
63 for table in tablelist:
64 for state, year in stateslist:
65 engine.download_files_from_archive(self.urls["main"] + state + "_" + table + ".ZIP",
66 [state + "_" + table + ".csv"])
67
68 for table in tablelist:
69 print("Scanning data for table %s..." % table)
70 prep_file_name = "%s.csv" % table
71 prep_file = open_fw(engine.format_filename(prep_file_name))
72 this_file = open_fr(engine.format_filename(stateslist[0][0] + "_" + table + ".csv"))
73 col_names = this_file.readline()
74 prep_file.write(col_names)
75 column_names = [col.strip('"') for col in col_names.split(',')]
76 year_column = column_names.index("INVYR")
77 this_file.close()
78
79 for state, year in stateslist:
80 this_file = open_fr(engine.format_filename(state + "_" + table + ".csv"))
81 this_file.readline()
82 for line in this_file:
83 values = line.split(',')
84 this_year = values[year_column]
85 if int(this_year) >= year:
86 prep_file.write(line)
87 prep_file.close()
88 engine.auto_create_table(Table(table), filename=prep_file_name)
89
90 engine.insert_data_from_file(engine.format_filename(prep_file_name))
91
92 try:
93 os.remove(engine.format_filename(prep_file_name))
94 except:
95 pass
96
97 return engine
98
99
100 SCRIPT = main()
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/forest_inventory_analysis.py b/scripts/forest_inventory_analysis.py
--- a/scripts/forest_inventory_analysis.py
+++ b/scripts/forest_inventory_analysis.py
@@ -20,10 +20,10 @@
self.title = "Forest Inventory and Analysis"
self.name = "forest-inventory-analysis"
self.retriever_minimum_version = '2.0.dev'
- self.version = '1.4.0'
+ self.version = '1.4.1'
self.ref = "http://fia.fs.fed.us/"
- self.urls = {"main": "https://apps.fs.usda.gov/fiadb-downloads/CSV/",
- 'species': 'https://apps.fs.usda.gov/fiadb-downloads/CSV/REF_SPECIES.csv'}
+ self.urls = {"main": "https://apps.fs.usda.gov/fia/datamart/CSV/",
+ 'species': 'https://apps.fs.usda.gov/fia/datamart/CSV/REF_SPECIES.csv'}
self.keywords = ["plants", "continental-scale", "observational"]
self.citation = "DATEOFDOWNLOAD. Forest Inventory and Analysis Database, St. Paul, MN: U.S. Department of Agriculture, Forest Service, Northern Research Station. [Available only on internet: http://apps.fs.fed.us/fiadb-downloads/datamart.html]"
self.description = """WARNING: This dataset requires downloading many large files and will probably take several hours to finish installing."""
| {"golden_diff": "diff --git a/scripts/forest_inventory_analysis.py b/scripts/forest_inventory_analysis.py\n--- a/scripts/forest_inventory_analysis.py\n+++ b/scripts/forest_inventory_analysis.py\n@@ -20,10 +20,10 @@\n self.title = \"Forest Inventory and Analysis\"\n self.name = \"forest-inventory-analysis\"\n self.retriever_minimum_version = '2.0.dev'\n- self.version = '1.4.0'\n+ self.version = '1.4.1'\n self.ref = \"http://fia.fs.fed.us/\"\n- self.urls = {\"main\": \"https://apps.fs.usda.gov/fiadb-downloads/CSV/\",\n- 'species': 'https://apps.fs.usda.gov/fiadb-downloads/CSV/REF_SPECIES.csv'}\n+ self.urls = {\"main\": \"https://apps.fs.usda.gov/fia/datamart/CSV/\",\n+ 'species': 'https://apps.fs.usda.gov/fia/datamart/CSV/REF_SPECIES.csv'}\n self.keywords = [\"plants\", \"continental-scale\", \"observational\"]\n self.citation = \"DATEOFDOWNLOAD. Forest Inventory and Analysis Database, St. Paul, MN: U.S. Department of Agriculture, Forest Service, Northern Research Station. [Available only on internet: http://apps.fs.fed.us/fiadb-downloads/datamart.html]\"\n self.description = \"\"\"WARNING: This dataset requires downloading many large files and will probably take several hours to finish installing.\"\"\"\n", "issue": "Update FIA links\nIt's that time of year again where FIA puts out a new release and moves things around. See https://github.com/weecology/retriever/issues/865#issuecomment-325588307\r\n\r\nWe need to track down the most recent links and update the script.\r\n\r\nThanks to @nestorperez for reporting this.\n", "before_files": [{"content": "#retriever\n\"\"\"Retriever script for Forest Inventory and Analysis\n\n\"\"\"\nfrom __future__ import print_function\nfrom future import standard_library\nstandard_library.install_aliases()\n\nimport os\n\nfrom retriever.lib.templates import Script\nfrom retriever.lib.models import Table\nfrom retriever import open_fr, open_fw, VERSION\nfrom pkg_resources import parse_version\n\n\nclass main(Script):\n def __init__(self, **kwargs):\n Script.__init__(self, **kwargs)\n self.title = \"Forest Inventory and Analysis\"\n self.name = \"forest-inventory-analysis\"\n self.retriever_minimum_version = '2.0.dev'\n self.version = '1.4.0'\n self.ref = \"http://fia.fs.fed.us/\"\n self.urls = {\"main\": \"https://apps.fs.usda.gov/fiadb-downloads/CSV/\",\n 'species': 'https://apps.fs.usda.gov/fiadb-downloads/CSV/REF_SPECIES.csv'}\n self.keywords = [\"plants\", \"continental-scale\", \"observational\"]\n self.citation = \"DATEOFDOWNLOAD. Forest Inventory and Analysis Database, St. Paul, MN: U.S. Department of Agriculture, Forest Service, Northern Research Station. 
[Available only on internet: http://apps.fs.fed.us/fiadb-downloads/datamart.html]\"\n self.description = \"\"\"WARNING: This dataset requires downloading many large files and will probably take several hours to finish installing.\"\"\"\n self.addendum = \"\"\"This dataset requires downloading many large files - please be patient.\"\"\"\n \n if parse_version(VERSION) <= parse_version(\"2.0.0\"):\n self.shortname = self.name\n self.name = self.title\n self.tags = self.keywords\n\n def download(self, engine=None, debug=False):\n Script.download(self, engine, debug)\n engine = self.engine\n\n # download and create species table\n table = Table('species')\n self.engine.auto_create_table(table, url=self.urls['species'])\n self.engine.insert_data_from_url(self.urls['species'])\n\n # State abbreviations with the year annual inventory began for that state\n stateslist = [('AL', 2001), ('AK', 2004), ('AZ', 2001), ('AR', 2000),\n ('CA', 2001), ('CO', 2002), ('CT', 2003), ('DE', 2004),\n ('FL', 2003), ('GA', 1998), ('ID', 2004), ('IL', 2001),\n ('IN', 1999), ('IA', 1999), ('KS', 2001), ('KY', 1999),\n ('LA', 2001), ('ME', 1999), ('MD', 2004), ('MA', 2003),\n ('MI', 2000), ('MN', 1999), ('MO', 1999), ('MS', 2006),\n ('MT', 2003), ('NE', 2001), ('NV', 2004), ('NH', 2002),\n ('NJ', 2004), ('NM', 1999), ('NY', 2002), ('NC', 2003),\n ('ND', 2001), ('OH', 2001), ('OK', 2008), ('OR', 2001),\n ('PA', 2000), ('RI', 2003), ('SC', 1999), ('SD', 2001),\n ('TN', 2000), ('TX', 2001), ('UT', 2000), ('VT', 2003),\n ('VA', 1998), ('WA', 2002), ('WV', 2004), ('WI', 2000),\n ('WY', 2000), ('PR', 2001)]\n\n tablelist = [\"SURVEY\", \"PLOT\", \"COND\", \"SUBPLOT\", \"SUBP_COND\", \"TREE\", \"SEEDLING\"]\n\n for table in tablelist:\n for state, year in stateslist:\n engine.download_files_from_archive(self.urls[\"main\"] + state + \"_\" + table + \".ZIP\",\n [state + \"_\" + table + \".csv\"])\n\n for table in tablelist:\n print(\"Scanning data for table %s...\" % table)\n prep_file_name = \"%s.csv\" % table\n prep_file = open_fw(engine.format_filename(prep_file_name))\n this_file = open_fr(engine.format_filename(stateslist[0][0] + \"_\" + table + \".csv\"))\n col_names = this_file.readline()\n prep_file.write(col_names)\n column_names = [col.strip('\"') for col in col_names.split(',')]\n year_column = column_names.index(\"INVYR\")\n this_file.close()\n\n for state, year in stateslist:\n this_file = open_fr(engine.format_filename(state + \"_\" + table + \".csv\"))\n this_file.readline()\n for line in this_file:\n values = line.split(',')\n this_year = values[year_column]\n if int(this_year) >= year:\n prep_file.write(line)\n prep_file.close()\n engine.auto_create_table(Table(table), filename=prep_file_name)\n\n engine.insert_data_from_file(engine.format_filename(prep_file_name))\n\n try:\n os.remove(engine.format_filename(prep_file_name))\n except:\n pass\n\n return engine\n\n\nSCRIPT = main()\n", "path": "scripts/forest_inventory_analysis.py"}], "after_files": [{"content": "#retriever\n\"\"\"Retriever script for Forest Inventory and Analysis\n\n\"\"\"\nfrom __future__ import print_function\nfrom future import standard_library\nstandard_library.install_aliases()\n\nimport os\n\nfrom retriever.lib.templates import Script\nfrom retriever.lib.models import Table\nfrom retriever import open_fr, open_fw, VERSION\nfrom pkg_resources import parse_version\n\n\nclass main(Script):\n def __init__(self, **kwargs):\n Script.__init__(self, **kwargs)\n self.title = \"Forest Inventory and Analysis\"\n self.name = 
\"forest-inventory-analysis\"\n self.retriever_minimum_version = '2.0.dev'\n self.version = '1.4.1'\n self.ref = \"http://fia.fs.fed.us/\"\n self.urls = {\"main\": \"https://apps.fs.usda.gov/fia/datamart/CSV/\",\n 'species': 'https://apps.fs.usda.gov/fia/datamart/CSV/REF_SPECIES.csv'}\n self.keywords = [\"plants\", \"continental-scale\", \"observational\"]\n self.citation = \"DATEOFDOWNLOAD. Forest Inventory and Analysis Database, St. Paul, MN: U.S. Department of Agriculture, Forest Service, Northern Research Station. [Available only on internet: http://apps.fs.fed.us/fiadb-downloads/datamart.html]\"\n self.description = \"\"\"WARNING: This dataset requires downloading many large files and will probably take several hours to finish installing.\"\"\"\n self.addendum = \"\"\"This dataset requires downloading many large files - please be patient.\"\"\"\n \n if parse_version(VERSION) <= parse_version(\"2.0.0\"):\n self.shortname = self.name\n self.name = self.title\n self.tags = self.keywords\n\n def download(self, engine=None, debug=False):\n Script.download(self, engine, debug)\n engine = self.engine\n\n # download and create species table\n table = Table('species')\n self.engine.auto_create_table(table, url=self.urls['species'])\n self.engine.insert_data_from_url(self.urls['species'])\n\n # State abbreviations with the year annual inventory began for that state\n stateslist = [('AL', 2001), ('AK', 2004), ('AZ', 2001), ('AR', 2000),\n ('CA', 2001), ('CO', 2002), ('CT', 2003), ('DE', 2004),\n ('FL', 2003), ('GA', 1998), ('ID', 2004), ('IL', 2001),\n ('IN', 1999), ('IA', 1999), ('KS', 2001), ('KY', 1999),\n ('LA', 2001), ('ME', 1999), ('MD', 2004), ('MA', 2003),\n ('MI', 2000), ('MN', 1999), ('MO', 1999), ('MS', 2006),\n ('MT', 2003), ('NE', 2001), ('NV', 2004), ('NH', 2002),\n ('NJ', 2004), ('NM', 1999), ('NY', 2002), ('NC', 2003),\n ('ND', 2001), ('OH', 2001), ('OK', 2008), ('OR', 2001),\n ('PA', 2000), ('RI', 2003), ('SC', 1999), ('SD', 2001),\n ('TN', 2000), ('TX', 2001), ('UT', 2000), ('VT', 2003),\n ('VA', 1998), ('WA', 2002), ('WV', 2004), ('WI', 2000),\n ('WY', 2000), ('PR', 2001)]\n\n tablelist = [\"SURVEY\", \"PLOT\", \"COND\", \"SUBPLOT\", \"SUBP_COND\", \"TREE\", \"SEEDLING\"]\n\n for table in tablelist:\n for state, year in stateslist:\n engine.download_files_from_archive(self.urls[\"main\"] + state + \"_\" + table + \".ZIP\",\n [state + \"_\" + table + \".csv\"])\n\n for table in tablelist:\n print(\"Scanning data for table %s...\" % table)\n prep_file_name = \"%s.csv\" % table\n prep_file = open_fw(engine.format_filename(prep_file_name))\n this_file = open_fr(engine.format_filename(stateslist[0][0] + \"_\" + table + \".csv\"))\n col_names = this_file.readline()\n prep_file.write(col_names)\n column_names = [col.strip('\"') for col in col_names.split(',')]\n year_column = column_names.index(\"INVYR\")\n this_file.close()\n\n for state, year in stateslist:\n this_file = open_fr(engine.format_filename(state + \"_\" + table + \".csv\"))\n this_file.readline()\n for line in this_file:\n values = line.split(',')\n this_year = values[year_column]\n if int(this_year) >= year:\n prep_file.write(line)\n prep_file.close()\n engine.auto_create_table(Table(table), filename=prep_file_name)\n\n engine.insert_data_from_file(engine.format_filename(prep_file_name))\n\n try:\n os.remove(engine.format_filename(prep_file_name))\n except:\n pass\n\n return engine\n\n\nSCRIPT = main()\n", "path": "scripts/forest_inventory_analysis.py"}]} | 1,795 | 320 |
gh_patches_debug_8852 | rasdani/github-patches | git_diff | pwndbg__pwndbg-363 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Broken `entry` command
### Description
The `entry` command pass arguments differently then the `run` command.
### Steps to reproduce
```
[dc@dc:pwndbg|dev *$%]$ gdb python
Loaded 113 commands. Type pwndbg [filter] for a list.
Reading symbols from python...(no debugging symbols found)...done.
pwndbg> set exception-verbose on
Set whether to print a full stacktracefor exceptions raised in Pwndbg commands to True
pwndbg> run -c "print(1); print(2)"
Starting program: /usr/bin/python -c "print(1); print(2)"
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
1
2
[Inferior 1 (process 20590) exited normally]
pwndbg> entry -c "print(1); print(2)"
('-c', 'print(1); print(2)')
Running '%s' run -c print(1); print(2)
/bin/bash: -c: line 0: syntax error near unexpected token `('
/bin/bash: -c: line 0: `exec /usr/bin/python -c print(1); print(2)'
Traceback (most recent call last):
File "/home/dc/installed/pwndbg/pwndbg/commands/__init__.py", line 100, in __call__
return self.function(*args, **kwargs)
File "/home/dc/installed/pwndbg/pwndbg/commands/__init__.py", line 181, in _OnlyWithFile
return function(*a, **kw)
File "/home/dc/installed/pwndbg/pwndbg/commands/start.py", line 72, in entry
gdb.execute(run, from_tty=False)
gdb.error: During startup program exited with code 1.
If that is an issue, you can report it on https://github.com/pwndbg/pwndbg/issues
(Please don't forget to search if it hasn't been reported before)
PS: Pull requests are welcome
```
### My version
```
pwndbg> version
Gdb: GNU gdb (GDB) 8.0.1
Python: 3.6.3 (default, Oct 24 2017, 14:48:20) [GCC 7.2.0]
Pwndbg: 1.0.0 build: 5811010
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/commands/start.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 Launches the target process after setting a breakpoint at a convenient
5 entry point.
6 """
7 from __future__ import absolute_import
8 from __future__ import division
9 from __future__ import print_function
10 from __future__ import unicode_literals
11
12 import gdb
13
14 import pwndbg.commands
15 import pwndbg.elf
16 import pwndbg.events
17 import pwndbg.symbol
18
19 break_on_first_instruction = False
20
21
22 @pwndbg.events.start
23 def on_start():
24 global break_on_first_instruction
25 if break_on_first_instruction:
26 spec = "*%#x" % (int(pwndbg.elf.entry()))
27 gdb.Breakpoint(spec, temporary=True)
28 break_on_first_instruction = False
29
30
31 @pwndbg.commands.Command
32 def start(*a):
33 """
34 Set a breakpoint at a convenient location in the binary,
35 generally 'main', 'init', or the entry point.
36 """
37 run = 'run ' + ' '.join(a)
38
39 symbols = ["main",
40 "_main",
41 "start",
42 "_start",
43 "init",
44 "_init"]
45
46 for symbol in symbols:
47 address = pwndbg.symbol.address(symbol)
48
49 if not address:
50 continue
51
52 b = gdb.Breakpoint(symbol, temporary=True)
53 gdb.execute(run, from_tty=False, to_string=True)
54 return
55
56 # Try a breakpoint at the binary entry
57 entry(*a)
58
59
60 @pwndbg.commands.Command
61 @pwndbg.commands.OnlyWithFile
62 def entry(*a):
63 """
64 Set a breakpoint at the first instruction executed in
65 the target binary.
66 """
67 global break_on_first_instruction
68 break_on_first_instruction = True
69 run = 'run ' + ' '.join(a)
70 gdb.execute(run, from_tty=False)
71
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwndbg/commands/start.py b/pwndbg/commands/start.py
--- a/pwndbg/commands/start.py
+++ b/pwndbg/commands/start.py
@@ -9,6 +9,8 @@
from __future__ import print_function
from __future__ import unicode_literals
+import shlex
+
import gdb
import pwndbg.commands
@@ -66,5 +68,5 @@
"""
global break_on_first_instruction
break_on_first_instruction = True
- run = 'run ' + ' '.join(a)
+ run = 'run ' + ' '.join(map(shlex.quote, a))
gdb.execute(run, from_tty=False)
| {"golden_diff": "diff --git a/pwndbg/commands/start.py b/pwndbg/commands/start.py\n--- a/pwndbg/commands/start.py\n+++ b/pwndbg/commands/start.py\n@@ -9,6 +9,8 @@\n from __future__ import print_function\n from __future__ import unicode_literals\n \n+import shlex\n+\n import gdb\n \n import pwndbg.commands\n@@ -66,5 +68,5 @@\n \"\"\"\n global break_on_first_instruction\n break_on_first_instruction = True\n- run = 'run ' + ' '.join(a)\n+ run = 'run ' + ' '.join(map(shlex.quote, a))\n gdb.execute(run, from_tty=False)\n", "issue": "Broken `entry` command\n### Description\r\n\r\nThe `entry` command pass arguments differently then the `run` command.\r\n\r\n### Steps to reproduce\r\n\r\n```\r\n[dc@dc:pwndbg|dev *$%]$ gdb python\r\nLoaded 113 commands. Type pwndbg [filter] for a list.\r\nReading symbols from python...(no debugging symbols found)...done.\r\npwndbg> set exception-verbose on\r\nSet whether to print a full stacktracefor exceptions raised in Pwndbg commands to True\r\npwndbg> run -c \"print(1); print(2)\"\r\nStarting program: /usr/bin/python -c \"print(1); print(2)\"\r\n[Thread debugging using libthread_db enabled]\r\nUsing host libthread_db library \"/usr/lib/libthread_db.so.1\".\r\n1\r\n2\r\n[Inferior 1 (process 20590) exited normally]\r\npwndbg> entry -c \"print(1); print(2)\"\r\n('-c', 'print(1); print(2)')\r\nRunning '%s' run -c print(1); print(2)\r\n/bin/bash: -c: line 0: syntax error near unexpected token `('\r\n/bin/bash: -c: line 0: `exec /usr/bin/python -c print(1); print(2)'\r\nTraceback (most recent call last):\r\n File \"/home/dc/installed/pwndbg/pwndbg/commands/__init__.py\", line 100, in __call__\r\n return self.function(*args, **kwargs)\r\n File \"/home/dc/installed/pwndbg/pwndbg/commands/__init__.py\", line 181, in _OnlyWithFile\r\n return function(*a, **kw)\r\n File \"/home/dc/installed/pwndbg/pwndbg/commands/start.py\", line 72, in entry\r\n gdb.execute(run, from_tty=False)\r\ngdb.error: During startup program exited with code 1.\r\n\r\nIf that is an issue, you can report it on https://github.com/pwndbg/pwndbg/issues\r\n(Please don't forget to search if it hasn't been reported before)\r\nPS: Pull requests are welcome\r\n```\r\n\r\n### My version\r\n\r\n```\r\npwndbg> version\r\nGdb: GNU gdb (GDB) 8.0.1\r\nPython: 3.6.3 (default, Oct 24 2017, 14:48:20) [GCC 7.2.0]\r\nPwndbg: 1.0.0 build: 5811010\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nLaunches the target process after setting a breakpoint at a convenient\nentry point.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport gdb\n\nimport pwndbg.commands\nimport pwndbg.elf\nimport pwndbg.events\nimport pwndbg.symbol\n\nbreak_on_first_instruction = False\n\n\[email protected]\ndef on_start():\n global break_on_first_instruction\n if break_on_first_instruction:\n spec = \"*%#x\" % (int(pwndbg.elf.entry()))\n gdb.Breakpoint(spec, temporary=True)\n break_on_first_instruction = False\n\n\[email protected]\ndef start(*a):\n \"\"\"\n Set a breakpoint at a convenient location in the binary,\n generally 'main', 'init', or the entry point.\n \"\"\"\n run = 'run ' + ' '.join(a)\n\n symbols = [\"main\",\n \"_main\",\n \"start\",\n \"_start\",\n \"init\",\n \"_init\"]\n\n for symbol in symbols:\n address = pwndbg.symbol.address(symbol)\n\n if not address:\n continue\n\n b = gdb.Breakpoint(symbol, temporary=True)\n gdb.execute(run, from_tty=False, 
to_string=True)\n return\n\n # Try a breakpoint at the binary entry\n entry(*a)\n\n\[email protected]\[email protected]\ndef entry(*a):\n \"\"\"\n Set a breakpoint at the first instruction executed in\n the target binary.\n \"\"\"\n global break_on_first_instruction\n break_on_first_instruction = True\n run = 'run ' + ' '.join(a)\n gdb.execute(run, from_tty=False)\n", "path": "pwndbg/commands/start.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nLaunches the target process after setting a breakpoint at a convenient\nentry point.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport shlex\n\nimport gdb\n\nimport pwndbg.commands\nimport pwndbg.elf\nimport pwndbg.events\nimport pwndbg.symbol\n\nbreak_on_first_instruction = False\n\n\[email protected]\ndef on_start():\n global break_on_first_instruction\n if break_on_first_instruction:\n spec = \"*%#x\" % (int(pwndbg.elf.entry()))\n gdb.Breakpoint(spec, temporary=True)\n break_on_first_instruction = False\n\n\[email protected]\ndef start(*a):\n \"\"\"\n Set a breakpoint at a convenient location in the binary,\n generally 'main', 'init', or the entry point.\n \"\"\"\n run = 'run ' + ' '.join(a)\n\n symbols = [\"main\",\n \"_main\",\n \"start\",\n \"_start\",\n \"init\",\n \"_init\"]\n\n for symbol in symbols:\n address = pwndbg.symbol.address(symbol)\n\n if not address:\n continue\n\n b = gdb.Breakpoint(symbol, temporary=True)\n gdb.execute(run, from_tty=False, to_string=True)\n return\n\n # Try a breakpoint at the binary entry\n entry(*a)\n\n\[email protected]\[email protected]\ndef entry(*a):\n \"\"\"\n Set a breakpoint at the first instruction executed in\n the target binary.\n \"\"\"\n global break_on_first_instruction\n break_on_first_instruction = True\n run = 'run ' + ' '.join(map(shlex.quote, a))\n gdb.execute(run, from_tty=False)\n", "path": "pwndbg/commands/start.py"}]} | 1,351 | 151 |
gh_patches_debug_7911 | rasdani/github-patches | git_diff | edgedb__edgedb-1946 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ISE when LIMIT/OFFSET correlated with query
```
sully> SELECT Object LIMIT len(<str>.id);
ERROR: InternalServerError: argument of LIMIT must not contain variables
```
```
sully> SELECT Object OFFSET len(<str>.id);
ERROR: InternalServerError: argument of OFFSET must not contain variables
```
Rejecting these is correct but we want a real error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `edb/edgeql/compiler/clauses.py`
Content:
```
1 #
2 # This source file is part of the EdgeDB open source project.
3 #
4 # Copyright 2008-present MagicStack Inc. and the EdgeDB authors.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17 #
18
19
20 """EdgeQL compiler functions to process shared clauses."""
21
22
23 from __future__ import annotations
24
25 from typing import *
26
27 from edb.edgeql import ast as qlast
28 from edb.ir import ast as irast
29
30 from edb import errors
31
32 from . import context
33 from . import dispatch
34 from . import inference
35 from . import polyres
36 from . import schemactx
37 from . import setgen
38
39
40 def compile_where_clause(
41 ir_stmt: irast.FilteredStmt,
42 where: Optional[qlast.Base], *,
43 ctx: context.ContextLevel) -> None:
44
45 if where is None:
46 return
47
48 with ctx.newscope(fenced=True) as subctx:
49 subctx.path_scope.unnest_fence = True
50 ir_expr = dispatch.compile(where, ctx=subctx)
51 bool_t = ctx.env.get_track_schema_type('std::bool')
52 ir_set = setgen.scoped_set(ir_expr, typehint=bool_t, ctx=subctx)
53
54 ir_stmt.where = ir_set
55
56
57 def compile_orderby_clause(
58 sortexprs: Optional[Iterable[qlast.SortExpr]], *,
59 ctx: context.ContextLevel) -> List[irast.SortExpr]:
60
61 result: List[irast.SortExpr] = []
62 if not sortexprs:
63 return result
64
65 with ctx.new() as subctx:
66 for sortexpr in sortexprs:
67 with subctx.newscope(fenced=True) as exprctx:
68 exprctx.path_scope.unnest_fence = True
69 ir_sortexpr = dispatch.compile(sortexpr.path, ctx=exprctx)
70 ir_sortexpr = setgen.scoped_set(
71 ir_sortexpr, force_reassign=True, ctx=exprctx)
72 ir_sortexpr.context = sortexpr.context
73
74 # Check that the sortexpr type is actually orderable
75 # with either '>' or '<' based on the DESC or ASC sort
76 # order.
77 env = exprctx.env
78 sort_type = inference.infer_type(ir_sortexpr, env)
79 # Postgres by default treats ASC as using '<' and DESC
80 # as using '>'. We should do the same.
81 if sortexpr.direction == qlast.SortDesc:
82 op_name = '>'
83 else:
84 op_name = '<'
85 opers = env.schema.get_operators(
86 op_name, module_aliases=exprctx.modaliases)
87
88 # Verify that a comparison operator is defined for 2
89 # sort_type expressions.
90 matched = polyres.find_callable(
91 opers,
92 args=[(sort_type, ir_sortexpr), (sort_type, ir_sortexpr)],
93 kwargs={},
94 ctx=exprctx)
95 if len(matched) != 1:
96 sort_type_name = schemactx.get_material_type(
97 sort_type, ctx=ctx).get_displayname(env.schema)
98 if len(matched) == 0:
99 raise errors.QueryError(
100 f'type {sort_type_name!r} cannot be used in '
101 f'ORDER BY clause because ordering is not '
102 f'defined for it',
103 context=sortexpr.context)
104
105 elif len(matched) > 1:
106 raise errors.QueryError(
107 f'type {sort_type_name!r} cannot be used in '
108 f'ORDER BY clause because ordering is '
109 f'ambiguous for it',
110 context=sortexpr.context)
111
112 result.append(
113 irast.SortExpr(
114 expr=ir_sortexpr,
115 direction=sortexpr.direction,
116 nones_order=sortexpr.nones_order))
117
118 return result
119
120
121 def compile_limit_offset_clause(
122 expr: Optional[qlast.Base], *,
123 ctx: context.ContextLevel) -> Optional[irast.Set]:
124 if expr is None:
125 ir_set = None
126 else:
127 with ctx.newscope(fenced=True) as subctx:
128 ir_expr = dispatch.compile(expr, ctx=subctx)
129 int_t = ctx.env.get_track_schema_type('std::int64')
130 ir_set = setgen.scoped_set(
131 ir_expr, force_reassign=True, typehint=int_t, ctx=subctx)
132 ir_set.context = expr.context
133
134 return ir_set
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/edb/edgeql/compiler/clauses.py b/edb/edgeql/compiler/clauses.py
--- a/edb/edgeql/compiler/clauses.py
+++ b/edb/edgeql/compiler/clauses.py
@@ -125,6 +125,10 @@
ir_set = None
else:
with ctx.newscope(fenced=True) as subctx:
+ # Clear out the partial_path_prefix, since we aren't in
+ # the scope of the select subject
+ subctx.partial_path_prefix = None
+
ir_expr = dispatch.compile(expr, ctx=subctx)
int_t = ctx.env.get_track_schema_type('std::int64')
ir_set = setgen.scoped_set(
| {"golden_diff": "diff --git a/edb/edgeql/compiler/clauses.py b/edb/edgeql/compiler/clauses.py\n--- a/edb/edgeql/compiler/clauses.py\n+++ b/edb/edgeql/compiler/clauses.py\n@@ -125,6 +125,10 @@\n ir_set = None\n else:\n with ctx.newscope(fenced=True) as subctx:\n+ # Clear out the partial_path_prefix, since we aren't in\n+ # the scope of the select subject\n+ subctx.partial_path_prefix = None\n+\n ir_expr = dispatch.compile(expr, ctx=subctx)\n int_t = ctx.env.get_track_schema_type('std::int64')\n ir_set = setgen.scoped_set(\n", "issue": "ISE when LIMIT/OFFSET correlated with query\n```\r\nsully> SELECT Object LIMIT len(<str>.id);\r\nERROR: InternalServerError: argument of LIMIT must not contain variables\r\n```\r\n\r\n```\r\nsully> SELECT Object OFFSET len(<str>.id);\r\nERROR: InternalServerError: argument of OFFSET must not contain variables\r\n```\r\n\r\nRejecting these is correct but we want a real error.\r\n\n", "before_files": [{"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2008-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\n\"\"\"EdgeQL compiler functions to process shared clauses.\"\"\"\n\n\nfrom __future__ import annotations\n\nfrom typing import *\n\nfrom edb.edgeql import ast as qlast\nfrom edb.ir import ast as irast\n\nfrom edb import errors\n\nfrom . import context\nfrom . import dispatch\nfrom . import inference\nfrom . import polyres\nfrom . import schemactx\nfrom . import setgen\n\n\ndef compile_where_clause(\n ir_stmt: irast.FilteredStmt,\n where: Optional[qlast.Base], *,\n ctx: context.ContextLevel) -> None:\n\n if where is None:\n return\n\n with ctx.newscope(fenced=True) as subctx:\n subctx.path_scope.unnest_fence = True\n ir_expr = dispatch.compile(where, ctx=subctx)\n bool_t = ctx.env.get_track_schema_type('std::bool')\n ir_set = setgen.scoped_set(ir_expr, typehint=bool_t, ctx=subctx)\n\n ir_stmt.where = ir_set\n\n\ndef compile_orderby_clause(\n sortexprs: Optional[Iterable[qlast.SortExpr]], *,\n ctx: context.ContextLevel) -> List[irast.SortExpr]:\n\n result: List[irast.SortExpr] = []\n if not sortexprs:\n return result\n\n with ctx.new() as subctx:\n for sortexpr in sortexprs:\n with subctx.newscope(fenced=True) as exprctx:\n exprctx.path_scope.unnest_fence = True\n ir_sortexpr = dispatch.compile(sortexpr.path, ctx=exprctx)\n ir_sortexpr = setgen.scoped_set(\n ir_sortexpr, force_reassign=True, ctx=exprctx)\n ir_sortexpr.context = sortexpr.context\n\n # Check that the sortexpr type is actually orderable\n # with either '>' or '<' based on the DESC or ASC sort\n # order.\n env = exprctx.env\n sort_type = inference.infer_type(ir_sortexpr, env)\n # Postgres by default treats ASC as using '<' and DESC\n # as using '>'. 
We should do the same.\n if sortexpr.direction == qlast.SortDesc:\n op_name = '>'\n else:\n op_name = '<'\n opers = env.schema.get_operators(\n op_name, module_aliases=exprctx.modaliases)\n\n # Verify that a comparison operator is defined for 2\n # sort_type expressions.\n matched = polyres.find_callable(\n opers,\n args=[(sort_type, ir_sortexpr), (sort_type, ir_sortexpr)],\n kwargs={},\n ctx=exprctx)\n if len(matched) != 1:\n sort_type_name = schemactx.get_material_type(\n sort_type, ctx=ctx).get_displayname(env.schema)\n if len(matched) == 0:\n raise errors.QueryError(\n f'type {sort_type_name!r} cannot be used in '\n f'ORDER BY clause because ordering is not '\n f'defined for it',\n context=sortexpr.context)\n\n elif len(matched) > 1:\n raise errors.QueryError(\n f'type {sort_type_name!r} cannot be used in '\n f'ORDER BY clause because ordering is '\n f'ambiguous for it',\n context=sortexpr.context)\n\n result.append(\n irast.SortExpr(\n expr=ir_sortexpr,\n direction=sortexpr.direction,\n nones_order=sortexpr.nones_order))\n\n return result\n\n\ndef compile_limit_offset_clause(\n expr: Optional[qlast.Base], *,\n ctx: context.ContextLevel) -> Optional[irast.Set]:\n if expr is None:\n ir_set = None\n else:\n with ctx.newscope(fenced=True) as subctx:\n ir_expr = dispatch.compile(expr, ctx=subctx)\n int_t = ctx.env.get_track_schema_type('std::int64')\n ir_set = setgen.scoped_set(\n ir_expr, force_reassign=True, typehint=int_t, ctx=subctx)\n ir_set.context = expr.context\n\n return ir_set\n", "path": "edb/edgeql/compiler/clauses.py"}], "after_files": [{"content": "#\n# This source file is part of the EdgeDB open source project.\n#\n# Copyright 2008-present MagicStack Inc. and the EdgeDB authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\n\"\"\"EdgeQL compiler functions to process shared clauses.\"\"\"\n\n\nfrom __future__ import annotations\n\nfrom typing import *\n\nfrom edb.edgeql import ast as qlast\nfrom edb.ir import ast as irast\n\nfrom edb import errors\n\nfrom . import context\nfrom . import dispatch\nfrom . import inference\nfrom . import polyres\nfrom . import schemactx\nfrom . 
import setgen\n\n\ndef compile_where_clause(\n ir_stmt: irast.FilteredStmt,\n where: Optional[qlast.Base], *,\n ctx: context.ContextLevel) -> None:\n\n if where is None:\n return\n\n with ctx.newscope(fenced=True) as subctx:\n subctx.path_scope.unnest_fence = True\n ir_expr = dispatch.compile(where, ctx=subctx)\n bool_t = ctx.env.get_track_schema_type('std::bool')\n ir_set = setgen.scoped_set(ir_expr, typehint=bool_t, ctx=subctx)\n\n ir_stmt.where = ir_set\n\n\ndef compile_orderby_clause(\n sortexprs: Optional[Iterable[qlast.SortExpr]], *,\n ctx: context.ContextLevel) -> List[irast.SortExpr]:\n\n result: List[irast.SortExpr] = []\n if not sortexprs:\n return result\n\n with ctx.new() as subctx:\n for sortexpr in sortexprs:\n with subctx.newscope(fenced=True) as exprctx:\n exprctx.path_scope.unnest_fence = True\n ir_sortexpr = dispatch.compile(sortexpr.path, ctx=exprctx)\n ir_sortexpr = setgen.scoped_set(\n ir_sortexpr, force_reassign=True, ctx=exprctx)\n ir_sortexpr.context = sortexpr.context\n\n # Check that the sortexpr type is actually orderable\n # with either '>' or '<' based on the DESC or ASC sort\n # order.\n env = exprctx.env\n sort_type = inference.infer_type(ir_sortexpr, env)\n # Postgres by default treats ASC as using '<' and DESC\n # as using '>'. We should do the same.\n if sortexpr.direction == qlast.SortDesc:\n op_name = '>'\n else:\n op_name = '<'\n opers = env.schema.get_operators(\n op_name, module_aliases=exprctx.modaliases)\n\n # Verify that a comparison operator is defined for 2\n # sort_type expressions.\n matched = polyres.find_callable(\n opers,\n args=[(sort_type, ir_sortexpr), (sort_type, ir_sortexpr)],\n kwargs={},\n ctx=exprctx)\n if len(matched) != 1:\n sort_type_name = schemactx.get_material_type(\n sort_type, ctx=ctx).get_displayname(env.schema)\n if len(matched) == 0:\n raise errors.QueryError(\n f'type {sort_type_name!r} cannot be used in '\n f'ORDER BY clause because ordering is not '\n f'defined for it',\n context=sortexpr.context)\n\n elif len(matched) > 1:\n raise errors.QueryError(\n f'type {sort_type_name!r} cannot be used in '\n f'ORDER BY clause because ordering is '\n f'ambiguous for it',\n context=sortexpr.context)\n\n result.append(\n irast.SortExpr(\n expr=ir_sortexpr,\n direction=sortexpr.direction,\n nones_order=sortexpr.nones_order))\n\n return result\n\n\ndef compile_limit_offset_clause(\n expr: Optional[qlast.Base], *,\n ctx: context.ContextLevel) -> Optional[irast.Set]:\n if expr is None:\n ir_set = None\n else:\n with ctx.newscope(fenced=True) as subctx:\n # Clear out the partial_path_prefix, since we aren't in\n # the scope of the select subject\n subctx.partial_path_prefix = None\n\n ir_expr = dispatch.compile(expr, ctx=subctx)\n int_t = ctx.env.get_track_schema_type('std::int64')\n ir_set = setgen.scoped_set(\n ir_expr, force_reassign=True, typehint=int_t, ctx=subctx)\n ir_set.context = expr.context\n\n return ir_set\n", "path": "edb/edgeql/compiler/clauses.py"}]} | 1,686 | 161 |
gh_patches_debug_34984 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-3177 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Psycopg patching doesn't properly handle execute_values
The `execute_values` extension in psycopg2 composes and executes the query with b-string, even if you passed the query as a string. Below is the full function from psycopg2.extras
```python
def execute_values(cur, sql, argslist, template=None, page_size=100, fetch=False):
from psycopg2.sql import Composable
if isinstance(sql, Composable):
sql = sql.as_string(cur)
# we can't just use sql % vals because vals is bytes: if sql is bytes
# there will be some decoding error because of stupid codec used, and Py3
# doesn't implement % on bytes.
if not isinstance(sql, bytes):
sql = sql.encode(_ext.encodings[cur.connection.encoding])
pre, post = _split_sql(sql)
result = [] if fetch else None
for page in _paginate(argslist, page_size=page_size):
if template is None:
template = b'(' + b','.join([b'%s'] * len(page[0])) + b')'
parts = pre[:]
for args in page:
parts.append(cur.mogrify(template, args))
parts.append(b',')
parts[-1:] = post
cur.execute(b''.join(parts))
if fetch:
result.extend(cur.fetchall())
return result
```
The problem is that ddtrace assumes that the "resource" added to a span is a string. The result is that when `span.finish()` is called in the datadog lambda handler and it tries to serialize the span to json, it blows up with "TypeError: Object of type bytes is not JSON serializable". Upon investigation, I discovered that the ddtrace.internal.encoder.py's JSONEncoder just does a simple json.dumps() on all the spans and the `resource` attribute on the span from the using `execute_values` is bytes, not a string.
I think the solution here is simply to update the Psycopg2TracedCursor class to decode the resource from bytes if it is bytes, like this:
```python
class Psycopg2TracedCursor(dbapi.TracedCursor):
"""TracedCursor for psycopg2"""
def _trace_method(self, method, name, resource, extra_tags, *args, **kwargs):
# treat psycopg2.sql.Composable resource objects as strings
if isinstance(resource, Composable):
resource = resource.as_string(self.__wrapped__)
# THIS IS THE NEW PART BELOW (next 2 lines)
if isinstance(resource, bytes):
resource = resource.decode('utf-8')
return super(Psycopg2TracedCursor, self)._trace_method(method, name, resource, extra_tags, *args, **kwargs)
```
### Which version of dd-trace-py are you using?
Lambda layer, v50.
### Which version of pip are you using?
n/a
### How can we reproduce your problem?
Use `execute_values` while inside a tracing context. It should have a 100% failure rate.
### What is the result that you get?
A type error when span.finish() is called and the metrics are furnished to DD.
### What is the result that you expected?
It should work as normal, with the resource decoded as a string.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/internal/encoding.py`
Content:
```
1 import json
2 from typing import Any
3 from typing import Dict
4 from typing import List
5 from typing import Optional
6 from typing import TYPE_CHECKING
7
8 from ._encoding import ListStringTable
9 from ._encoding import MsgpackEncoderV03
10 from ._encoding import MsgpackEncoderV05
11 from .logger import get_logger
12
13
14 __all__ = ["MsgpackEncoderV03", "MsgpackEncoderV05", "ListStringTable", "MSGPACK_ENCODERS"]
15
16
17 if TYPE_CHECKING:
18 from ..span import Span
19
20
21 log = get_logger(__name__)
22
23
24 class _EncoderBase(object):
25 """
26 Encoder interface that provides the logic to encode traces and service.
27 """
28
29 def encode_traces(self, traces):
30 # type: (List[List[Span]]) -> str
31 """
32 Encodes a list of traces, expecting a list of items where each items
33 is a list of spans. Before dumping the string in a serialized format all
34 traces are normalized according to the encoding format. The trace
35 nesting is not changed.
36
37 :param traces: A list of traces that should be serialized
38 """
39 raise NotImplementedError()
40
41 def encode(self, obj):
42 # type: (List[List[Any]]) -> str
43 """
44 Defines the underlying format used during traces or services encoding.
45 This method must be implemented and should only be used by the internal
46 functions.
47 """
48 raise NotImplementedError()
49
50
51 class JSONEncoder(_EncoderBase):
52 content_type = "application/json"
53
54 def encode_traces(self, traces):
55 normalized_traces = [[span.to_dict() for span in trace] for trace in traces]
56 return self.encode(normalized_traces)
57
58 @staticmethod
59 def encode(obj):
60 # type: (Any) -> str
61 return json.dumps(obj)
62
63
64 class JSONEncoderV2(JSONEncoder):
65 """
66 JSONEncoderV2 encodes traces to the new intake API format.
67 """
68
69 content_type = "application/json"
70
71 def encode_traces(self, traces):
72 # type: (List[List[Span]]) -> str
73 normalized_traces = [[JSONEncoderV2._convert_span(span) for span in trace] for trace in traces]
74 return self.encode({"traces": normalized_traces})
75
76 @staticmethod
77 def _convert_span(span):
78 # type: (Span) -> Dict[str, Any]
79 sp = span.to_dict()
80 sp["trace_id"] = JSONEncoderV2._encode_id_to_hex(sp.get("trace_id"))
81 sp["parent_id"] = JSONEncoderV2._encode_id_to_hex(sp.get("parent_id"))
82 sp["span_id"] = JSONEncoderV2._encode_id_to_hex(sp.get("span_id"))
83 return sp
84
85 @staticmethod
86 def _encode_id_to_hex(dd_id):
87 # type: (Optional[int]) -> str
88 if not dd_id:
89 return "0000000000000000"
90 return "%0.16X" % int(dd_id)
91
92 @staticmethod
93 def _decode_id_to_hex(hex_id):
94 # type: (Optional[str]) -> int
95 if not hex_id:
96 return 0
97 return int(hex_id, 16)
98
99
100 MSGPACK_ENCODERS = {
101 "v0.3": MsgpackEncoderV03,
102 "v0.4": MsgpackEncoderV03,
103 "v0.5": MsgpackEncoderV05,
104 }
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/internal/encoding.py b/ddtrace/internal/encoding.py
--- a/ddtrace/internal/encoding.py
+++ b/ddtrace/internal/encoding.py
@@ -8,6 +8,9 @@
from ._encoding import ListStringTable
from ._encoding import MsgpackEncoderV03
from ._encoding import MsgpackEncoderV05
+from .compat import PY3
+from .compat import binary_type
+from .compat import ensure_text
from .logger import get_logger
@@ -48,17 +51,33 @@
raise NotImplementedError()
-class JSONEncoder(_EncoderBase):
+class JSONEncoder(json.JSONEncoder, _EncoderBase):
content_type = "application/json"
def encode_traces(self, traces):
- normalized_traces = [[span.to_dict() for span in trace] for trace in traces]
+ normalized_traces = [[JSONEncoder._normalize_span(span.to_dict()) for span in trace] for trace in traces]
return self.encode(normalized_traces)
@staticmethod
- def encode(obj):
- # type: (Any) -> str
- return json.dumps(obj)
+ def _normalize_span(span):
+ # Ensure all string attributes are actually strings and not bytes
+ # DEV: We are deferring meta/metrics to reduce any performance issues.
+ # Meta/metrics may still contain `bytes` and have encoding issues.
+ span["resource"] = JSONEncoder._normalize_str(span["resource"])
+ span["name"] = JSONEncoder._normalize_str(span["name"])
+ span["service"] = JSONEncoder._normalize_str(span["service"])
+ return span
+
+ @staticmethod
+ def _normalize_str(obj):
+ if obj is None:
+ return obj
+
+ if PY3:
+ return ensure_text(obj, errors="backslashreplace")
+ elif isinstance(obj, binary_type):
+ return obj.decode("utf-8", errors="replace")
+ return obj
class JSONEncoderV2(JSONEncoder):
@@ -77,6 +96,7 @@
def _convert_span(span):
# type: (Span) -> Dict[str, Any]
sp = span.to_dict()
+ sp = JSONEncoderV2._normalize_span(sp)
sp["trace_id"] = JSONEncoderV2._encode_id_to_hex(sp.get("trace_id"))
sp["parent_id"] = JSONEncoderV2._encode_id_to_hex(sp.get("parent_id"))
sp["span_id"] = JSONEncoderV2._encode_id_to_hex(sp.get("span_id"))
| {"golden_diff": "diff --git a/ddtrace/internal/encoding.py b/ddtrace/internal/encoding.py\n--- a/ddtrace/internal/encoding.py\n+++ b/ddtrace/internal/encoding.py\n@@ -8,6 +8,9 @@\n from ._encoding import ListStringTable\n from ._encoding import MsgpackEncoderV03\n from ._encoding import MsgpackEncoderV05\n+from .compat import PY3\n+from .compat import binary_type\n+from .compat import ensure_text\n from .logger import get_logger\n \n \n@@ -48,17 +51,33 @@\n raise NotImplementedError()\n \n \n-class JSONEncoder(_EncoderBase):\n+class JSONEncoder(json.JSONEncoder, _EncoderBase):\n content_type = \"application/json\"\n \n def encode_traces(self, traces):\n- normalized_traces = [[span.to_dict() for span in trace] for trace in traces]\n+ normalized_traces = [[JSONEncoder._normalize_span(span.to_dict()) for span in trace] for trace in traces]\n return self.encode(normalized_traces)\n \n @staticmethod\n- def encode(obj):\n- # type: (Any) -> str\n- return json.dumps(obj)\n+ def _normalize_span(span):\n+ # Ensure all string attributes are actually strings and not bytes\n+ # DEV: We are deferring meta/metrics to reduce any performance issues.\n+ # Meta/metrics may still contain `bytes` and have encoding issues.\n+ span[\"resource\"] = JSONEncoder._normalize_str(span[\"resource\"])\n+ span[\"name\"] = JSONEncoder._normalize_str(span[\"name\"])\n+ span[\"service\"] = JSONEncoder._normalize_str(span[\"service\"])\n+ return span\n+\n+ @staticmethod\n+ def _normalize_str(obj):\n+ if obj is None:\n+ return obj\n+\n+ if PY3:\n+ return ensure_text(obj, errors=\"backslashreplace\")\n+ elif isinstance(obj, binary_type):\n+ return obj.decode(\"utf-8\", errors=\"replace\")\n+ return obj\n \n \n class JSONEncoderV2(JSONEncoder):\n@@ -77,6 +96,7 @@\n def _convert_span(span):\n # type: (Span) -> Dict[str, Any]\n sp = span.to_dict()\n+ sp = JSONEncoderV2._normalize_span(sp)\n sp[\"trace_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"trace_id\"))\n sp[\"parent_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"parent_id\"))\n sp[\"span_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"span_id\"))\n", "issue": "Psycopg patching doesn't properly handle execute_values\nThe `execute_values` extension in psycopg2 composes and executes the query with b-string, even if you passed the query as a string. Below is the full function from psycopg2.extras\r\n\r\n```python\r\ndef execute_values(cur, sql, argslist, template=None, page_size=100, fetch=False):\r\n from psycopg2.sql import Composable\r\n if isinstance(sql, Composable):\r\n sql = sql.as_string(cur)\r\n\r\n # we can't just use sql % vals because vals is bytes: if sql is bytes\r\n # there will be some decoding error because of stupid codec used, and Py3\r\n # doesn't implement % on bytes.\r\n if not isinstance(sql, bytes):\r\n sql = sql.encode(_ext.encodings[cur.connection.encoding])\r\n pre, post = _split_sql(sql)\r\n\r\n result = [] if fetch else None\r\n for page in _paginate(argslist, page_size=page_size):\r\n if template is None:\r\n template = b'(' + b','.join([b'%s'] * len(page[0])) + b')'\r\n parts = pre[:]\r\n for args in page:\r\n parts.append(cur.mogrify(template, args))\r\n parts.append(b',')\r\n parts[-1:] = post\r\n cur.execute(b''.join(parts))\r\n if fetch:\r\n result.extend(cur.fetchall())\r\n\r\n return result\r\n```\r\n\r\nThe problem is that ddtrace assumes that the \"resource\" added to a span is a string. 
The result is that when `span.finish()` is called in the datadog lambda handler and it tries to serialize the span to json, it blows up with \"TypeError: Object of type bytes is not JSON serializable\". Upon investigation, I discovered that the ddtrace.internal.encoder.py's JSONEncoder just does a simple json.dumps() on all the spans and the `resource` attribute on the span from the using `execute_values` is bytes, not a string.\r\n\r\nI think the solution here is simply to update the Psycopg2TracedCursor class to decode the resource from bytes if it is bytes, like this:\r\n\r\n```python\r\nclass Psycopg2TracedCursor(dbapi.TracedCursor):\r\n \"\"\"TracedCursor for psycopg2\"\"\"\r\n\r\n def _trace_method(self, method, name, resource, extra_tags, *args, **kwargs):\r\n # treat psycopg2.sql.Composable resource objects as strings\r\n if isinstance(resource, Composable):\r\n resource = resource.as_string(self.__wrapped__)\r\n # THIS IS THE NEW PART BELOW (next 2 lines)\r\n if isinstance(resource, bytes):\r\n resource = resource.decode('utf-8')\r\n return super(Psycopg2TracedCursor, self)._trace_method(method, name, resource, extra_tags, *args, **kwargs)\r\n```\r\n\r\n### Which version of dd-trace-py are you using?\r\nLambda layer, v50.\r\n### Which version of pip are you using?\r\nn/a\r\n\r\n### How can we reproduce your problem?\r\nUse `execute_values` while inside a tracing context. It should have a 100% failure rate.\r\n\r\n### What is the result that you get?\r\nA type error when span.finish() is called and the metrics are furnished to DD.\r\n\r\n### What is the result that you expected?\r\nIt should work as normal, with the resource decoded as a string.\r\n\n", "before_files": [{"content": "import json\nfrom typing import Any\nfrom typing import Dict\nfrom typing import List\nfrom typing import Optional\nfrom typing import TYPE_CHECKING\n\nfrom ._encoding import ListStringTable\nfrom ._encoding import MsgpackEncoderV03\nfrom ._encoding import MsgpackEncoderV05\nfrom .logger import get_logger\n\n\n__all__ = [\"MsgpackEncoderV03\", \"MsgpackEncoderV05\", \"ListStringTable\", \"MSGPACK_ENCODERS\"]\n\n\nif TYPE_CHECKING:\n from ..span import Span\n\n\nlog = get_logger(__name__)\n\n\nclass _EncoderBase(object):\n \"\"\"\n Encoder interface that provides the logic to encode traces and service.\n \"\"\"\n\n def encode_traces(self, traces):\n # type: (List[List[Span]]) -> str\n \"\"\"\n Encodes a list of traces, expecting a list of items where each items\n is a list of spans. Before dumping the string in a serialized format all\n traces are normalized according to the encoding format. 
The trace\n nesting is not changed.\n\n :param traces: A list of traces that should be serialized\n \"\"\"\n raise NotImplementedError()\n\n def encode(self, obj):\n # type: (List[List[Any]]) -> str\n \"\"\"\n Defines the underlying format used during traces or services encoding.\n This method must be implemented and should only be used by the internal\n functions.\n \"\"\"\n raise NotImplementedError()\n\n\nclass JSONEncoder(_EncoderBase):\n content_type = \"application/json\"\n\n def encode_traces(self, traces):\n normalized_traces = [[span.to_dict() for span in trace] for trace in traces]\n return self.encode(normalized_traces)\n\n @staticmethod\n def encode(obj):\n # type: (Any) -> str\n return json.dumps(obj)\n\n\nclass JSONEncoderV2(JSONEncoder):\n \"\"\"\n JSONEncoderV2 encodes traces to the new intake API format.\n \"\"\"\n\n content_type = \"application/json\"\n\n def encode_traces(self, traces):\n # type: (List[List[Span]]) -> str\n normalized_traces = [[JSONEncoderV2._convert_span(span) for span in trace] for trace in traces]\n return self.encode({\"traces\": normalized_traces})\n\n @staticmethod\n def _convert_span(span):\n # type: (Span) -> Dict[str, Any]\n sp = span.to_dict()\n sp[\"trace_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"trace_id\"))\n sp[\"parent_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"parent_id\"))\n sp[\"span_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"span_id\"))\n return sp\n\n @staticmethod\n def _encode_id_to_hex(dd_id):\n # type: (Optional[int]) -> str\n if not dd_id:\n return \"0000000000000000\"\n return \"%0.16X\" % int(dd_id)\n\n @staticmethod\n def _decode_id_to_hex(hex_id):\n # type: (Optional[str]) -> int\n if not hex_id:\n return 0\n return int(hex_id, 16)\n\n\nMSGPACK_ENCODERS = {\n \"v0.3\": MsgpackEncoderV03,\n \"v0.4\": MsgpackEncoderV03,\n \"v0.5\": MsgpackEncoderV05,\n}\n", "path": "ddtrace/internal/encoding.py"}], "after_files": [{"content": "import json\nfrom typing import Any\nfrom typing import Dict\nfrom typing import List\nfrom typing import Optional\nfrom typing import TYPE_CHECKING\n\nfrom ._encoding import ListStringTable\nfrom ._encoding import MsgpackEncoderV03\nfrom ._encoding import MsgpackEncoderV05\nfrom .compat import PY3\nfrom .compat import binary_type\nfrom .compat import ensure_text\nfrom .logger import get_logger\n\n\n__all__ = [\"MsgpackEncoderV03\", \"MsgpackEncoderV05\", \"ListStringTable\", \"MSGPACK_ENCODERS\"]\n\n\nif TYPE_CHECKING:\n from ..span import Span\n\n\nlog = get_logger(__name__)\n\n\nclass _EncoderBase(object):\n \"\"\"\n Encoder interface that provides the logic to encode traces and service.\n \"\"\"\n\n def encode_traces(self, traces):\n # type: (List[List[Span]]) -> str\n \"\"\"\n Encodes a list of traces, expecting a list of items where each items\n is a list of spans. Before dumping the string in a serialized format all\n traces are normalized according to the encoding format. 
The trace\n nesting is not changed.\n\n :param traces: A list of traces that should be serialized\n \"\"\"\n raise NotImplementedError()\n\n def encode(self, obj):\n # type: (List[List[Any]]) -> str\n \"\"\"\n Defines the underlying format used during traces or services encoding.\n This method must be implemented and should only be used by the internal\n functions.\n \"\"\"\n raise NotImplementedError()\n\n\nclass JSONEncoder(json.JSONEncoder, _EncoderBase):\n content_type = \"application/json\"\n\n def encode_traces(self, traces):\n normalized_traces = [[JSONEncoder._normalize_span(span.to_dict()) for span in trace] for trace in traces]\n return self.encode(normalized_traces)\n\n @staticmethod\n def _normalize_span(span):\n # Ensure all string attributes are actually strings and not bytes\n # DEV: We are deferring meta/metrics to reduce any performance issues.\n # Meta/metrics may still contain `bytes` and have encoding issues.\n span[\"resource\"] = JSONEncoder._normalize_str(span[\"resource\"])\n span[\"name\"] = JSONEncoder._normalize_str(span[\"name\"])\n span[\"service\"] = JSONEncoder._normalize_str(span[\"service\"])\n return span\n\n @staticmethod\n def _normalize_str(obj):\n if obj is None:\n return obj\n\n if PY3:\n return ensure_text(obj, errors=\"backslashreplace\")\n elif isinstance(obj, binary_type):\n return obj.decode(\"utf-8\", errors=\"replace\")\n return obj\n\n\nclass JSONEncoderV2(JSONEncoder):\n \"\"\"\n JSONEncoderV2 encodes traces to the new intake API format.\n \"\"\"\n\n content_type = \"application/json\"\n\n def encode_traces(self, traces):\n # type: (List[List[Span]]) -> str\n normalized_traces = [[JSONEncoderV2._convert_span(span) for span in trace] for trace in traces]\n return self.encode({\"traces\": normalized_traces})\n\n @staticmethod\n def _convert_span(span):\n # type: (Span) -> Dict[str, Any]\n sp = span.to_dict()\n sp = JSONEncoderV2._normalize_span(sp)\n sp[\"trace_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"trace_id\"))\n sp[\"parent_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"parent_id\"))\n sp[\"span_id\"] = JSONEncoderV2._encode_id_to_hex(sp.get(\"span_id\"))\n return sp\n\n @staticmethod\n def _encode_id_to_hex(dd_id):\n # type: (Optional[int]) -> str\n if not dd_id:\n return \"0000000000000000\"\n return \"%0.16X\" % int(dd_id)\n\n @staticmethod\n def _decode_id_to_hex(hex_id):\n # type: (Optional[str]) -> int\n if not hex_id:\n return 0\n return int(hex_id, 16)\n\n\nMSGPACK_ENCODERS = {\n \"v0.3\": MsgpackEncoderV03,\n \"v0.4\": MsgpackEncoderV03,\n \"v0.5\": MsgpackEncoderV05,\n}\n", "path": "ddtrace/internal/encoding.py"}]} | 1,917 | 561 |
gh_patches_debug_30436 | rasdani/github-patches | git_diff | uccser__cs-unplugged-463 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add sorting networks lessons
- [ ] Lesson 2 (8-10) - needs generated resources
- [ ] Lesson 1 (11-14)
- [ ] Lesson 2 (11-14)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/resources/views/sorting_network_cards.py`
Content:
```
1 """Module for generating Sorting Network Cards resource."""
2
3 from random import sample
4 from PIL import Image, ImageDraw, ImageFont
5 from utils.retrieve_query_parameter import retrieve_query_parameter
6
7
8 def resource_image(request, resource):
9 """Create a image for Sorting Network Cards resource.
10
11 Args:
12 request: HTTP request object.
13 resource: Object of resource data.
14
15 Returns:
16 A list of Pillow image objects.
17 """
18 IMAGE_SIZE_X = 2000
19 IMAGE_SIZE_Y = 3000
20 LINE_COLOUR = "#000000"
21 LINE_WIDTH = 3
22 font_path = "static/fonts/PatrickHand-Regular.ttf"
23
24 # Retrieve parameters
25 parameter_options = valid_options()
26 card_type = retrieve_query_parameter(request, "type", parameter_options["type"])
27
28 # Create card outlines
29 card_outlines = Image.new("RGB", (IMAGE_SIZE_X, IMAGE_SIZE_Y), "#fff")
30 draw = ImageDraw.Draw(card_outlines)
31 for x_coord in range(0, IMAGE_SIZE_X, IMAGE_SIZE_X - LINE_WIDTH):
32 draw.line([(x_coord, 0), (x_coord, IMAGE_SIZE_Y)], fill=LINE_COLOUR, width=LINE_WIDTH)
33 for y_coord in range(0, IMAGE_SIZE_Y, int(IMAGE_SIZE_Y / 2 - LINE_WIDTH)):
34 draw.line([(0, y_coord), (IMAGE_SIZE_X, y_coord)], fill=LINE_COLOUR, width=LINE_WIDTH)
35
36 # Prepare text data
37 if card_type == "small_numbers":
38 font_size = 800
39 text = ["1", "2", "3", "4", "5", "6"]
40 elif card_type == "large_numbers":
41 font_size = 500
42 text = []
43 numbers = sample(range(1700000, 2100000), 6)
44 for number in numbers:
45 text.append("{:,}".format(number))
46 elif card_type == "fractions":
47 font_size = 900
48 font_path = "static/fonts/NotoSans-Regular.ttf"
49 text = [u"\u00bd", u"\u2153", u"\u2154", u"\u215c", u"\u00be", u"\u215d"]
50 else:
51 font_size = 300
52 text = [
53 "tahi",
54 "rua",
55 "toru",
56 "whā",
57 "rima",
58 "ono",
59 "whitu",
60 "waru",
61 "iwa",
62 "tekau",
63 "tekau mā tahi",
64 "tekau mā waru",
65 "tekau mā toru",
66 "tekau mā whā",
67 "rua tekau",
68 "rua tekau mā ono",
69 ]
70
71 font = ImageFont.truetype(font_path, font_size)
72 card_centers = [
73 (IMAGE_SIZE_X / 2, IMAGE_SIZE_Y / 4),
74 (IMAGE_SIZE_X / 2, (IMAGE_SIZE_Y / 4) * 3),
75 ]
76
77 # Add text to cards
78 images = []
79 for (text_number, text_string) in enumerate(text):
80 if text_number % 2 == 0:
81 page = card_outlines.copy()
82 draw = ImageDraw.Draw(page)
83 (x, y) = card_centers[0]
84 else:
85 (x, y) = card_centers[1]
86
87 text_width, text_height = draw.textsize(text_string, font=font)
88 coord_x = x - (text_width / 2)
89 coord_y = y - (text_height / 1.5)
90 draw.text(
91 (coord_x, coord_y),
92 text_string,
93 font=font,
94 fill="#000"
95 )
96 # If text on second card but not last page
97 if text_number % 2 == 1 and text_number != len(text) - 1:
98 images.append(page)
99 images.append(page)
100
101 return images
102
103
104 def subtitle(request, resource):
105 """Return the subtitle string of the resource.
106
107 Used after the resource name in the filename, and
108 also on the resource image.
109
110 Args:
111 request: HTTP request object
112 resource: Object of resource data.
113
114 Returns:
115 text for subtitle (string)
116 """
117 return "{} - {}".format(
118 retrieve_query_parameter(request, "type").replace("_", " "),
119 retrieve_query_parameter(request, "paper_size")
120 )
121
122
123 def valid_options():
124 """Provide dictionary of all valid parameters.
125
126 This excludes the header text parameter.
127
128 Returns:
129 All valid options (dict).
130 """
131 return {
132 "type": ["small_numbers", "large_numbers", "fractions", "maori_numbers"],
133 "paper_size": ["a4", "letter"],
134 }
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/csunplugged/resources/views/sorting_network_cards.py b/csunplugged/resources/views/sorting_network_cards.py
--- a/csunplugged/resources/views/sorting_network_cards.py
+++ b/csunplugged/resources/views/sorting_network_cards.py
@@ -47,25 +47,24 @@
font_size = 900
font_path = "static/fonts/NotoSans-Regular.ttf"
text = [u"\u00bd", u"\u2153", u"\u2154", u"\u215c", u"\u00be", u"\u215d"]
- else:
+ elif card_type == "maori_numbers":
font_size = 300
text = [
- "tahi",
- "rua",
- "toru",
- "whā",
- "rima",
- "ono",
- "whitu",
- "waru",
- "iwa",
- "tekau",
- "tekau mā tahi",
- "tekau mā waru",
- "tekau mā toru",
- "tekau mā whā",
- "rua tekau",
- "rua tekau mā ono",
+ "tahi", "rua", "toru", "whā", "rima", "ono", "whitu", "waru",
+ "iwa", "tekau", "tekau mā tahi", "tekau mā waru", "tekau mā toru",
+ "tekau mā whā", "rua tekau", "rua tekau mā ono"
+ ]
+ elif card_type == "words":
+ font_size = 500
+ text = ["crocodile", "crochet", "kiwi", "weka", "kiwi", "kiwano"]
+ elif card_type == "letters":
+ font_size = 800
+ text = ["L", "O", "N", "K", "E", "D", "S", "P", "G", "B", "I", "Y"]
+ else:
+ font_size = 500
+ text = [
+ "whero", "kākāriki", "kiwikiwi", "karaka",
+ "kōwhai", "pango", "māwhero", "mā"
]
font = ImageFont.truetype(font_path, font_size)
@@ -129,6 +128,9 @@
All valid options (dict).
"""
return {
- "type": ["small_numbers", "large_numbers", "fractions", "maori_numbers"],
+ "type": [
+ "small_numbers", "large_numbers", "fractions", "maori_numbers",
+ "words", "letters", "maori_colours"
+ ],
"paper_size": ["a4", "letter"],
}
| {"golden_diff": "diff --git a/csunplugged/resources/views/sorting_network_cards.py b/csunplugged/resources/views/sorting_network_cards.py\n--- a/csunplugged/resources/views/sorting_network_cards.py\n+++ b/csunplugged/resources/views/sorting_network_cards.py\n@@ -47,25 +47,24 @@\n font_size = 900\n font_path = \"static/fonts/NotoSans-Regular.ttf\"\n text = [u\"\\u00bd\", u\"\\u2153\", u\"\\u2154\", u\"\\u215c\", u\"\\u00be\", u\"\\u215d\"]\n- else:\n+ elif card_type == \"maori_numbers\":\n font_size = 300\n text = [\n- \"tahi\",\n- \"rua\",\n- \"toru\",\n- \"wh\u0101\",\n- \"rima\",\n- \"ono\",\n- \"whitu\",\n- \"waru\",\n- \"iwa\",\n- \"tekau\",\n- \"tekau m\u0101 tahi\",\n- \"tekau m\u0101 waru\",\n- \"tekau m\u0101 toru\",\n- \"tekau m\u0101 wh\u0101\",\n- \"rua tekau\",\n- \"rua tekau m\u0101 ono\",\n+ \"tahi\", \"rua\", \"toru\", \"wh\u0101\", \"rima\", \"ono\", \"whitu\", \"waru\",\n+ \"iwa\", \"tekau\", \"tekau m\u0101 tahi\", \"tekau m\u0101 waru\", \"tekau m\u0101 toru\",\n+ \"tekau m\u0101 wh\u0101\", \"rua tekau\", \"rua tekau m\u0101 ono\"\n+ ]\n+ elif card_type == \"words\":\n+ font_size = 500\n+ text = [\"crocodile\", \"crochet\", \"kiwi\", \"weka\", \"kiwi\", \"kiwano\"]\n+ elif card_type == \"letters\":\n+ font_size = 800\n+ text = [\"L\", \"O\", \"N\", \"K\", \"E\", \"D\", \"S\", \"P\", \"G\", \"B\", \"I\", \"Y\"]\n+ else:\n+ font_size = 500\n+ text = [\n+ \"whero\", \"k\u0101k\u0101riki\", \"kiwikiwi\", \"karaka\",\n+ \"k\u014dwhai\", \"pango\", \"m\u0101whero\", \"m\u0101\"\n ]\n \n font = ImageFont.truetype(font_path, font_size)\n@@ -129,6 +128,9 @@\n All valid options (dict).\n \"\"\"\n return {\n- \"type\": [\"small_numbers\", \"large_numbers\", \"fractions\", \"maori_numbers\"],\n+ \"type\": [\n+ \"small_numbers\", \"large_numbers\", \"fractions\", \"maori_numbers\",\n+ \"words\", \"letters\", \"maori_colours\"\n+ ],\n \"paper_size\": [\"a4\", \"letter\"],\n }\n", "issue": "Add sorting networks lessons\n- [ ] Lesson 2 (8-10) - needs generated resources\r\n- [ ] Lesson 1 (11-14)\r\n- [ ] Lesson 2 (11-14)\n", "before_files": [{"content": "\"\"\"Module for generating Sorting Network Cards resource.\"\"\"\n\nfrom random import sample\nfrom PIL import Image, ImageDraw, ImageFont\nfrom utils.retrieve_query_parameter import retrieve_query_parameter\n\n\ndef resource_image(request, resource):\n \"\"\"Create a image for Sorting Network Cards resource.\n\n Args:\n request: HTTP request object.\n resource: Object of resource data.\n\n Returns:\n A list of Pillow image objects.\n \"\"\"\n IMAGE_SIZE_X = 2000\n IMAGE_SIZE_Y = 3000\n LINE_COLOUR = \"#000000\"\n LINE_WIDTH = 3\n font_path = \"static/fonts/PatrickHand-Regular.ttf\"\n\n # Retrieve parameters\n parameter_options = valid_options()\n card_type = retrieve_query_parameter(request, \"type\", parameter_options[\"type\"])\n\n # Create card outlines\n card_outlines = Image.new(\"RGB\", (IMAGE_SIZE_X, IMAGE_SIZE_Y), \"#fff\")\n draw = ImageDraw.Draw(card_outlines)\n for x_coord in range(0, IMAGE_SIZE_X, IMAGE_SIZE_X - LINE_WIDTH):\n draw.line([(x_coord, 0), (x_coord, IMAGE_SIZE_Y)], fill=LINE_COLOUR, width=LINE_WIDTH)\n for y_coord in range(0, IMAGE_SIZE_Y, int(IMAGE_SIZE_Y / 2 - LINE_WIDTH)):\n draw.line([(0, y_coord), (IMAGE_SIZE_X, y_coord)], fill=LINE_COLOUR, width=LINE_WIDTH)\n\n # Prepare text data\n if card_type == \"small_numbers\":\n font_size = 800\n text = [\"1\", \"2\", \"3\", \"4\", \"5\", \"6\"]\n elif card_type == \"large_numbers\":\n font_size = 500\n text = []\n numbers = sample(range(1700000, 2100000), 6)\n 
for number in numbers:\n text.append(\"{:,}\".format(number))\n elif card_type == \"fractions\":\n font_size = 900\n font_path = \"static/fonts/NotoSans-Regular.ttf\"\n text = [u\"\\u00bd\", u\"\\u2153\", u\"\\u2154\", u\"\\u215c\", u\"\\u00be\", u\"\\u215d\"]\n else:\n font_size = 300\n text = [\n \"tahi\",\n \"rua\",\n \"toru\",\n \"wh\u0101\",\n \"rima\",\n \"ono\",\n \"whitu\",\n \"waru\",\n \"iwa\",\n \"tekau\",\n \"tekau m\u0101 tahi\",\n \"tekau m\u0101 waru\",\n \"tekau m\u0101 toru\",\n \"tekau m\u0101 wh\u0101\",\n \"rua tekau\",\n \"rua tekau m\u0101 ono\",\n ]\n\n font = ImageFont.truetype(font_path, font_size)\n card_centers = [\n (IMAGE_SIZE_X / 2, IMAGE_SIZE_Y / 4),\n (IMAGE_SIZE_X / 2, (IMAGE_SIZE_Y / 4) * 3),\n ]\n\n # Add text to cards\n images = []\n for (text_number, text_string) in enumerate(text):\n if text_number % 2 == 0:\n page = card_outlines.copy()\n draw = ImageDraw.Draw(page)\n (x, y) = card_centers[0]\n else:\n (x, y) = card_centers[1]\n\n text_width, text_height = draw.textsize(text_string, font=font)\n coord_x = x - (text_width / 2)\n coord_y = y - (text_height / 1.5)\n draw.text(\n (coord_x, coord_y),\n text_string,\n font=font,\n fill=\"#000\"\n )\n # If text on second card but not last page\n if text_number % 2 == 1 and text_number != len(text) - 1:\n images.append(page)\n images.append(page)\n\n return images\n\n\ndef subtitle(request, resource):\n \"\"\"Return the subtitle string of the resource.\n\n Used after the resource name in the filename, and\n also on the resource image.\n\n Args:\n request: HTTP request object\n resource: Object of resource data.\n\n Returns:\n text for subtitle (string)\n \"\"\"\n return \"{} - {}\".format(\n retrieve_query_parameter(request, \"type\").replace(\"_\", \" \"),\n retrieve_query_parameter(request, \"paper_size\")\n )\n\n\ndef valid_options():\n \"\"\"Provide dictionary of all valid parameters.\n\n This excludes the header text parameter.\n\n Returns:\n All valid options (dict).\n \"\"\"\n return {\n \"type\": [\"small_numbers\", \"large_numbers\", \"fractions\", \"maori_numbers\"],\n \"paper_size\": [\"a4\", \"letter\"],\n }\n", "path": "csunplugged/resources/views/sorting_network_cards.py"}], "after_files": [{"content": "\"\"\"Module for generating Sorting Network Cards resource.\"\"\"\n\nfrom random import sample\nfrom PIL import Image, ImageDraw, ImageFont\nfrom utils.retrieve_query_parameter import retrieve_query_parameter\n\n\ndef resource_image(request, resource):\n \"\"\"Create a image for Sorting Network Cards resource.\n\n Args:\n request: HTTP request object.\n resource: Object of resource data.\n\n Returns:\n A list of Pillow image objects.\n \"\"\"\n IMAGE_SIZE_X = 2000\n IMAGE_SIZE_Y = 3000\n LINE_COLOUR = \"#000000\"\n LINE_WIDTH = 3\n font_path = \"static/fonts/PatrickHand-Regular.ttf\"\n\n # Retrieve parameters\n parameter_options = valid_options()\n card_type = retrieve_query_parameter(request, \"type\", parameter_options[\"type\"])\n\n # Create card outlines\n card_outlines = Image.new(\"RGB\", (IMAGE_SIZE_X, IMAGE_SIZE_Y), \"#fff\")\n draw = ImageDraw.Draw(card_outlines)\n for x_coord in range(0, IMAGE_SIZE_X, IMAGE_SIZE_X - LINE_WIDTH):\n draw.line([(x_coord, 0), (x_coord, IMAGE_SIZE_Y)], fill=LINE_COLOUR, width=LINE_WIDTH)\n for y_coord in range(0, IMAGE_SIZE_Y, int(IMAGE_SIZE_Y / 2 - LINE_WIDTH)):\n draw.line([(0, y_coord), (IMAGE_SIZE_X, y_coord)], fill=LINE_COLOUR, width=LINE_WIDTH)\n\n # Prepare text data\n if card_type == \"small_numbers\":\n font_size = 800\n text = [\"1\", \"2\", \"3\", 
\"4\", \"5\", \"6\"]\n elif card_type == \"large_numbers\":\n font_size = 500\n text = []\n numbers = sample(range(1700000, 2100000), 6)\n for number in numbers:\n text.append(\"{:,}\".format(number))\n elif card_type == \"fractions\":\n font_size = 900\n font_path = \"static/fonts/NotoSans-Regular.ttf\"\n text = [u\"\\u00bd\", u\"\\u2153\", u\"\\u2154\", u\"\\u215c\", u\"\\u00be\", u\"\\u215d\"]\n elif card_type == \"maori_numbers\":\n font_size = 300\n text = [\n \"tahi\", \"rua\", \"toru\", \"wh\u0101\", \"rima\", \"ono\", \"whitu\", \"waru\",\n \"iwa\", \"tekau\", \"tekau m\u0101 tahi\", \"tekau m\u0101 waru\", \"tekau m\u0101 toru\",\n \"tekau m\u0101 wh\u0101\", \"rua tekau\", \"rua tekau m\u0101 ono\"\n ]\n elif card_type == \"words\":\n font_size = 500\n text = [\"crocodile\", \"crochet\", \"kiwi\", \"weka\", \"kiwi\", \"kiwano\"]\n elif card_type == \"letters\":\n font_size = 800\n text = [\"L\", \"O\", \"N\", \"K\", \"E\", \"D\", \"S\", \"P\", \"G\", \"B\", \"I\", \"Y\"]\n else:\n font_size = 500\n text = [\n \"whero\", \"k\u0101k\u0101riki\", \"kiwikiwi\", \"karaka\",\n \"k\u014dwhai\", \"pango\", \"m\u0101whero\", \"m\u0101\"\n ]\n\n font = ImageFont.truetype(font_path, font_size)\n card_centers = [\n (IMAGE_SIZE_X / 2, IMAGE_SIZE_Y / 4),\n (IMAGE_SIZE_X / 2, (IMAGE_SIZE_Y / 4) * 3),\n ]\n\n # Add text to cards\n images = []\n for (text_number, text_string) in enumerate(text):\n if text_number % 2 == 0:\n page = card_outlines.copy()\n draw = ImageDraw.Draw(page)\n (x, y) = card_centers[0]\n else:\n (x, y) = card_centers[1]\n\n text_width, text_height = draw.textsize(text_string, font=font)\n coord_x = x - (text_width / 2)\n coord_y = y - (text_height / 1.5)\n draw.text(\n (coord_x, coord_y),\n text_string,\n font=font,\n fill=\"#000\"\n )\n # If text on second card but not last page\n if text_number % 2 == 1 and text_number != len(text) - 1:\n images.append(page)\n images.append(page)\n\n return images\n\n\ndef subtitle(request, resource):\n \"\"\"Return the subtitle string of the resource.\n\n Used after the resource name in the filename, and\n also on the resource image.\n\n Args:\n request: HTTP request object\n resource: Object of resource data.\n\n Returns:\n text for subtitle (string)\n \"\"\"\n return \"{} - {}\".format(\n retrieve_query_parameter(request, \"type\").replace(\"_\", \" \"),\n retrieve_query_parameter(request, \"paper_size\")\n )\n\n\ndef valid_options():\n \"\"\"Provide dictionary of all valid parameters.\n\n This excludes the header text parameter.\n\n Returns:\n All valid options (dict).\n \"\"\"\n return {\n \"type\": [\n \"small_numbers\", \"large_numbers\", \"fractions\", \"maori_numbers\",\n \"words\", \"letters\", \"maori_colours\"\n ],\n \"paper_size\": [\"a4\", \"letter\"],\n }\n", "path": "csunplugged/resources/views/sorting_network_cards.py"}]} | 1,676 | 676 |
gh_patches_debug_26005 | rasdani/github-patches | git_diff | internetarchive__openlibrary-8350 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Subject tag autocomplete no longer sorts by frequency
When I developed autocomplete for subject tags, the results returned were automatically sorted by frequency — the most used subjects that matched were first on the list. This not only produced useful results, but helped concentrate the usage of particular subjects where multiple similar labels exist for a single topic, improving the quality of subject pages.
In subsequent refactoring of autocomplete support, this frequency sorting was lost.
### Steps to Reproduce
Edit a work and start typing in one of the subject fields to display the list of matching subjects.
* Actual: Subject suggestions are sorted in an apparently random order
* Expected: Subjects should be sorted with the "better" (more frequently used) choices at the top.
### Proposal & Constraints
I will be submitting a PR that restores the SOLR query that produces the frequency-sorted results.
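
As the sketch below shows, the restoration amounts to asking Solr to order subject matches by how often they are used (parameter names follow the autocomplete handler; the query string is illustrative):

```python
# Hypothetical select parameters for frequency-sorted subject suggestions.
params = {
    "q": "name:(histor*)",        # escaped user input
    "q_op": "AND",
    "fq": ["type:subject"],
    "fl": "key,name,work_count",  # work_count must be fetched to sort on it
    "sort": "work_count desc",    # most-used subjects first
    "rows": 5,
}
```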
### Stakeholders
@mheiman @cdrini
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openlibrary/plugins/worksearch/autocomplete.py`
Content:
```
1 import itertools
2 import web
3 import json
4
5
6 from infogami.utils import delegate
7 from infogami.utils.view import safeint
8 from openlibrary.core.models import Thing
9 from openlibrary.plugins.upstream import utils
10 from openlibrary.plugins.worksearch.search import get_solr
11 from openlibrary.utils import (
12 find_olid_in_string,
13 olid_to_key,
14 )
15
16
17 def to_json(d):
18 web.header('Content-Type', 'application/json')
19 return delegate.RawText(json.dumps(d))
20
21
22 class autocomplete(delegate.page):
23 path = "/_autocomplete"
24 fq = ['-type:edition']
25 fl = 'key,type,name,title,score'
26 olid_suffix: str | None = None
27 query = 'title:"{q}"^2 OR title:({q}*) OR name:"{q}"^2 OR name:({q}*)'
28
29 def db_fetch(self, key: str) -> Thing | None:
30 if thing := web.ctx.site.get(key):
31 return thing.as_fake_solr_record()
32 else:
33 return None
34
35 def doc_wrap(self, doc: dict):
36 """Modify the returned solr document in place."""
37 if 'name' not in doc:
38 doc['name'] = doc.get('title')
39
40 def doc_filter(self, doc: dict) -> bool:
41 """Exclude certain documents"""
42 return True
43
44 def GET(self):
45 return self.direct_get()
46
47 def direct_get(self, fq: list[str] | None = None):
48 i = web.input(q="", limit=5)
49 i.limit = safeint(i.limit, 5)
50
51 solr = get_solr()
52
53 # look for ID in query string here
54 q = solr.escape(i.q).strip()
55 embedded_olid = None
56 if self.olid_suffix:
57 embedded_olid = find_olid_in_string(q, self.olid_suffix)
58
59 if embedded_olid:
60 solr_q = f'key:"{olid_to_key(embedded_olid)}"'
61 else:
62 solr_q = self.query.format(q=q)
63
64 fq = fq or self.fq
65 params = {
66 'q_op': 'AND',
67 'rows': i.limit,
68 **({'fq': fq} if fq else {}),
69 # limit the fields returned for better performance
70 'fl': self.fl,
71 }
72
73 data = solr.select(solr_q, **params)
74 docs = data['docs']
75
76 if embedded_olid and not docs:
77 # Grumble! Work not in solr yet. Create a dummy.
78 fake_doc = self.db_fetch(olid_to_key(embedded_olid))
79 if fake_doc:
80 docs = [fake_doc]
81
82 result_docs = []
83 for d in docs:
84 if self.doc_filter(d):
85 self.doc_wrap(d)
86 result_docs.append(d)
87
88 return to_json(result_docs)
89
90
91 class languages_autocomplete(delegate.page):
92 path = "/languages/_autocomplete"
93
94 def GET(self):
95 i = web.input(q="", limit=5)
96 i.limit = safeint(i.limit, 5)
97 return to_json(
98 list(itertools.islice(utils.autocomplete_languages(i.q), i.limit))
99 )
100
101
102 class works_autocomplete(autocomplete):
103 path = "/works/_autocomplete"
104 fq = ['type:work']
105 fl = 'key,title,subtitle,cover_i,first_publish_year,author_name,edition_count'
106 olid_suffix = 'W'
107 query = 'title:"{q}"^2 OR title:({q}*)'
108
109 def doc_filter(self, doc: dict) -> bool:
110 # Exclude orphaned editions from autocomplete results
111 # Note: Do this here instead of with an `fq=key:*W` for performance
112 # reasons.
113 return doc['key'][-1] == 'W'
114
115 def doc_wrap(self, doc: dict):
116 doc['full_title'] = doc['title']
117 if 'subtitle' in doc:
118 doc['full_title'] += ": " + doc['subtitle']
119 doc['name'] = doc.get('title')
120
121
122 class authors_autocomplete(autocomplete):
123 path = "/authors/_autocomplete"
124 fq = ['type:author']
125 fl = 'key,name,alternate_names,birth_date,death_date,work_count,top_work,top_subjects'
126 olid_suffix = 'A'
127 query = 'name:({q}*) OR alternate_names:({q}*) OR name:"{q}"^2 OR alternate_names:"{q}"^2'
128
129 def doc_wrap(self, doc: dict):
130 if 'top_work' in doc:
131 doc['works'] = [doc.pop('top_work')]
132 else:
133 doc['works'] = []
134 doc['subjects'] = doc.pop('top_subjects', [])
135
136
137 class subjects_autocomplete(autocomplete):
138 # can't use /subjects/_autocomplete because the subjects endpoint = /subjects/[^/]+
139 path = "/subjects_autocomplete"
140 fq = ['type:subject']
141 fl = 'key,name'
142 query = 'name:({q}*)'
143
144 def GET(self):
145 i = web.input(type="")
146 fq = self.fq
147 if i.type:
148 fq = fq + [f'subject_type:{i.type}']
149
150 return super().direct_get(fq=fq)
151
152
153 def setup():
154 """Do required setup."""
155 pass
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openlibrary/plugins/worksearch/autocomplete.py b/openlibrary/plugins/worksearch/autocomplete.py
--- a/openlibrary/plugins/worksearch/autocomplete.py
+++ b/openlibrary/plugins/worksearch/autocomplete.py
@@ -24,6 +24,7 @@
fq = ['-type:edition']
fl = 'key,type,name,title,score'
olid_suffix: str | None = None
+ sort: str | None = None
query = 'title:"{q}"^2 OR title:({q}*) OR name:"{q}"^2 OR name:({q}*)'
def db_fetch(self, key: str) -> Thing | None:
@@ -68,6 +69,7 @@
**({'fq': fq} if fq else {}),
# limit the fields returned for better performance
'fl': self.fl,
+ **({'sort': self.sort} if self.sort else {}),
}
data = solr.select(solr_q, **params)
@@ -138,8 +140,9 @@
# can't use /subjects/_autocomplete because the subjects endpoint = /subjects/[^/]+
path = "/subjects_autocomplete"
fq = ['type:subject']
- fl = 'key,name'
+ fl = 'key,name,work_count'
query = 'name:({q}*)'
+ sort = 'work_count desc'
def GET(self):
i = web.input(type="")
| {"golden_diff": "diff --git a/openlibrary/plugins/worksearch/autocomplete.py b/openlibrary/plugins/worksearch/autocomplete.py\n--- a/openlibrary/plugins/worksearch/autocomplete.py\n+++ b/openlibrary/plugins/worksearch/autocomplete.py\n@@ -24,6 +24,7 @@\n fq = ['-type:edition']\n fl = 'key,type,name,title,score'\n olid_suffix: str | None = None\n+ sort: str | None = None\n query = 'title:\"{q}\"^2 OR title:({q}*) OR name:\"{q}\"^2 OR name:({q}*)'\n \n def db_fetch(self, key: str) -> Thing | None:\n@@ -68,6 +69,7 @@\n **({'fq': fq} if fq else {}),\n # limit the fields returned for better performance\n 'fl': self.fl,\n+ **({'sort': self.sort} if self.sort else {}),\n }\n \n data = solr.select(solr_q, **params)\n@@ -138,8 +140,9 @@\n # can't use /subjects/_autocomplete because the subjects endpoint = /subjects/[^/]+\n path = \"/subjects_autocomplete\"\n fq = ['type:subject']\n- fl = 'key,name'\n+ fl = 'key,name,work_count'\n query = 'name:({q}*)'\n+ sort = 'work_count desc'\n \n def GET(self):\n i = web.input(type=\"\")\n", "issue": "Subject tag autocomplete no longer sorts by frequency\nWhen I developed autocomplete for subject tags, the results returned were automatically sorted by frequency \u2014 the most used subjects that matched were first on the list. This not only produced useful results, but helped concentrate the usage of particular subjects where multiple similar labels exist for a single topic, improving the quality of subject pages.\r\n\r\nIn subsequent refactoring of autocomplete support, this frequency sorting was lost.\r\n\r\n### Steps to Reproduce\r\nEdit a work and start typing in one of the subject fields to display the list of matching subjects.\r\n\r\n<!-- What actually happened after these steps? What did you expect to happen? -->\r\n* Actual: Subject suggestions are sorted in an apparently random order\r\n* Expected: Subjects should be sorted with the \"better\" (more frequently used) choices at the top.\r\n\r\n### Proposal & Constraints\r\nI will be submitting a PR that restores the SOLR query that produces the frequency-sorted results.\r\n\r\n### Stakeholders\r\n@mheiman @cdrini \r\n\n", "before_files": [{"content": "import itertools\nimport web\nimport json\n\n\nfrom infogami.utils import delegate\nfrom infogami.utils.view import safeint\nfrom openlibrary.core.models import Thing\nfrom openlibrary.plugins.upstream import utils\nfrom openlibrary.plugins.worksearch.search import get_solr\nfrom openlibrary.utils import (\n find_olid_in_string,\n olid_to_key,\n)\n\n\ndef to_json(d):\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(d))\n\n\nclass autocomplete(delegate.page):\n path = \"/_autocomplete\"\n fq = ['-type:edition']\n fl = 'key,type,name,title,score'\n olid_suffix: str | None = None\n query = 'title:\"{q}\"^2 OR title:({q}*) OR name:\"{q}\"^2 OR name:({q}*)'\n\n def db_fetch(self, key: str) -> Thing | None:\n if thing := web.ctx.site.get(key):\n return thing.as_fake_solr_record()\n else:\n return None\n\n def doc_wrap(self, doc: dict):\n \"\"\"Modify the returned solr document in place.\"\"\"\n if 'name' not in doc:\n doc['name'] = doc.get('title')\n\n def doc_filter(self, doc: dict) -> bool:\n \"\"\"Exclude certain documents\"\"\"\n return True\n\n def GET(self):\n return self.direct_get()\n\n def direct_get(self, fq: list[str] | None = None):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n\n solr = get_solr()\n\n # look for ID in query string here\n q = solr.escape(i.q).strip()\n embedded_olid = None\n 
if self.olid_suffix:\n embedded_olid = find_olid_in_string(q, self.olid_suffix)\n\n if embedded_olid:\n solr_q = f'key:\"{olid_to_key(embedded_olid)}\"'\n else:\n solr_q = self.query.format(q=q)\n\n fq = fq or self.fq\n params = {\n 'q_op': 'AND',\n 'rows': i.limit,\n **({'fq': fq} if fq else {}),\n # limit the fields returned for better performance\n 'fl': self.fl,\n }\n\n data = solr.select(solr_q, **params)\n docs = data['docs']\n\n if embedded_olid and not docs:\n # Grumble! Work not in solr yet. Create a dummy.\n fake_doc = self.db_fetch(olid_to_key(embedded_olid))\n if fake_doc:\n docs = [fake_doc]\n\n result_docs = []\n for d in docs:\n if self.doc_filter(d):\n self.doc_wrap(d)\n result_docs.append(d)\n\n return to_json(result_docs)\n\n\nclass languages_autocomplete(delegate.page):\n path = \"/languages/_autocomplete\"\n\n def GET(self):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n return to_json(\n list(itertools.islice(utils.autocomplete_languages(i.q), i.limit))\n )\n\n\nclass works_autocomplete(autocomplete):\n path = \"/works/_autocomplete\"\n fq = ['type:work']\n fl = 'key,title,subtitle,cover_i,first_publish_year,author_name,edition_count'\n olid_suffix = 'W'\n query = 'title:\"{q}\"^2 OR title:({q}*)'\n\n def doc_filter(self, doc: dict) -> bool:\n # Exclude orphaned editions from autocomplete results\n # Note: Do this here instead of with an `fq=key:*W` for performance\n # reasons.\n return doc['key'][-1] == 'W'\n\n def doc_wrap(self, doc: dict):\n doc['full_title'] = doc['title']\n if 'subtitle' in doc:\n doc['full_title'] += \": \" + doc['subtitle']\n doc['name'] = doc.get('title')\n\n\nclass authors_autocomplete(autocomplete):\n path = \"/authors/_autocomplete\"\n fq = ['type:author']\n fl = 'key,name,alternate_names,birth_date,death_date,work_count,top_work,top_subjects'\n olid_suffix = 'A'\n query = 'name:({q}*) OR alternate_names:({q}*) OR name:\"{q}\"^2 OR alternate_names:\"{q}\"^2'\n\n def doc_wrap(self, doc: dict):\n if 'top_work' in doc:\n doc['works'] = [doc.pop('top_work')]\n else:\n doc['works'] = []\n doc['subjects'] = doc.pop('top_subjects', [])\n\n\nclass subjects_autocomplete(autocomplete):\n # can't use /subjects/_autocomplete because the subjects endpoint = /subjects/[^/]+\n path = \"/subjects_autocomplete\"\n fq = ['type:subject']\n fl = 'key,name'\n query = 'name:({q}*)'\n\n def GET(self):\n i = web.input(type=\"\")\n fq = self.fq\n if i.type:\n fq = fq + [f'subject_type:{i.type}']\n\n return super().direct_get(fq=fq)\n\n\ndef setup():\n \"\"\"Do required setup.\"\"\"\n pass\n", "path": "openlibrary/plugins/worksearch/autocomplete.py"}], "after_files": [{"content": "import itertools\nimport web\nimport json\n\n\nfrom infogami.utils import delegate\nfrom infogami.utils.view import safeint\nfrom openlibrary.core.models import Thing\nfrom openlibrary.plugins.upstream import utils\nfrom openlibrary.plugins.worksearch.search import get_solr\nfrom openlibrary.utils import (\n find_olid_in_string,\n olid_to_key,\n)\n\n\ndef to_json(d):\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(d))\n\n\nclass autocomplete(delegate.page):\n path = \"/_autocomplete\"\n fq = ['-type:edition']\n fl = 'key,type,name,title,score'\n olid_suffix: str | None = None\n sort: str | None = None\n query = 'title:\"{q}\"^2 OR title:({q}*) OR name:\"{q}\"^2 OR name:({q}*)'\n\n def db_fetch(self, key: str) -> Thing | None:\n if thing := web.ctx.site.get(key):\n return thing.as_fake_solr_record()\n else:\n return None\n\n def 
doc_wrap(self, doc: dict):\n \"\"\"Modify the returned solr document in place.\"\"\"\n if 'name' not in doc:\n doc['name'] = doc.get('title')\n\n def doc_filter(self, doc: dict) -> bool:\n \"\"\"Exclude certain documents\"\"\"\n return True\n\n def GET(self):\n return self.direct_get()\n\n def direct_get(self, fq: list[str] | None = None):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n\n solr = get_solr()\n\n # look for ID in query string here\n q = solr.escape(i.q).strip()\n embedded_olid = None\n if self.olid_suffix:\n embedded_olid = find_olid_in_string(q, self.olid_suffix)\n\n if embedded_olid:\n solr_q = f'key:\"{olid_to_key(embedded_olid)}\"'\n else:\n solr_q = self.query.format(q=q)\n\n fq = fq or self.fq\n params = {\n 'q_op': 'AND',\n 'rows': i.limit,\n **({'fq': fq} if fq else {}),\n # limit the fields returned for better performance\n 'fl': self.fl,\n **({'sort': self.sort} if self.sort else {}),\n }\n\n data = solr.select(solr_q, **params)\n docs = data['docs']\n\n if embedded_olid and not docs:\n # Grumble! Work not in solr yet. Create a dummy.\n fake_doc = self.db_fetch(olid_to_key(embedded_olid))\n if fake_doc:\n docs = [fake_doc]\n\n result_docs = []\n for d in docs:\n if self.doc_filter(d):\n self.doc_wrap(d)\n result_docs.append(d)\n\n return to_json(result_docs)\n\n\nclass languages_autocomplete(delegate.page):\n path = \"/languages/_autocomplete\"\n\n def GET(self):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n return to_json(\n list(itertools.islice(utils.autocomplete_languages(i.q), i.limit))\n )\n\n\nclass works_autocomplete(autocomplete):\n path = \"/works/_autocomplete\"\n fq = ['type:work']\n fl = 'key,title,subtitle,cover_i,first_publish_year,author_name,edition_count'\n olid_suffix = 'W'\n query = 'title:\"{q}\"^2 OR title:({q}*)'\n\n def doc_filter(self, doc: dict) -> bool:\n # Exclude orphaned editions from autocomplete results\n # Note: Do this here instead of with an `fq=key:*W` for performance\n # reasons.\n return doc['key'][-1] == 'W'\n\n def doc_wrap(self, doc: dict):\n doc['full_title'] = doc['title']\n if 'subtitle' in doc:\n doc['full_title'] += \": \" + doc['subtitle']\n doc['name'] = doc.get('title')\n\n\nclass authors_autocomplete(autocomplete):\n path = \"/authors/_autocomplete\"\n fq = ['type:author']\n fl = 'key,name,alternate_names,birth_date,death_date,work_count,top_work,top_subjects'\n olid_suffix = 'A'\n query = 'name:({q}*) OR alternate_names:({q}*) OR name:\"{q}\"^2 OR alternate_names:\"{q}\"^2'\n\n def doc_wrap(self, doc: dict):\n if 'top_work' in doc:\n doc['works'] = [doc.pop('top_work')]\n else:\n doc['works'] = []\n doc['subjects'] = doc.pop('top_subjects', [])\n\n\nclass subjects_autocomplete(autocomplete):\n # can't use /subjects/_autocomplete because the subjects endpoint = /subjects/[^/]+\n path = \"/subjects_autocomplete\"\n fq = ['type:subject']\n fl = 'key,name,work_count'\n query = 'name:({q}*)'\n sort = 'work_count desc'\n\n def GET(self):\n i = web.input(type=\"\")\n fq = self.fq\n if i.type:\n fq = fq + [f'subject_type:{i.type}']\n\n return super().direct_get(fq=fq)\n\n\ndef setup():\n \"\"\"Do required setup.\"\"\"\n pass\n", "path": "openlibrary/plugins/worksearch/autocomplete.py"}]} | 1,979 | 319 |
gh_patches_debug_11597 | rasdani/github-patches | git_diff | fedora-infra__bodhi-4037 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Side tag updates are not automatically pushed to testing
When I create a side tag update for a stable Fedora, it sits in pending until I manually click "push to testing". This is inconsistent with regular updates and a tad confusing.
See also https://pagure.io/fedora-infrastructure/issue/8930
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bodhi/server/consumers/signed.py`
Content:
```
1 # Copyright © 2016-2019 Red Hat, Inc.
2 #
3 # This file is part of Bodhi.
4 #
5 # This program is free software; you can redistribute it and/or
6 # modify it under the terms of the GNU General Public License
7 # as published by the Free Software Foundation; either version 2
8 # of the License, or (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License along with
16 # this program; if not, write to the Free Software Foundation, Inc., 51
17 # Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
18 """
19 The "signed handler".
20
21 This module is responsible for marking builds as "signed" when they get moved
22 from the pending-signing to pending-updates-testing tag by RoboSignatory.
23 """
24
25 import logging
26
27 import fedora_messaging
28 from sqlalchemy import func
29
30 from bodhi.server.config import config
31 from bodhi.server.models import Build, UpdateStatus, TestGatingStatus
32 from bodhi.server.util import transactional_session_maker
33
34 log = logging.getLogger('bodhi')
35
36
37 class SignedHandler(object):
38 """
39 The Bodhi Signed Handler.
40
41 A fedora-messaging listener waiting for messages from koji about builds being tagged.
42 """
43
44 def __init__(self):
45 """Initialize the SignedHandler."""
46 self.db_factory = transactional_session_maker()
47
48 def __call__(self, message: fedora_messaging.api.Message):
49 """
50 Handle messages arriving with the configured topic.
51
52 This marks a build as signed if it is assigned to the pending testing release tag.
53
54 Example message format::
55 {
56 'body': {
57 'build_id': 442562,
58 'name': 'colord',
59 'tag_id': 214,
60 'instance': 's390',
61 'tag': 'f26-updates-testing-pending',
62 'user': 'sharkcz',
63 'version': '1.3.4',
64 'owner': 'sharkcz',
65 'release': '1.fc26'
66 },
67 }
68
69 The message can contain additional keys.
70
71 Duplicate messages: this method is idempotent.
72
73 Args:
74 message: The incoming message in the format described above.
75 """
76 message = message.body
77 build_nvr = '%(name)s-%(version)s-%(release)s' % message
78 tag = message['tag']
79
80 log.info("%s tagged into %s" % (build_nvr, tag))
81
82 with self.db_factory() as dbsession:
83 build = Build.get(build_nvr)
84 if not build:
85 log.info("Build was not submitted, skipping")
86 return
87
88 if not build.release:
89 log.info('Build is not assigned to release, skipping')
90 return
91
92 if build.update and build.update.from_tag:
93 koji_testing_tag = build.release.get_testing_side_tag(build.update.from_tag)
94 if tag != koji_testing_tag:
95 log.info("Tag is not testing side tag, skipping")
96 return
97 else:
98 if build.release.pending_testing_tag != tag:
99 log.info("Tag is not pending_testing tag, skipping")
100 return
101
102 if build.signed:
103 log.info("Build was already marked as signed (maybe a duplicate message)")
104 return
105
106 # This build was moved into the pending_testing tag for the applicable release, which
107 # is done by RoboSignatory to indicate that the build has been correctly signed and
108 # written out. Mark it as such.
109 log.info("Build has been signed, marking")
110 build.signed = True
111 dbsession.flush()
112 log.info("Build %s has been marked as signed" % build_nvr)
113
114 # If every build in update is signed change status to testing
115 if build.update \
116 and not build.update.release.composed_by_bodhi \
117 and build.update.signed:
118 log.info("Every build in update is signed, set status to testing")
119
120 build.update.status = UpdateStatus.testing
121 build.update.date_testing = func.current_timestamp()
122 build.update.request = None
123 build.update.pushed = True
124
125 if config.get("test_gating.required"):
126 log.debug('Test gating is required, marking the update as waiting on test '
127 'gating and updating it from Greenwave to get the real status.')
128 build.update.test_gating_status = TestGatingStatus.waiting
129 build.update.update_test_gating_status()
130
131 log.info(f"Update {build.update.display_name} status has been set to testing")
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bodhi/server/consumers/signed.py b/bodhi/server/consumers/signed.py
--- a/bodhi/server/consumers/signed.py
+++ b/bodhi/server/consumers/signed.py
@@ -89,7 +89,9 @@
log.info('Build is not assigned to release, skipping')
return
- if build.update and build.update.from_tag:
+ if build.update \
+ and build.update.from_tag \
+ and not build.update.release.composed_by_bodhi:
koji_testing_tag = build.release.get_testing_side_tag(build.update.from_tag)
if tag != koji_testing_tag:
log.info("Tag is not testing side tag, skipping")
| {"golden_diff": "diff --git a/bodhi/server/consumers/signed.py b/bodhi/server/consumers/signed.py\n--- a/bodhi/server/consumers/signed.py\n+++ b/bodhi/server/consumers/signed.py\n@@ -89,7 +89,9 @@\n log.info('Build is not assigned to release, skipping')\n return\n \n- if build.update and build.update.from_tag:\n+ if build.update \\\n+ and build.update.from_tag \\\n+ and not build.update.release.composed_by_bodhi:\n koji_testing_tag = build.release.get_testing_side_tag(build.update.from_tag)\n if tag != koji_testing_tag:\n log.info(\"Tag is not testing side tag, skipping\")\n", "issue": "Side tag updates are not automatically pushed to testing \nWhen I create a side tag update for a stable Fedora, it sits in pending until I manually click \"push to testing\". This is inconsistent with regular updates and a tad confusing.\r\n\r\nSee also https://pagure.io/fedora-infrastructure/issue/8930\n", "before_files": [{"content": "# Copyright \u00a9 2016-2019 Red Hat, Inc.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nThe \"signed handler\".\n\nThis module is responsible for marking builds as \"signed\" when they get moved\nfrom the pending-signing to pending-updates-testing tag by RoboSignatory.\n\"\"\"\n\nimport logging\n\nimport fedora_messaging\nfrom sqlalchemy import func\n\nfrom bodhi.server.config import config\nfrom bodhi.server.models import Build, UpdateStatus, TestGatingStatus\nfrom bodhi.server.util import transactional_session_maker\n\nlog = logging.getLogger('bodhi')\n\n\nclass SignedHandler(object):\n \"\"\"\n The Bodhi Signed Handler.\n\n A fedora-messaging listener waiting for messages from koji about builds being tagged.\n \"\"\"\n\n def __init__(self):\n \"\"\"Initialize the SignedHandler.\"\"\"\n self.db_factory = transactional_session_maker()\n\n def __call__(self, message: fedora_messaging.api.Message):\n \"\"\"\n Handle messages arriving with the configured topic.\n\n This marks a build as signed if it is assigned to the pending testing release tag.\n\n Example message format::\n {\n 'body': {\n 'build_id': 442562,\n 'name': 'colord',\n 'tag_id': 214,\n 'instance': 's390',\n 'tag': 'f26-updates-testing-pending',\n 'user': 'sharkcz',\n 'version': '1.3.4',\n 'owner': 'sharkcz',\n 'release': '1.fc26'\n },\n }\n\n The message can contain additional keys.\n\n Duplicate messages: this method is idempotent.\n\n Args:\n message: The incoming message in the format described above.\n \"\"\"\n message = message.body\n build_nvr = '%(name)s-%(version)s-%(release)s' % message\n tag = message['tag']\n\n log.info(\"%s tagged into %s\" % (build_nvr, tag))\n\n with self.db_factory() as dbsession:\n build = Build.get(build_nvr)\n if not build:\n log.info(\"Build was not submitted, skipping\")\n return\n\n if not build.release:\n log.info('Build is not assigned to release, skipping')\n 
return\n\n if build.update and build.update.from_tag:\n koji_testing_tag = build.release.get_testing_side_tag(build.update.from_tag)\n if tag != koji_testing_tag:\n log.info(\"Tag is not testing side tag, skipping\")\n return\n else:\n if build.release.pending_testing_tag != tag:\n log.info(\"Tag is not pending_testing tag, skipping\")\n return\n\n if build.signed:\n log.info(\"Build was already marked as signed (maybe a duplicate message)\")\n return\n\n # This build was moved into the pending_testing tag for the applicable release, which\n # is done by RoboSignatory to indicate that the build has been correctly signed and\n # written out. Mark it as such.\n log.info(\"Build has been signed, marking\")\n build.signed = True\n dbsession.flush()\n log.info(\"Build %s has been marked as signed\" % build_nvr)\n\n # If every build in update is signed change status to testing\n if build.update \\\n and not build.update.release.composed_by_bodhi \\\n and build.update.signed:\n log.info(\"Every build in update is signed, set status to testing\")\n\n build.update.status = UpdateStatus.testing\n build.update.date_testing = func.current_timestamp()\n build.update.request = None\n build.update.pushed = True\n\n if config.get(\"test_gating.required\"):\n log.debug('Test gating is required, marking the update as waiting on test '\n 'gating and updating it from Greenwave to get the real status.')\n build.update.test_gating_status = TestGatingStatus.waiting\n build.update.update_test_gating_status()\n\n log.info(f\"Update {build.update.display_name} status has been set to testing\")\n", "path": "bodhi/server/consumers/signed.py"}], "after_files": [{"content": "# Copyright \u00a9 2016-2019 Red Hat, Inc.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nThe \"signed handler\".\n\nThis module is responsible for marking builds as \"signed\" when they get moved\nfrom the pending-signing to pending-updates-testing tag by RoboSignatory.\n\"\"\"\n\nimport logging\n\nimport fedora_messaging\nfrom sqlalchemy import func\n\nfrom bodhi.server.config import config\nfrom bodhi.server.models import Build, UpdateStatus, TestGatingStatus\nfrom bodhi.server.util import transactional_session_maker\n\nlog = logging.getLogger('bodhi')\n\n\nclass SignedHandler(object):\n \"\"\"\n The Bodhi Signed Handler.\n\n A fedora-messaging listener waiting for messages from koji about builds being tagged.\n \"\"\"\n\n def __init__(self):\n \"\"\"Initialize the SignedHandler.\"\"\"\n self.db_factory = transactional_session_maker()\n\n def __call__(self, message: fedora_messaging.api.Message):\n \"\"\"\n Handle messages arriving with the configured topic.\n\n This marks a build as signed if it is assigned to the pending testing release tag.\n\n Example message format::\n {\n 'body': {\n 'build_id': 442562,\n 'name': 'colord',\n 'tag_id': 214,\n 'instance': 's390',\n 'tag': 'f26-updates-testing-pending',\n 'user': 'sharkcz',\n 'version': '1.3.4',\n 'owner': 'sharkcz',\n 'release': '1.fc26'\n },\n }\n\n The message can contain additional keys.\n\n Duplicate messages: this method is idempotent.\n\n Args:\n message: The incoming message in the format described above.\n \"\"\"\n message = message.body\n build_nvr = '%(name)s-%(version)s-%(release)s' % message\n tag = message['tag']\n\n log.info(\"%s tagged into %s\" % (build_nvr, tag))\n\n with self.db_factory() as dbsession:\n build = Build.get(build_nvr)\n if not build:\n log.info(\"Build was not submitted, skipping\")\n return\n\n if not build.release:\n log.info('Build is not assigned to release, skipping')\n return\n\n if build.update \\\n and build.update.from_tag \\\n and not build.update.release.composed_by_bodhi:\n koji_testing_tag = build.release.get_testing_side_tag(build.update.from_tag)\n if tag != koji_testing_tag:\n log.info(\"Tag is not testing side tag, skipping\")\n return\n else:\n if build.release.pending_testing_tag != tag:\n log.info(\"Tag is not pending_testing tag, skipping\")\n return\n\n if build.signed:\n log.info(\"Build was already marked as signed (maybe a duplicate message)\")\n return\n\n # This build was moved into the pending_testing tag for the applicable release, which\n # is done by RoboSignatory to indicate that the build has been correctly signed and\n # written out. 
Mark it as such.\n log.info(\"Build has been signed, marking\")\n build.signed = True\n dbsession.flush()\n log.info(\"Build %s has been marked as signed\" % build_nvr)\n\n # If every build in update is signed change status to testing\n if build.update \\\n and not build.update.release.composed_by_bodhi \\\n and build.update.signed:\n log.info(\"Every build in update is signed, set status to testing\")\n\n build.update.status = UpdateStatus.testing\n build.update.date_testing = func.current_timestamp()\n build.update.request = None\n build.update.pushed = True\n\n if config.get(\"test_gating.required\"):\n log.debug('Test gating is required, marking the update as waiting on test '\n 'gating and updating it from Greenwave to get the real status.')\n build.update.test_gating_status = TestGatingStatus.waiting\n build.update.update_test_gating_status()\n\n log.info(f\"Update {build.update.display_name} status has been set to testing\")\n", "path": "bodhi/server/consumers/signed.py"}]} | 1,666 | 156 |
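The whole fix above hangs on one new guard: the side-tag short-circuit now applies only to releases that Bodhi does not compose (e.g. Rawhide), so side-tag builds for stable releases fall through to the ordinary pending-testing tag check and can be marked signed. A minimal sketch of that condition, using hypothetical `SimpleNamespace` stand-ins rather than Bodhi's real models:

```python
from types import SimpleNamespace

def uses_side_tag_path(build):
    # Patched condition: short-circuit only for releases Bodhi does NOT
    # compose; stable releases continue to the pending-testing tag check.
    return bool(
        build.update
        and build.update.from_tag
        and not build.update.release.composed_by_bodhi
    )

release = SimpleNamespace(composed_by_bodhi=True)   # a stable Fedora release
update = SimpleNamespace(from_tag="f33-build-side-1234", release=release)
build = SimpleNamespace(update=update)
print(uses_side_tag_path(build))   # False -> normal path, build gets marked signed

release.composed_by_bodhi = False  # a Rawhide-style release
print(uses_side_tag_path(build))   # True  -> side-tag testing-tag check still applies
```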
gh_patches_debug_22750 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-4761 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot inject websocket with binary or string content.
#### Problem Description
mitmproxy attempts to parse the injected websocket binary data as text, and replaces the binary data if it cannot be decoded into text, changing the injected websocket message. Also, when a string is given instead, there is an error:
File "\mitmproxy\proxy\layers\websocket.py", line 230, in msg
data_str = data.decode(errors="replace")
AttributeError: 'str' object has no attribute 'decode'
#### Steps to reproduce the behavior:
1. from mitmproxy.ctx import master
2. master.commands.call("inject.websocket", flow, False, bytes([i for i in range(256)])) # No error, but message contents are changed
3. master.commands.call("inject.websocket", flow, False, "string") # AttributeError
#### System Information
Mitmproxy: 7.0.2
Python: 3.9.6
OpenSSL: OpenSSL 1.1.1k 25 Mar 2021
Platform: Windows-10-10.0.19043-SP0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/addons/websocket-inject-message.py`
Content:
```
1 """
2 Inject a WebSocket message into a running connection.
3
4 This example shows how to inject a WebSocket message into a running connection.
5 """
6 import asyncio
7
8 from mitmproxy import ctx, http
9
10
11 # Simple example: Inject a message as a response to an event
12
13 def websocket_message(flow: http.HTTPFlow):
14 assert flow.websocket is not None # make type checker happy
15 last_message = flow.websocket.messages[-1]
16 if b"secret" in last_message.content:
17 last_message.drop()
18 ctx.master.commands.call("inject.websocket", flow, last_message.from_client, "ssssssh")
19
20
21 # Complex example: Schedule a periodic timer
22
23 async def inject_async(flow: http.HTTPFlow):
24 msg = "hello from mitmproxy! "
25 assert flow.websocket is not None # make type checker happy
26 while flow.websocket.timestamp_end is None:
27 ctx.master.commands.call("inject.websocket", flow, True, msg)
28 await asyncio.sleep(1)
29 msg = msg[1:] + msg[:1]
30
31
32 def websocket_start(flow: http.HTTPFlow):
33 asyncio.create_task(inject_async(flow))
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/addons/websocket-inject-message.py b/examples/addons/websocket-inject-message.py
--- a/examples/addons/websocket-inject-message.py
+++ b/examples/addons/websocket-inject-message.py
@@ -13,9 +13,9 @@
def websocket_message(flow: http.HTTPFlow):
assert flow.websocket is not None # make type checker happy
last_message = flow.websocket.messages[-1]
- if b"secret" in last_message.content:
+ if last_message.is_text and "secret" in last_message.text:
last_message.drop()
- ctx.master.commands.call("inject.websocket", flow, last_message.from_client, "ssssssh")
+ ctx.master.commands.call("inject.websocket", flow, last_message.from_client, "ssssssh".encode())
# Complex example: Schedule a periodic timer
@@ -24,7 +24,7 @@
msg = "hello from mitmproxy! "
assert flow.websocket is not None # make type checker happy
while flow.websocket.timestamp_end is None:
- ctx.master.commands.call("inject.websocket", flow, True, msg)
+ ctx.master.commands.call("inject.websocket", flow, True, msg.encode())
await asyncio.sleep(1)
msg = msg[1:] + msg[:1]
| {"golden_diff": "diff --git a/examples/addons/websocket-inject-message.py b/examples/addons/websocket-inject-message.py\n--- a/examples/addons/websocket-inject-message.py\n+++ b/examples/addons/websocket-inject-message.py\n@@ -13,9 +13,9 @@\n def websocket_message(flow: http.HTTPFlow):\n assert flow.websocket is not None # make type checker happy\n last_message = flow.websocket.messages[-1]\n- if b\"secret\" in last_message.content:\n+ if last_message.is_text and \"secret\" in last_message.text:\n last_message.drop()\n- ctx.master.commands.call(\"inject.websocket\", flow, last_message.from_client, \"ssssssh\")\n+ ctx.master.commands.call(\"inject.websocket\", flow, last_message.from_client, \"ssssssh\".encode())\n \n \n # Complex example: Schedule a periodic timer\n@@ -24,7 +24,7 @@\n msg = \"hello from mitmproxy! \"\n assert flow.websocket is not None # make type checker happy\n while flow.websocket.timestamp_end is None:\n- ctx.master.commands.call(\"inject.websocket\", flow, True, msg)\n+ ctx.master.commands.call(\"inject.websocket\", flow, True, msg.encode())\n await asyncio.sleep(1)\n msg = msg[1:] + msg[:1]\n", "issue": "Cannot inject websocket with binary or string content.\n#### Problem Description\r\nmitmproxy attempts to parse the injected websocket binary data as text, and replaces the binary data if it cannot be decoded into text, changing the injected websocket message. Also, when a string is given instead, there is error:\r\nFile \"\\mitmproxy\\proxy\\layers\\websocket.py\", line 230, in msg\r\n data_str = data.decode(errors=\"replace\")\r\nAttributeError: 'str' object has no attribute 'decode'\r\n\r\n#### Steps to reproduce the behavior:\r\n1. from mitmproxy.ctx import master\r\n2. master.commands.call(\"inject.websocket\", flow, False, bytes([i for i in range(256)])) # No error, but message contents are changed\r\n3. master.commands.call(\"inject.websocket\", flow, False, \"string\") # AttributeError\r\n\r\n#### System Information\r\nMitmproxy: 7.0.2\r\nPython: 3.9.6\r\nOpenSSL: OpenSSL 1.1.1k 25 Mar 2021\r\nPlatform: Windows-10-10.0.19043-SP0\r\n\n", "before_files": [{"content": "\"\"\"\nInject a WebSocket message into a running connection.\n\nThis example shows how to inject a WebSocket message into a running connection.\n\"\"\"\nimport asyncio\n\nfrom mitmproxy import ctx, http\n\n\n# Simple example: Inject a message as a response to an event\n\ndef websocket_message(flow: http.HTTPFlow):\n assert flow.websocket is not None # make type checker happy\n last_message = flow.websocket.messages[-1]\n if b\"secret\" in last_message.content:\n last_message.drop()\n ctx.master.commands.call(\"inject.websocket\", flow, last_message.from_client, \"ssssssh\")\n\n\n# Complex example: Schedule a periodic timer\n\nasync def inject_async(flow: http.HTTPFlow):\n msg = \"hello from mitmproxy! 
\"\n assert flow.websocket is not None # make type checker happy\n while flow.websocket.timestamp_end is None:\n ctx.master.commands.call(\"inject.websocket\", flow, True, msg)\n await asyncio.sleep(1)\n msg = msg[1:] + msg[:1]\n\n\ndef websocket_start(flow: http.HTTPFlow):\n asyncio.create_task(inject_async(flow))\n", "path": "examples/addons/websocket-inject-message.py"}], "after_files": [{"content": "\"\"\"\nInject a WebSocket message into a running connection.\n\nThis example shows how to inject a WebSocket message into a running connection.\n\"\"\"\nimport asyncio\n\nfrom mitmproxy import ctx, http\n\n\n# Simple example: Inject a message as a response to an event\n\ndef websocket_message(flow: http.HTTPFlow):\n assert flow.websocket is not None # make type checker happy\n last_message = flow.websocket.messages[-1]\n if last_message.is_text and \"secret\" in last_message.text:\n last_message.drop()\n ctx.master.commands.call(\"inject.websocket\", flow, last_message.from_client, \"ssssssh\".encode())\n\n\n# Complex example: Schedule a periodic timer\n\nasync def inject_async(flow: http.HTTPFlow):\n msg = \"hello from mitmproxy! \"\n assert flow.websocket is not None # make type checker happy\n while flow.websocket.timestamp_end is None:\n ctx.master.commands.call(\"inject.websocket\", flow, True, msg.encode())\n await asyncio.sleep(1)\n msg = msg[1:] + msg[:1]\n\n\ndef websocket_start(flow: http.HTTPFlow):\n asyncio.create_task(inject_async(flow))\n", "path": "examples/addons/websocket-inject-message.py"}]} | 805 | 280 |
gh_patches_debug_385 | rasdani/github-patches | git_diff | searx__searx-1477 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
New release?
Hi,
I'm the Debian maintainer of searx and we are still shipping version 0.14.0 of searx because there has not been a more recent release since February.
Unfortunately, we see a number of services that do not work anymore with 0.14.0 but which are fixed in git. Would it be possible to make a release soon?
The last commit to the repository was back in August. Is the project still alive? Debian plans to release its next stable version soon and we should not include searx if it's not maintained anymore.
Thanks!
cheers, josch
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/version.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 '''
3 searx is free software: you can redistribute it and/or modify
4 it under the terms of the GNU Affero General Public License as published by
5 the Free Software Foundation, either version 3 of the License, or
6 (at your option) any later version.
7
8 searx is distributed in the hope that it will be useful,
9 but WITHOUT ANY WARRANTY; without even the implied warranty of
10 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
11 GNU Affero General Public License for more details.
12
13 You should have received a copy of the GNU Affero General Public License
14 along with searx. If not, see < http://www.gnu.org/licenses/ >.
15
16 (C) 2013- by Adam Tauber, <[email protected]>
17 '''
18
19 # version of searx
20 VERSION_MAJOR = 0
21 VERSION_MINOR = 14
22 VERSION_BUILD = 0
23
24 VERSION_STRING = "{0}.{1}.{2}".format(VERSION_MAJOR,
25 VERSION_MINOR,
26 VERSION_BUILD)
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/version.py b/searx/version.py
--- a/searx/version.py
+++ b/searx/version.py
@@ -18,7 +18,7 @@
# version of searx
VERSION_MAJOR = 0
-VERSION_MINOR = 14
+VERSION_MINOR = 15
VERSION_BUILD = 0
VERSION_STRING = "{0}.{1}.{2}".format(VERSION_MAJOR,
| {"golden_diff": "diff --git a/searx/version.py b/searx/version.py\n--- a/searx/version.py\n+++ b/searx/version.py\n@@ -18,7 +18,7 @@\n \n # version of searx\n VERSION_MAJOR = 0\n-VERSION_MINOR = 14\n+VERSION_MINOR = 15\n VERSION_BUILD = 0\n \n VERSION_STRING = \"{0}.{1}.{2}\".format(VERSION_MAJOR,\n", "issue": "New release?\nHi,\r\n\r\nI'm the Debian maintainer of searx and we are still shipping version 0.14.0 of searx because there has not been a more recent release since February.\r\n\r\nUnfortunately, we see a number of services that do not work anymore with 0.14.0 but which are fixed in git. Would it be possible to make a release soon?\r\n\r\nThe last commit to the repository was back in August. Is the project still alive? Debian plans to release its next stable version soon and we should not include searx if it's not maintained anymore.\r\n\r\nThanks!\r\ncheers, josch\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2013- by Adam Tauber, <[email protected]>\n'''\n\n# version of searx\nVERSION_MAJOR = 0\nVERSION_MINOR = 14\nVERSION_BUILD = 0\n\nVERSION_STRING = \"{0}.{1}.{2}\".format(VERSION_MAJOR,\n VERSION_MINOR,\n VERSION_BUILD)\n", "path": "searx/version.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2013- by Adam Tauber, <[email protected]>\n'''\n\n# version of searx\nVERSION_MAJOR = 0\nVERSION_MINOR = 15\nVERSION_BUILD = 0\n\nVERSION_STRING = \"{0}.{1}.{2}\".format(VERSION_MAJOR,\n VERSION_MINOR,\n VERSION_BUILD)\n", "path": "searx/version.py"}]} | 657 | 95 |
gh_patches_debug_2628 | rasdani/github-patches | git_diff | praw-dev__praw-1441 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PRAW 6.5.1 and 7.0.0 require Python versions above 3.5.2
**Describe the bug**
At https://praw.readthedocs.io/en/latest/getting_started/installation.html, it says:
> PRAW supports Python 3.5+
3.5.2 seems to be insufficient for PRAW versions after 6.4.0. I *think* 3.5.3 is probably sufficient based on what I have read searching for information on this error message, but I am skipping that version on this particular system so I haven't confirmed.
**To Reproduce**
Steps to reproduce the behavior:
1. Upgrade PRAW to either version 6.5.1 or 7.0.0
2. Run a simple PRAW script
3. Get this error:
```
$ python3 ~/test.py
Traceback (most recent call last):
File "/home/myusername/test.py", line 5, in <module>
import praw
File "/home/myusername/.local/lib/python3.5/site-packages/praw/__init__.py", line 14, in <module>
from .reddit import Reddit # NOQA
File "/home/myusername/.local/lib/python3.5/site-packages/praw/reddit.py", line 50, in <module>
class Reddit:
File "/home/myusername/.local/lib/python3.5/site-packages/praw/reddit.py", line 128, in Reddit
requestor_kwargs: Dict[str, Any] = None,
File "/usr/lib/python3.5/typing.py", line 649, in __getitem__
return Union[arg, type(None)]
File "/usr/lib/python3.5/typing.py", line 552, in __getitem__
dict(self.__dict__), parameters, _root=True)
File "/usr/lib/python3.5/typing.py", line 512, in __new__
for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
File "/usr/lib/python3.5/typing.py", line 512, in <genexpr>
for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
File "/usr/lib/python3.5/typing.py", line 1077, in __subclasscheck__
if super().__subclasscheck__(cls):
File "/usr/lib/python3.5/abc.py", line 225, in __subclasscheck__
for scls in cls.__subclasses__():
TypeError: descriptor '__subclasses__' of 'type' object needs an argument
```
**Expected behavior**
Python 3.5.2 works fine with PRAW 6.4.0 and earlier.
**Code/Logs**
`import praw` will do the trick.
**System Info**
- OS: Linux
- Python: 3.5.2
- PRAW Version: 6.5.1 or 7.0.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 """praw setup.py"""
2
3 import re
4 from codecs import open
5 from os import path
6
7 from setuptools import find_packages, setup
8
9 PACKAGE_NAME = "praw"
10 HERE = path.abspath(path.dirname(__file__))
11 with open(path.join(HERE, "README.rst"), encoding="utf-8") as fp:
12 README = fp.read()
13 with open(path.join(HERE, PACKAGE_NAME, "const.py"), encoding="utf-8") as fp:
14 VERSION = re.search('__version__ = "([^"]+)"', fp.read()).group(1)
15
16 extras = {
17 "ci": ["coveralls"],
18 "dev": ["pre-commit"],
19 "lint": [
20 "black",
21 "flake8",
22 "pydocstyle",
23 "sphinx<3.0",
24 "sphinx_rtd_theme",
25 ],
26 "test": [
27 "betamax >=0.8, <0.9",
28 "betamax-matchers >=0.3.0, <0.5",
29 "pytest >=2.7.3",
30 ],
31 }
32 extras["dev"] += extras["lint"] + extras["test"]
33
34 setup(
35 name=PACKAGE_NAME,
36 author="Bryce Boe",
37 author_email="[email protected]",
38 python_requires=">=3.5",
39 classifiers=[
40 "Development Status :: 5 - Production/Stable",
41 "Environment :: Console",
42 "Intended Audience :: Developers",
43 "License :: OSI Approved :: BSD License",
44 "Natural Language :: English",
45 "Operating System :: OS Independent",
46 "Programming Language :: Python",
47 "Programming Language :: Python :: 3",
48 "Programming Language :: Python :: 3.5",
49 "Programming Language :: Python :: 3.6",
50 "Programming Language :: Python :: 3.7",
51 "Programming Language :: Python :: 3.8",
52 "Topic :: Utilities",
53 ],
54 description=(
55 "PRAW, an acronym for `Python Reddit API Wrapper`, is a "
56 "python package that allows for simple access to "
57 "reddit's API."
58 ),
59 extras_require=extras,
60 install_requires=[
61 "prawcore >=1.3.0, <2.0",
62 "update_checker >=0.16",
63 "websocket-client >=0.54.0",
64 ],
65 keywords="reddit api wrapper",
66 license="Simplified BSD License",
67 long_description=README,
68 package_data={
69 "": ["LICENSE.txt"],
70 PACKAGE_NAME: ["*.ini", "images/*.jpg"],
71 },
72 packages=find_packages(exclude=["tests", "tests.*", "tools", "tools.*"]),
73 url="https://praw.readthedocs.org/",
74 version=VERSION,
75 )
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -35,7 +35,7 @@
name=PACKAGE_NAME,
author="Bryce Boe",
author_email="[email protected]",
- python_requires=">=3.5",
+ python_requires=">3.5.3",
classifiers=[
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -35,7 +35,7 @@\n name=PACKAGE_NAME,\n author=\"Bryce Boe\",\n author_email=\"[email protected]\",\n- python_requires=\">=3.5\",\n+ python_requires=\">3.5.3\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n", "issue": "PRAW 6.5.1 and 7.0.0 require Python versions above 3.5.2\n**Describe the bug**\r\n\r\nAt https://praw.readthedocs.io/en/latest/getting_started/installation.html, it says:\r\n\r\n> PRAW supports Python 3.5+\r\n\r\n3.5.2 seems to be insufficient for PRAW versions after 6.4.0. I *think* 3.5.3 is probably sufficient based on what I have read searching for information on this error message, but I am skipping that version on this particular system so I haven't confirmed.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Upgrade PRAW to either version 6.5.1 or 7.0.0\r\n\r\n2. Run a simple PRAW script\r\n\r\n3. Get this error:\r\n\r\n```\r\n$ python3 ~/test.py\r\nTraceback (most recent call last):\r\n File \"/home/myusername/test.py\", line 5, in <module>\r\n import praw\r\n File \"/home/myusername/.local/lib/python3.5/site-packages/praw/__init__.py\", line 14, in <module>\r\n from .reddit import Reddit # NOQA\r\n File \"/home/myusername/.local/lib/python3.5/site-packages/praw/reddit.py\", line 50, in <module>\r\n class Reddit:\r\n File \"/home/myusername/.local/lib/python3.5/site-packages/praw/reddit.py\", line 128, in Reddit\r\n requestor_kwargs: Dict[str, Any] = None,\r\n File \"/usr/lib/python3.5/typing.py\", line 649, in __getitem__\r\n return Union[arg, type(None)]\r\n File \"/usr/lib/python3.5/typing.py\", line 552, in __getitem__\r\n dict(self.__dict__), parameters, _root=True)\r\n File \"/usr/lib/python3.5/typing.py\", line 512, in __new__\r\n for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):\r\n File \"/usr/lib/python3.5/typing.py\", line 512, in <genexpr>\r\n for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):\r\n File \"/usr/lib/python3.5/typing.py\", line 1077, in __subclasscheck__\r\n if super().__subclasscheck__(cls):\r\n File \"/usr/lib/python3.5/abc.py\", line 225, in __subclasscheck__\r\n for scls in cls.__subclasses__():\r\nTypeError: descriptor '__subclasses__' of 'type' object needs an argument\r\n```\r\n\r\n**Expected behavior**\r\n\r\nPython 3.5.2 works fine with PRAW 6.4.0 and earlier.\r\n\r\n**Code/Logs**\r\n\r\n`import praw` will do the trick.\r\n\r\n**System Info**\r\n - OS: Linux\r\n - Python: 3.5.2\r\n - PRAW Version: 6.5.1 or 7.0.0\r\n\r\n\n", "before_files": [{"content": "\"\"\"praw setup.py\"\"\"\n\nimport re\nfrom codecs import open\nfrom os import path\n\nfrom setuptools import find_packages, setup\n\nPACKAGE_NAME = \"praw\"\nHERE = path.abspath(path.dirname(__file__))\nwith open(path.join(HERE, \"README.rst\"), encoding=\"utf-8\") as fp:\n README = fp.read()\nwith open(path.join(HERE, PACKAGE_NAME, \"const.py\"), encoding=\"utf-8\") as fp:\n VERSION = re.search('__version__ = \"([^\"]+)\"', fp.read()).group(1)\n\nextras = {\n \"ci\": [\"coveralls\"],\n \"dev\": [\"pre-commit\"],\n \"lint\": [\n \"black\",\n \"flake8\",\n \"pydocstyle\",\n \"sphinx<3.0\",\n \"sphinx_rtd_theme\",\n ],\n \"test\": [\n \"betamax >=0.8, <0.9\",\n \"betamax-matchers >=0.3.0, <0.5\",\n \"pytest >=2.7.3\",\n ],\n}\nextras[\"dev\"] += extras[\"lint\"] + extras[\"test\"]\n\nsetup(\n name=PACKAGE_NAME,\n author=\"Bryce Boe\",\n author_email=\"[email protected]\",\n python_requires=\">=3.5\",\n 
classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Utilities\",\n ],\n description=(\n \"PRAW, an acronym for `Python Reddit API Wrapper`, is a \"\n \"python package that allows for simple access to \"\n \"reddit's API.\"\n ),\n extras_require=extras,\n install_requires=[\n \"prawcore >=1.3.0, <2.0\",\n \"update_checker >=0.16\",\n \"websocket-client >=0.54.0\",\n ],\n keywords=\"reddit api wrapper\",\n license=\"Simplified BSD License\",\n long_description=README,\n package_data={\n \"\": [\"LICENSE.txt\"],\n PACKAGE_NAME: [\"*.ini\", \"images/*.jpg\"],\n },\n packages=find_packages(exclude=[\"tests\", \"tests.*\", \"tools\", \"tools.*\"]),\n url=\"https://praw.readthedocs.org/\",\n version=VERSION,\n)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"praw setup.py\"\"\"\n\nimport re\nfrom codecs import open\nfrom os import path\n\nfrom setuptools import find_packages, setup\n\nPACKAGE_NAME = \"praw\"\nHERE = path.abspath(path.dirname(__file__))\nwith open(path.join(HERE, \"README.rst\"), encoding=\"utf-8\") as fp:\n README = fp.read()\nwith open(path.join(HERE, PACKAGE_NAME, \"const.py\"), encoding=\"utf-8\") as fp:\n VERSION = re.search('__version__ = \"([^\"]+)\"', fp.read()).group(1)\n\nextras = {\n \"ci\": [\"coveralls\"],\n \"dev\": [\"pre-commit\"],\n \"lint\": [\n \"black\",\n \"flake8\",\n \"pydocstyle\",\n \"sphinx<3.0\",\n \"sphinx_rtd_theme\",\n ],\n \"test\": [\n \"betamax >=0.8, <0.9\",\n \"betamax-matchers >=0.3.0, <0.5\",\n \"pytest >=2.7.3\",\n ],\n}\nextras[\"dev\"] += extras[\"lint\"] + extras[\"test\"]\n\nsetup(\n name=PACKAGE_NAME,\n author=\"Bryce Boe\",\n author_email=\"[email protected]\",\n python_requires=\">3.5.3\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Utilities\",\n ],\n description=(\n \"PRAW, an acronym for `Python Reddit API Wrapper`, is a \"\n \"python package that allows for simple access to \"\n \"reddit's API.\"\n ),\n extras_require=extras,\n install_requires=[\n \"prawcore >=1.3.0, <2.0\",\n \"update_checker >=0.16\",\n \"websocket-client >=0.54.0\",\n ],\n keywords=\"reddit api wrapper\",\n license=\"Simplified BSD License\",\n long_description=README,\n package_data={\n \"\": [\"LICENSE.txt\"],\n PACKAGE_NAME: [\"*.ini\", \"images/*.jpg\"],\n },\n packages=find_packages(exclude=[\"tests\", \"tests.*\", \"tools\", \"tools.*\"]),\n url=\"https://praw.readthedocs.org/\",\n version=VERSION,\n)\n", "path": "setup.py"}]} | 1,637 | 101 |
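The packaging-level fix swaps `>=3.5` for the exclusive `>3.5.3`. One way to sanity-check what each specifier admits, assuming the `packaging` library (which pip itself vendors) is available:

```python
from packaging.specifiers import SpecifierSet

old, new = SpecifierSet(">=3.5"), SpecifierSet(">3.5.3")
for candidate in ("3.5.2", "3.5.3", "3.5.4", "3.6.0"):
    print(candidate, candidate in old, candidate in new)
# 3.5.2 is accepted by ">=3.5" but rejected by ">3.5.3" -- the point of the
# patch: pip stops offering new PRAW releases to interpreters that crash.
```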
gh_patches_debug_39983 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1613 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove py2 leftovers
This is an issue to track all of the py2 stuff we can remove once a sufficient waiting period after 6.0 has passed.
- [x] Remove `zerorpc` (py2 only)
- [x] Remove `compat.py`
- [x] Run `pyupgrade`?
Ref #928
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/contrib/zerorpc/__init__.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2012, the Sentry Team, see AUTHORS for more details
4 # Copyright (c) 2019, Elasticsearch BV
5 # All rights reserved.
6 #
7 # Redistribution and use in source and binary forms, with or without
8 # modification, are permitted provided that the following conditions are met:
9 #
10 # * Redistributions of source code must retain the above copyright notice, this
11 # list of conditions and the following disclaimer.
12 #
13 # * Redistributions in binary form must reproduce the above copyright notice,
14 # this list of conditions and the following disclaimer in the documentation
15 # and/or other materials provided with the distribution.
16 #
17 # * Neither the name of the copyright holder nor the names of its
18 # contributors may be used to endorse or promote products derived from
19 # this software without specific prior written permission.
20 #
21 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
22 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
23 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
24 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
25 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
26 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
27 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
29 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
30
31
32 import inspect
33
34 from elasticapm.base import Client
35
36
37 class Middleware(object):
38 """ElasticAPM middleware for ZeroRPC.
39
40 >>> elasticapm = Middleware(service_name='..', secret_token='...')
41 >>> zerorpc.Context.get_instance().register_middleware(elasticapm)
42
43 Exceptions detected server-side in ZeroRPC will be submitted to the apm server (and
44 propagated to the client as well).
45 """
46
47 def __init__(self, hide_zerorpc_frames=True, client=None, **kwargs):
48 """Create a middleware object that can be injected in a ZeroRPC server.
49
50 - hide_zerorpc_frames: modify the exception stacktrace to remove the
51 internal zerorpc frames (True by default to make
52 the stacktrace as readable as possible);
53 - client: use an existing raven.Client object, otherwise one will be
54 instantiated from the keyword arguments.
55
56 """
57 self._elasticapm_client = client or Client(**kwargs)
58 self._hide_zerorpc_frames = hide_zerorpc_frames
59
60 def server_inspect_exception(self, req_event, rep_event, task_ctx, exc_info):
61 """Called when an exception has been raised in the code run by ZeroRPC"""
62
63 # Hide the zerorpc internal frames for readability, for a REQ/REP or
64 # REQ/STREAM server the frames to hide are:
65 # - core.ServerBase._async_task
66 # - core.Pattern*.process_call
67 # - core.DecoratorBase.__call__
68 #
69 # For a PUSH/PULL or PUB/SUB server the frame to hide is:
70 # - core.Puller._receiver
71 if self._hide_zerorpc_frames:
72 traceback = exc_info[2]
73 while traceback:
74 zerorpc_frame = traceback.tb_frame
75 zerorpc_frame.f_locals["__traceback_hide__"] = True
76 frame_info = inspect.getframeinfo(zerorpc_frame)
77 # Is there a better way than this (or looking up the filenames
78 # or hardcoding the number of frames to skip) to know when we
79 # are out of zerorpc?
80 if frame_info.function == "__call__" or frame_info.function == "_receiver":
81 break
82 traceback = traceback.tb_next
83
84 self._elasticapm_client.capture_exception(exc_info, extra=task_ctx, handled=False)
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticapm/contrib/zerorpc/__init__.py b/elasticapm/contrib/zerorpc/__init__.py
deleted file mode 100644
--- a/elasticapm/contrib/zerorpc/__init__.py
+++ /dev/null
@@ -1,84 +0,0 @@
-# BSD 3-Clause License
-#
-# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details
-# Copyright (c) 2019, Elasticsearch BV
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright notice, this
-# list of conditions and the following disclaimer.
-#
-# * Redistributions in binary form must reproduce the above copyright notice,
-# this list of conditions and the following disclaimer in the documentation
-# and/or other materials provided with the distribution.
-#
-# * Neither the name of the copyright holder nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
-# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
-# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-
-
-import inspect
-
-from elasticapm.base import Client
-
-
-class Middleware(object):
- """ElasticAPM middleware for ZeroRPC.
-
- >>> elasticapm = Middleware(service_name='..', secret_token='...')
- >>> zerorpc.Context.get_instance().register_middleware(elasticapm)
-
- Exceptions detected server-side in ZeroRPC will be submitted to the apm server (and
- propagated to the client as well).
- """
-
- def __init__(self, hide_zerorpc_frames=True, client=None, **kwargs):
- """Create a middleware object that can be injected in a ZeroRPC server.
-
- - hide_zerorpc_frames: modify the exception stacktrace to remove the
- internal zerorpc frames (True by default to make
- the stacktrace as readable as possible);
- - client: use an existing raven.Client object, otherwise one will be
- instantiated from the keyword arguments.
-
- """
- self._elasticapm_client = client or Client(**kwargs)
- self._hide_zerorpc_frames = hide_zerorpc_frames
-
- def server_inspect_exception(self, req_event, rep_event, task_ctx, exc_info):
- """Called when an exception has been raised in the code run by ZeroRPC"""
-
- # Hide the zerorpc internal frames for readability, for a REQ/REP or
- # REQ/STREAM server the frames to hide are:
- # - core.ServerBase._async_task
- # - core.Pattern*.process_call
- # - core.DecoratorBase.__call__
- #
- # For a PUSH/PULL or PUB/SUB server the frame to hide is:
- # - core.Puller._receiver
- if self._hide_zerorpc_frames:
- traceback = exc_info[2]
- while traceback:
- zerorpc_frame = traceback.tb_frame
- zerorpc_frame.f_locals["__traceback_hide__"] = True
- frame_info = inspect.getframeinfo(zerorpc_frame)
- # Is there a better way than this (or looking up the filenames
- # or hardcoding the number of frames to skip) to know when we
- # are out of zerorpc?
- if frame_info.function == "__call__" or frame_info.function == "_receiver":
- break
- traceback = traceback.tb_next
-
- self._elasticapm_client.capture_exception(exc_info, extra=task_ctx, handled=False)
| {"golden_diff": "diff --git a/elasticapm/contrib/zerorpc/__init__.py b/elasticapm/contrib/zerorpc/__init__.py\ndeleted file mode 100644\n--- a/elasticapm/contrib/zerorpc/__init__.py\n+++ /dev/null\n@@ -1,84 +0,0 @@\n-# BSD 3-Clause License\n-#\n-# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n-# Copyright (c) 2019, Elasticsearch BV\n-# All rights reserved.\n-#\n-# Redistribution and use in source and binary forms, with or without\n-# modification, are permitted provided that the following conditions are met:\n-#\n-# * Redistributions of source code must retain the above copyright notice, this\n-# list of conditions and the following disclaimer.\n-#\n-# * Redistributions in binary form must reproduce the above copyright notice,\n-# this list of conditions and the following disclaimer in the documentation\n-# and/or other materials provided with the distribution.\n-#\n-# * Neither the name of the copyright holder nor the names of its\n-# contributors may be used to endorse or promote products derived from\n-# this software without specific prior written permission.\n-#\n-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n-# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n-# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n-# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n-# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n-# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n-\n-\n-import inspect\n-\n-from elasticapm.base import Client\n-\n-\n-class Middleware(object):\n- \"\"\"ElasticAPM middleware for ZeroRPC.\n-\n- >>> elasticapm = Middleware(service_name='..', secret_token='...')\n- >>> zerorpc.Context.get_instance().register_middleware(elasticapm)\n-\n- Exceptions detected server-side in ZeroRPC will be submitted to the apm server (and\n- propagated to the client as well).\n- \"\"\"\n-\n- def __init__(self, hide_zerorpc_frames=True, client=None, **kwargs):\n- \"\"\"Create a middleware object that can be injected in a ZeroRPC server.\n-\n- - hide_zerorpc_frames: modify the exception stacktrace to remove the\n- internal zerorpc frames (True by default to make\n- the stacktrace as readable as possible);\n- - client: use an existing raven.Client object, otherwise one will be\n- instantiated from the keyword arguments.\n-\n- \"\"\"\n- self._elasticapm_client = client or Client(**kwargs)\n- self._hide_zerorpc_frames = hide_zerorpc_frames\n-\n- def server_inspect_exception(self, req_event, rep_event, task_ctx, exc_info):\n- \"\"\"Called when an exception has been raised in the code run by ZeroRPC\"\"\"\n-\n- # Hide the zerorpc internal frames for readability, for a REQ/REP or\n- # REQ/STREAM server the frames to hide are:\n- # - core.ServerBase._async_task\n- # - core.Pattern*.process_call\n- # - core.DecoratorBase.__call__\n- #\n- # For a PUSH/PULL or PUB/SUB server the frame to hide is:\n- # - core.Puller._receiver\n- if self._hide_zerorpc_frames:\n- traceback = exc_info[2]\n- while traceback:\n- zerorpc_frame = traceback.tb_frame\n- zerorpc_frame.f_locals[\"__traceback_hide__\"] = True\n- frame_info = inspect.getframeinfo(zerorpc_frame)\n- # Is there a better 
way than this (or looking up the filenames\n- # or hardcoding the number of frames to skip) to know when we\n- # are out of zerorpc?\n- if frame_info.function == \"__call__\" or frame_info.function == \"_receiver\":\n- break\n- traceback = traceback.tb_next\n-\n- self._elasticapm_client.capture_exception(exc_info, extra=task_ctx, handled=False)\n", "issue": "Remove py2 leftovers\nThis is an issue to track all of the py2 stuff we can remove once a sufficient waiting period after 6.0 has passed.\r\n\r\n- [x] Remove `zerorpc` (py2 only)\r\n- [x] Remove `compat.py`\r\n- [x] Run `pyupgrade`?\r\n\r\nRef #928 \n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n\n\nimport inspect\n\nfrom elasticapm.base import Client\n\n\nclass Middleware(object):\n \"\"\"ElasticAPM middleware for ZeroRPC.\n\n >>> elasticapm = Middleware(service_name='..', secret_token='...')\n >>> zerorpc.Context.get_instance().register_middleware(elasticapm)\n\n Exceptions detected server-side in ZeroRPC will be submitted to the apm server (and\n propagated to the client as well).\n \"\"\"\n\n def __init__(self, hide_zerorpc_frames=True, client=None, **kwargs):\n \"\"\"Create a middleware object that can be injected in a ZeroRPC server.\n\n - hide_zerorpc_frames: modify the exception stacktrace to remove the\n internal zerorpc frames (True by default to make\n the stacktrace as readable as possible);\n - client: use an existing raven.Client object, otherwise one will be\n instantiated from the keyword arguments.\n\n \"\"\"\n self._elasticapm_client = client or Client(**kwargs)\n self._hide_zerorpc_frames = hide_zerorpc_frames\n\n def server_inspect_exception(self, req_event, rep_event, task_ctx, exc_info):\n \"\"\"Called when an exception has been raised in the code run by ZeroRPC\"\"\"\n\n # Hide the zerorpc internal frames for readability, for a REQ/REP or\n # REQ/STREAM server the frames to hide are:\n # - core.ServerBase._async_task\n # - core.Pattern*.process_call\n # - 
core.DecoratorBase.__call__\n #\n # For a PUSH/PULL or PUB/SUB server the frame to hide is:\n # - core.Puller._receiver\n if self._hide_zerorpc_frames:\n traceback = exc_info[2]\n while traceback:\n zerorpc_frame = traceback.tb_frame\n zerorpc_frame.f_locals[\"__traceback_hide__\"] = True\n frame_info = inspect.getframeinfo(zerorpc_frame)\n # Is there a better way than this (or looking up the filenames\n # or hardcoding the number of frames to skip) to know when we\n # are out of zerorpc?\n if frame_info.function == \"__call__\" or frame_info.function == \"_receiver\":\n break\n traceback = traceback.tb_next\n\n self._elasticapm_client.capture_exception(exc_info, extra=task_ctx, handled=False)\n", "path": "elasticapm/contrib/zerorpc/__init__.py"}], "after_files": [{"content": null, "path": "elasticapm/contrib/zerorpc/__init__.py"}]} | 1,362 | 1,006 |
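The one non-boilerplate technique in the deleted module is the `__traceback_hide__` walk, which raven/sentry-style stack processors honour when rendering stacktraces. A self-contained sketch of that walk, with a hypothetical `boundary` function standing in for zerorpc's `__call__`/`_receiver` frames:

```python
import inspect
import sys

def hide_internal_frames(exc_info, stop_at="boundary"):
    tb = exc_info[2]
    while tb:
        frame = tb.tb_frame
        # Extra keys written into f_locals persist in CPython and are read
        # back later by raven-style reporters to skip these frames.
        frame.f_locals["__traceback_hide__"] = True
        if inspect.getframeinfo(frame).function == stop_at:
            break
        tb = tb.tb_next

def boundary():
    raise ValueError("boom")

try:
    boundary()
except ValueError:
    hide_internal_frames(sys.exc_info())
    print("frames flagged up to boundary()")
```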
gh_patches_debug_9845 | rasdani/github-patches | git_diff | python-poetry__poetry-1458 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Poetry can't resolve with git dependency
I have `werkzeug` installed from a git version, and then adding `flask-jwt-extended` shows an error even though `werkzeug` is 1.0.0.dev0
```bash
$ poetry --version
Poetry 0.12.11
```
```bash
$ cat pyproject.toml
[tool.poetry]
name = "test-poetry-git-deps"
version = "0.1.0"
description = ""
authors = []
[tool.poetry.dependencies]
python = "^3.6"
werkzeug = {git = "https://github.com/pallets/werkzeug.git"}
[tool.poetry.dev-dependencies]
[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"
```
```bash
$ cat poetry.lock
[[package]]
category = "main"
description = "The comprehensive WSGI web application library."
name = "Werkzeug"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
version = "1.0.0.dev0"
[package.source]
reference = "86f7bdf6d7c005d3ab8be50226fe42edc1ce4f7c"
type = "git"
url = "https://github.com/pallets/werkzeug.git"
[metadata]
content-hash = "7d88d5e01ecd59f104fee20ece9f80aa3b98bc2a281afc0975c36010d6af2648"
python-versions = "^3.6"
[metadata.hashes]
Werkzeug = []
```
```bash
$ poetry add flask-jwt-extended
Using version ^3.18 for flask-jwt-extended
Updating dependencies
Resolving dependencies... (6.9s)
[SolverProblemError]
Because no versions of flask-jwt-extended match >3.18,<4.0
and flask-jwt-extended (3.18.0) depends on Werkzeug (>=0.14), flask-jwt-extended (>=3.18,<4.0) requires Werkzeug (>=0.14).
So, because no versions of werkzeug match >=0.14
and test-poetry-git-deps depends on flask-jwt-extended (^3.18), version solving failed.
add [-D|--dev] [--git GIT] [--path PATH] [-E|--extras EXTRAS] [--optional] [--python PYTHON] [--platform PLATFORM] [--allow-prereleases] [--dry-run] [--] <name> (<name>)...
```
```bash
$ poetry show
Werkzeug 1.0.0.dev0 86f7bdf The comprehensive WSGI web application library.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/repositories/repository.py`
Content:
```
1 from poetry.semver import parse_constraint
2 from poetry.semver import VersionConstraint
3 from poetry.semver import VersionRange
4
5 from .base_repository import BaseRepository
6
7
8 class Repository(BaseRepository):
9 def __init__(self, packages=None):
10 super(Repository, self).__init__()
11
12 self._name = None
13
14 if packages is None:
15 packages = []
16
17 for package in packages:
18 self.add_package(package)
19
20 @property
21 def name(self):
22 return self._name
23
24 def package(self, name, version, extras=None):
25 name = name.lower()
26
27 if extras is None:
28 extras = []
29
30 for package in self.packages:
31 if name == package.name and package.version.text == version:
32 # Activate extra dependencies
33 for extra in extras:
34 if extra in package.extras:
35 for extra_dep in package.extras[extra]:
36 for dep in package.requires:
37 if dep.name == extra_dep.name:
38 dep.activate()
39
40 return package.clone()
41
42 def find_packages(
43 self, name, constraint=None, extras=None, allow_prereleases=False
44 ):
45 name = name.lower()
46 packages = []
47 if extras is None:
48 extras = []
49
50 if constraint is None:
51 constraint = "*"
52
53 if not isinstance(constraint, VersionConstraint):
54 constraint = parse_constraint(constraint)
55
56 if isinstance(constraint, VersionRange):
57 if (
58 constraint.max is not None
59 and constraint.max.is_prerelease()
60 or constraint.min is not None
61 and constraint.min.is_prerelease()
62 ):
63 allow_prereleases = True
64
65 for package in self.packages:
66 if name == package.name:
67 if package.is_prerelease() and not allow_prereleases:
68 continue
69
70 if constraint.allows(package.version):
71 for dep in package.requires:
72 for extra in extras:
73 if extra not in package.extras:
74 continue
75
76 reqs = package.extras[extra]
77 for req in reqs:
78 if req.name == dep.name:
79 dep.activate()
80
81 if extras:
82 package.requires_extras = extras
83
84 packages.append(package)
85
86 return packages
87
88 def has_package(self, package):
89 package_id = package.unique_name
90
91 for repo_package in self.packages:
92 if package_id == repo_package.unique_name:
93 return True
94
95 return False
96
97 def add_package(self, package):
98 self._packages.append(package)
99
100 def remove_package(self, package):
101 package_id = package.unique_name
102
103 index = None
104 for i, repo_package in enumerate(self.packages):
105 if package_id == repo_package.unique_name:
106 index = i
107 break
108
109 if index is not None:
110 del self._packages[index]
111
112 def search(self, query, mode=0):
113 results = []
114
115 for package in self.packages:
116 if query in package.name:
117 results.append(package)
118
119 return results
120
121 def __len__(self):
122 return len(self._packages)
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/poetry/repositories/repository.py b/poetry/repositories/repository.py
--- a/poetry/repositories/repository.py
+++ b/poetry/repositories/repository.py
@@ -64,7 +64,13 @@
for package in self.packages:
if name == package.name:
- if package.is_prerelease() and not allow_prereleases:
+ if (
+ package.is_prerelease()
+ and not allow_prereleases
+ and not package.source_type
+ ):
+ # If prereleases are not allowed and the package is a prerelease
+ # and is a standard package then we skip it
continue
if constraint.allows(package.version):
| {"golden_diff": "diff --git a/poetry/repositories/repository.py b/poetry/repositories/repository.py\n--- a/poetry/repositories/repository.py\n+++ b/poetry/repositories/repository.py\n@@ -64,7 +64,13 @@\n \n for package in self.packages:\n if name == package.name:\n- if package.is_prerelease() and not allow_prereleases:\n+ if (\n+ package.is_prerelease()\n+ and not allow_prereleases\n+ and not package.source_type\n+ ):\n+ # If prereleases are not allowed and the package is a prerelease\n+ # and is a standard package then we skip it\n continue\n \n if constraint.allows(package.version):\n", "issue": "Poetry cant resolve with git dependency\nI have `werkzeug` installed with git version and then add `flask-jwt-extended` shows error though `werkzeug` is 1.0.0.dev0\r\n\r\n```bash\r\n$ poetry --version\r\nPoetry 0.12.11\r\n```\r\n\r\n```bash\r\n$ cat pyproject.toml\r\n[tool.poetry]\r\nname = \"test-poetry-git-deps\"\r\nversion = \"0.1.0\"\r\ndescription = \"\"\r\nauthors = []\r\n\r\n[tool.poetry.dependencies]\r\npython = \"^3.6\"\r\nwerkzeug = {git = \"https://github.com/pallets/werkzeug.git\"}\r\n\r\n[tool.poetry.dev-dependencies]\r\n\r\n[build-system]\r\nrequires = [\"poetry>=0.12\"]\r\nbuild-backend = \"poetry.masonry.api\"\r\n```\r\n\r\n```bash\r\n$ cat poetry.lock\r\n[[package]]\r\ncategory = \"main\"\r\ndescription = \"The comprehensive WSGI web application library.\"\r\nname = \"Werkzeug\"\r\noptional = false\r\npython-versions = \">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*\"\r\nversion = \"1.0.0.dev0\"\r\n\r\n[package.source]\r\nreference = \"86f7bdf6d7c005d3ab8be50226fe42edc1ce4f7c\"\r\ntype = \"git\"\r\nurl = \"https://github.com/pallets/werkzeug.git\"\r\n[metadata]\r\ncontent-hash = \"7d88d5e01ecd59f104fee20ece9f80aa3b98bc2a281afc0975c36010d6af2648\"\r\npython-versions = \"^3.6\"\r\n\r\n[metadata.hashes]\r\nWerkzeug = []\r\n```\r\n\r\n```bash\r\n$ poetry add flask-jwt-extended\r\nUsing version ^3.18 for flask-jwt-extended\r\n\r\nUpdating dependencies\r\nResolving dependencies... (6.9s)\r\n \r\n[SolverProblemError] \r\nBecause no versions of flask-jwt-extended match >3.18,<4.0 \r\n and flask-jwt-extended (3.18.0) depends on Werkzeug (>=0.14), flask-jwt-extended (>=3.18,<4.0) requires Werkzeug (>=0.14). \r\nSo, because no versions of werkzeug match >=0.14 \r\n and test-poetry-git-deps depends on flask-jwt-extended (^3.18), version solving failed. 
\r\n \r\nadd [-D|--dev] [--git GIT] [--path PATH] [-E|--extras EXTRAS] [--optional] [--python PYTHON] [--platform PLATFORM] [--allow-prereleases] [--dry-run] [--] <name> (<name>)...\r\n```\r\n\r\n```bash\r\n$ poetry show \r\nWerkzeug 1.0.0.dev0 86f7bdf The comprehensive WSGI web application library.\r\n```\n", "before_files": [{"content": "from poetry.semver import parse_constraint\nfrom poetry.semver import VersionConstraint\nfrom poetry.semver import VersionRange\n\nfrom .base_repository import BaseRepository\n\n\nclass Repository(BaseRepository):\n def __init__(self, packages=None):\n super(Repository, self).__init__()\n\n self._name = None\n\n if packages is None:\n packages = []\n\n for package in packages:\n self.add_package(package)\n\n @property\n def name(self):\n return self._name\n\n def package(self, name, version, extras=None):\n name = name.lower()\n\n if extras is None:\n extras = []\n\n for package in self.packages:\n if name == package.name and package.version.text == version:\n # Activate extra dependencies\n for extra in extras:\n if extra in package.extras:\n for extra_dep in package.extras[extra]:\n for dep in package.requires:\n if dep.name == extra_dep.name:\n dep.activate()\n\n return package.clone()\n\n def find_packages(\n self, name, constraint=None, extras=None, allow_prereleases=False\n ):\n name = name.lower()\n packages = []\n if extras is None:\n extras = []\n\n if constraint is None:\n constraint = \"*\"\n\n if not isinstance(constraint, VersionConstraint):\n constraint = parse_constraint(constraint)\n\n if isinstance(constraint, VersionRange):\n if (\n constraint.max is not None\n and constraint.max.is_prerelease()\n or constraint.min is not None\n and constraint.min.is_prerelease()\n ):\n allow_prereleases = True\n\n for package in self.packages:\n if name == package.name:\n if package.is_prerelease() and not allow_prereleases:\n continue\n\n if constraint.allows(package.version):\n for dep in package.requires:\n for extra in extras:\n if extra not in package.extras:\n continue\n\n reqs = package.extras[extra]\n for req in reqs:\n if req.name == dep.name:\n dep.activate()\n\n if extras:\n package.requires_extras = extras\n\n packages.append(package)\n\n return packages\n\n def has_package(self, package):\n package_id = package.unique_name\n\n for repo_package in self.packages:\n if package_id == repo_package.unique_name:\n return True\n\n return False\n\n def add_package(self, package):\n self._packages.append(package)\n\n def remove_package(self, package):\n package_id = package.unique_name\n\n index = None\n for i, repo_package in enumerate(self.packages):\n if package_id == repo_package.unique_name:\n index = i\n break\n\n if index is not None:\n del self._packages[index]\n\n def search(self, query, mode=0):\n results = []\n\n for package in self.packages:\n if query in package.name:\n results.append(package)\n\n return results\n\n def __len__(self):\n return len(self._packages)\n", "path": "poetry/repositories/repository.py"}], "after_files": [{"content": "from poetry.semver import parse_constraint\nfrom poetry.semver import VersionConstraint\nfrom poetry.semver import VersionRange\n\nfrom .base_repository import BaseRepository\n\n\nclass Repository(BaseRepository):\n def __init__(self, packages=None):\n super(Repository, self).__init__()\n\n self._name = None\n\n if packages is None:\n packages = []\n\n for package in packages:\n self.add_package(package)\n\n @property\n def name(self):\n return self._name\n\n def package(self, name, version, 
extras=None):\n name = name.lower()\n\n if extras is None:\n extras = []\n\n for package in self.packages:\n if name == package.name and package.version.text == version:\n # Activate extra dependencies\n for extra in extras:\n if extra in package.extras:\n for extra_dep in package.extras[extra]:\n for dep in package.requires:\n if dep.name == extra_dep.name:\n dep.activate()\n\n return package.clone()\n\n def find_packages(\n self, name, constraint=None, extras=None, allow_prereleases=False\n ):\n name = name.lower()\n packages = []\n if extras is None:\n extras = []\n\n if constraint is None:\n constraint = \"*\"\n\n if not isinstance(constraint, VersionConstraint):\n constraint = parse_constraint(constraint)\n\n if isinstance(constraint, VersionRange):\n if (\n constraint.max is not None\n and constraint.max.is_prerelease()\n or constraint.min is not None\n and constraint.min.is_prerelease()\n ):\n allow_prereleases = True\n\n for package in self.packages:\n if name == package.name:\n if (\n package.is_prerelease()\n and not allow_prereleases\n and not package.source_type\n ):\n # If prereleases are not allowed and the package is a prerelease\n # and is a standard package then we skip it\n continue\n\n if constraint.allows(package.version):\n for dep in package.requires:\n for extra in extras:\n if extra not in package.extras:\n continue\n\n reqs = package.extras[extra]\n for req in reqs:\n if req.name == dep.name:\n dep.activate()\n\n if extras:\n package.requires_extras = extras\n\n packages.append(package)\n\n return packages\n\n def has_package(self, package):\n package_id = package.unique_name\n\n for repo_package in self.packages:\n if package_id == repo_package.unique_name:\n return True\n\n return False\n\n def add_package(self, package):\n self._packages.append(package)\n\n def remove_package(self, package):\n package_id = package.unique_name\n\n index = None\n for i, repo_package in enumerate(self.packages):\n if package_id == repo_package.unique_name:\n index = i\n break\n\n if index is not None:\n del self._packages[index]\n\n def search(self, query, mode=0):\n results = []\n\n for package in self.packages:\n if query in package.name:\n results.append(package)\n\n return results\n\n def __len__(self):\n return len(self._packages)\n", "path": "poetry/repositories/repository.py"}]} | 1,839 | 158 |
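The Poetry patch above narrows the prerelease filter: a prerelease is now skipped only when it is a plain registry package, so anything carrying a `source_type` (for example the git-pinned `Werkzeug 1.0.0.dev0` in the report) stays resolvable. A minimal paraphrase of that guard, assuming a `package` object with the same `is_prerelease()` method and `source_type` attribute as Poetry's package model:

```python
def skip_as_prerelease(package, allow_prereleases: bool) -> bool:
    # Skip a prerelease only when prereleases are disallowed AND the
    # package comes from the default registry (no git/url/file source).
    return (
        package.is_prerelease()
        and not allow_prereleases
        and not package.source_type  # e.g. "git" for VCS deps, falsy otherwise
    )
```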
gh_patches_debug_4266 | rasdani/github-patches | git_diff | getredash__redash-732 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Migration Error(0017_add_organization.py)
Migration Error with current master branch.
```
vagrant@vagrant-ubuntu-trusty-64:/opt/redash/current$ export PYTHONPATH=. && bin/run python migrations/0017_add_organization.py
```
### output
```
[2016-01-05 09:38:42,746][PID:1822][WARNING][redash.query_runner] Oracle query runner enabled but not supported, not registering. Either disable or install missing dependencies.
Traceback (most recent call last):
File "migrations/0017_add_organization.py", line 12, in <module>
Organization.SETTING_GOOGLE_APPS_DOMAINS: settings.GOOGLE_APPS_DOMAIN
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 4001, in create
inst.save(force_insert=True)
File "/opt/redash/current/redash/models.py", line 93, in save
super(BaseModel, self).save(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 4148, in save
pk_from_cursor = self.insert(**field_dict).execute()
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2858, in execute
cursor = self._execute()
File "/opt/redash/current/redash/metrics/database.py", line 50, in metered_execute
result = real_execute(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2370, in _execute
sql, params = self.sql()
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 2832, in sql
return self.compiler().generate_insert(self)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1733, in generate_insert
return self.build_query(clauses, alias_map)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1542, in build_query
return self.parse_node(Clause(*clauses), alias_map)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1503, in parse_node
sql, params, unknown = self._parse(node, alias_map, conv)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1478, in _parse
sql, params = self._parse_map[node_type](node, alias_map, conv)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1406, in _parse_clause
node.nodes, alias_map, conv, node.glue)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1520, in parse_node_list
node_sql, node_params = self.parse_node(node, alias_map, conv)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1503, in parse_node
sql, params, unknown = self._parse(node, alias_map, conv)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1478, in _parse
sql, params = self._parse_map[node_type](node, alias_map, conv)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1406, in _parse_clause
node.nodes, alias_map, conv, node.glue)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1520, in parse_node_list
node_sql, node_params = self.parse_node(node, alias_map, conv)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1503, in parse_node
sql, params, unknown = self._parse(node, alias_map, conv)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1478, in _parse
sql, params = self._parse_map[node_type](node, alias_map, conv)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1406, in _parse_clause
node.nodes, alias_map, conv, node.glue)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1520, in parse_node_list
node_sql, node_params = self.parse_node(node, alias_map, conv)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1503, in parse_node
sql, params, unknown = self._parse(node, alias_map, conv)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1478, in _parse
sql, params = self._parse_map[node_type](node, alias_map, conv)
File "/usr/local/lib/python2.7/dist-packages/peewee.py", line 1394, in _parse_param
params = [node.conv(node.value)]
File "/opt/redash/current/redash/models.py", line 68, in db_value
return json.dumps(value)
File "/usr/lib/python2.7/json/__init__.py", line 243, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: set([]) is not JSON serializable
```
--- END ISSUE ---
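The last traceback frame pins the failure down: the migration hands `settings.GOOGLE_APPS_DOMAIN`, a `set`, to the JSON-backed settings column, and Python's `json` module cannot encode sets. A minimal sketch of the failure and of the shape of the fix (wrapping the value in `list(...)`, as the patch further down does):

```python
import json

domains = set()  # settings.GOOGLE_APPS_DOMAIN defaults to a set
try:
    json.dumps(domains)
except TypeError as exc:
    print(exc)  # on Python 2: set([]) is not JSON serializable

print(json.dumps(list(domains)))  # '[]' -- a list serializes fine
```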
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `migrations/0017_add_organization.py`
Content:
```
1 from redash.models import db, Organization, Group
2 from redash import settings
3 from playhouse.migrate import PostgresqlMigrator, migrate
4
5 if __name__ == '__main__':
6 migrator = PostgresqlMigrator(db.database)
7
8 with db.database.transaction():
9 Organization.create_table()
10
11 default_org = Organization.create(name="Default", slug='default', settings={
12 Organization.SETTING_GOOGLE_APPS_DOMAINS: settings.GOOGLE_APPS_DOMAIN
13 })
14
15 column = Group.org
16 column.default = default_org
17
18 migrate(
19 migrator.add_column('groups', 'org_id', column),
20 migrator.add_column('events', 'org_id', column),
21 migrator.add_column('data_sources', 'org_id', column),
22 migrator.add_column('users', 'org_id', column),
23 migrator.add_column('dashboards', 'org_id', column),
24 migrator.add_column('queries', 'org_id', column),
25 migrator.add_column('query_results', 'org_id', column),
26 )
27
28 # Change the uniqueness constraint on user email to be (org, email):
29 migrate(
30 migrator.drop_index('users', 'users_email'),
31 migrator.add_index('users', ('org_id', 'email'), unique=True)
32 )
33
34 db.close_db(None)
35
36
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/migrations/0017_add_organization.py b/migrations/0017_add_organization.py
--- a/migrations/0017_add_organization.py
+++ b/migrations/0017_add_organization.py
@@ -9,7 +9,7 @@
Organization.create_table()
default_org = Organization.create(name="Default", slug='default', settings={
- Organization.SETTING_GOOGLE_APPS_DOMAINS: settings.GOOGLE_APPS_DOMAIN
+ Organization.SETTING_GOOGLE_APPS_DOMAINS: list(settings.GOOGLE_APPS_DOMAIN)
})
column = Group.org
| {"golden_diff": "diff --git a/migrations/0017_add_organization.py b/migrations/0017_add_organization.py\n--- a/migrations/0017_add_organization.py\n+++ b/migrations/0017_add_organization.py\n@@ -9,7 +9,7 @@\n Organization.create_table()\n \n default_org = Organization.create(name=\"Default\", slug='default', settings={\n- Organization.SETTING_GOOGLE_APPS_DOMAINS: settings.GOOGLE_APPS_DOMAIN\n+ Organization.SETTING_GOOGLE_APPS_DOMAINS: list(settings.GOOGLE_APPS_DOMAIN)\n })\n \n column = Group.org\n", "issue": "Migration Error(0017_add_organization.py)\nMigration Error with current master branch.\n\n```\n$ vagrant@vagrant-ubuntu-trusty-64:/opt/redash/current$ export PYTHONPATH=. && bin/run python migrations/0017_add_organization.py\n```\n### output\n\n```\n[2016-01-05 09:38:42,746][PID:1822][WARNING][redash.query_runner] Oracle query runner enabled but not supported, not registering. Either disable or install missing dependencies.\nTraceback (most recent call last):\n File \"migrations/0017_add_organization.py\", line 12, in <module>\n Organization.SETTING_GOOGLE_APPS_DOMAINS: settings.GOOGLE_APPS_DOMAIN\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 4001, in create\n inst.save(force_insert=True)\n File \"/opt/redash/current/redash/models.py\", line 93, in save\n super(BaseModel, self).save(*args, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 4148, in save\n pk_from_cursor = self.insert(**field_dict).execute()\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 2858, in execute\n cursor = self._execute()\n File \"/opt/redash/current/redash/metrics/database.py\", line 50, in metered_execute\n result = real_execute(self, *args, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 2370, in _execute\n sql, params = self.sql()\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 2832, in sql\n return self.compiler().generate_insert(self)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1733, in generate_insert\n return self.build_query(clauses, alias_map)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1542, in build_query\n return self.parse_node(Clause(*clauses), alias_map)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1503, in parse_node\n sql, params, unknown = self._parse(node, alias_map, conv)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1478, in _parse\n sql, params = self._parse_map[node_type](node, alias_map, conv)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1406, in _parse_clause\n node.nodes, alias_map, conv, node.glue)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1520, in parse_node_list\n node_sql, node_params = self.parse_node(node, alias_map, conv)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1503, in parse_node\n sql, params, unknown = self._parse(node, alias_map, conv)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1478, in _parse\n sql, params = self._parse_map[node_type](node, alias_map, conv)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1406, in _parse_clause\n node.nodes, alias_map, conv, node.glue)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1520, in parse_node_list\n node_sql, node_params = self.parse_node(node, alias_map, conv)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1503, in parse_node\n sql, params, unknown = self._parse(node, alias_map, conv)\n 
File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1478, in _parse\n sql, params = self._parse_map[node_type](node, alias_map, conv)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1406, in _parse_clause\n node.nodes, alias_map, conv, node.glue)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1520, in parse_node_list\n node_sql, node_params = self.parse_node(node, alias_map, conv)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1503, in parse_node\n sql, params, unknown = self._parse(node, alias_map, conv)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1478, in _parse\n sql, params = self._parse_map[node_type](node, alias_map, conv)\n File \"/usr/local/lib/python2.7/dist-packages/peewee.py\", line 1394, in _parse_param\n params = [node.conv(node.value)]\n File \"/opt/redash/current/redash/models.py\", line 68, in db_value\n return json.dumps(value)\n File \"/usr/lib/python2.7/json/__init__.py\", line 243, in dumps\n return _default_encoder.encode(obj)\n File \"/usr/lib/python2.7/json/encoder.py\", line 207, in encode\n chunks = self.iterencode(o, _one_shot=True)\n File \"/usr/lib/python2.7/json/encoder.py\", line 270, in iterencode\n return _iterencode(o, 0)\n File \"/usr/lib/python2.7/json/encoder.py\", line 184, in default\n raise TypeError(repr(o) + \" is not JSON serializable\")\nTypeError: set([]) is not JSON serializable\n```\n\n", "before_files": [{"content": "from redash.models import db, Organization, Group\nfrom redash import settings\nfrom playhouse.migrate import PostgresqlMigrator, migrate\n\nif __name__ == '__main__':\n migrator = PostgresqlMigrator(db.database)\n\n with db.database.transaction():\n Organization.create_table()\n\n default_org = Organization.create(name=\"Default\", slug='default', settings={\n Organization.SETTING_GOOGLE_APPS_DOMAINS: settings.GOOGLE_APPS_DOMAIN\n })\n\n column = Group.org\n column.default = default_org\n\n migrate(\n migrator.add_column('groups', 'org_id', column),\n migrator.add_column('events', 'org_id', column),\n migrator.add_column('data_sources', 'org_id', column),\n migrator.add_column('users', 'org_id', column),\n migrator.add_column('dashboards', 'org_id', column),\n migrator.add_column('queries', 'org_id', column),\n migrator.add_column('query_results', 'org_id', column),\n )\n\n # Change the uniqueness constraint on user email to be (org, email):\n migrate(\n migrator.drop_index('users', 'users_email'),\n migrator.add_index('users', ('org_id', 'email'), unique=True)\n )\n\n db.close_db(None)\n\n", "path": "migrations/0017_add_organization.py"}], "after_files": [{"content": "from redash.models import db, Organization, Group\nfrom redash import settings\nfrom playhouse.migrate import PostgresqlMigrator, migrate\n\nif __name__ == '__main__':\n migrator = PostgresqlMigrator(db.database)\n\n with db.database.transaction():\n Organization.create_table()\n\n default_org = Organization.create(name=\"Default\", slug='default', settings={\n Organization.SETTING_GOOGLE_APPS_DOMAINS: list(settings.GOOGLE_APPS_DOMAIN)\n })\n\n column = Group.org\n column.default = default_org\n\n migrate(\n migrator.add_column('groups', 'org_id', column),\n migrator.add_column('events', 'org_id', column),\n migrator.add_column('data_sources', 'org_id', column),\n migrator.add_column('users', 'org_id', column),\n migrator.add_column('dashboards', 'org_id', column),\n migrator.add_column('queries', 'org_id', column),\n migrator.add_column('query_results', 'org_id', column),\n 
)\n\n # Change the uniqueness constraint on user email to be (org, email):\n migrate(\n migrator.drop_index('users', 'users_email'),\n migrator.add_index('users', ('org_id', 'email'), unique=True)\n )\n\n db.close_db(None)\n\n", "path": "migrations/0017_add_organization.py"}]} | 1,979 | 128 |
gh_patches_debug_24645 | rasdani/github-patches | git_diff | getnikola__nikola-3455 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
reST doc directive: slugs with slashes in no longer work
At some point in the last few releases, slugs containing slashes stopped working: links pointing to them have their slashes removed before the link is resolved, leading to an error like `"swowndata" slug doesn't exist` for a slug called `sw/owndata`. I can't find this mentioned in the release notes or in other issues here, so I assume it happened by accident, is a bug, and should be fixed.
--- END ISSUE ---
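The mangled slug in the error message matches what Nikola's `slugify()` helper does: characters that are not alphanumerics, whitespace, `+` or `-` are stripped, so `sw/owndata` collapses to `swowndata` before the post lookup runs. A minimal reproduction sketch, assuming the Nikola checkout from this record is importable:

```python
from nikola.utils import slugify

print(slugify('sw/owndata'))  # 'swowndata': the '/' is stripped, so the
                              # lookup no longer matches the post's real slug
```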
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/compile/rest/doc.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2020 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """reST role for linking to other documents."""
28
29 from docutils import nodes
30 from docutils.parsers.rst import roles
31
32 from nikola.utils import split_explicit_title, LOGGER, slugify
33 from nikola.plugin_categories import RestExtension
34
35
36 class Plugin(RestExtension):
37 """Plugin for doc role."""
38
39 name = 'rest_doc'
40
41 def set_site(self, site):
42 """Set Nikola site."""
43 self.site = site
44 roles.register_canonical_role('doc', doc_role)
45 self.site.register_shortcode('doc', doc_shortcode)
46 doc_role.site = site
47 return super().set_site(site)
48
49
50 def _doc_link(rawtext, text, options={}, content=[]):
51 """Handle the doc role."""
52 # split link's text and post's slug in role content
53 has_explicit_title, title, slug = split_explicit_title(text)
54 if '#' in slug:
55 slug, fragment = slug.split('#', 1)
56 else:
57 fragment = None
58 slug = slugify(slug)
59 # check if the slug given is part of our blog posts/pages
60 twin_slugs = False
61 post = None
62 for p in doc_role.site.timeline:
63 if p.meta('slug') == slug:
64 if post is None:
65 post = p
66 else:
67 twin_slugs = True
68 break
69
70 try:
71 if post is None:
72 raise ValueError("No post with matching slug found.")
73 except ValueError:
74 return False, False, None, None, slug
75
76 if not has_explicit_title:
77 # use post's title as link's text
78 title = post.title()
79 permalink = post.permalink()
80 if fragment:
81 permalink += '#' + fragment
82
83 return True, twin_slugs, title, permalink, slug
84
85
86 def doc_role(name, rawtext, text, lineno, inliner, options={}, content=[]):
87 """Handle the doc role."""
88 success, twin_slugs, title, permalink, slug = _doc_link(rawtext, text, options, content)
89 if success:
90 if twin_slugs:
91 inliner.reporter.warning(
92 'More than one post with the same slug. Using "{0}"'.format(permalink))
93 LOGGER.warning(
94 'More than one post with the same slug. Using "{0}" for doc role'.format(permalink))
95 node = make_link_node(rawtext, title, permalink, options)
96 return [node], []
97 else:
98 msg = inliner.reporter.error(
99 '"{0}" slug doesn\'t exist.'.format(slug),
100 line=lineno)
101 prb = inliner.problematic(rawtext, rawtext, msg)
102 return [prb], [msg]
103
104
105 def doc_shortcode(*args, **kwargs):
106 """Implement the doc shortcode."""
107 text = kwargs['data']
108 success, twin_slugs, title, permalink, slug = _doc_link(text, text, LOGGER)
109 if success:
110 if twin_slugs:
111 LOGGER.warning(
112 'More than one post with the same slug. Using "{0}" for doc shortcode'.format(permalink))
113 return '<a href="{0}">{1}</a>'.format(permalink, title)
114 else:
115 LOGGER.error(
116 '"{0}" slug doesn\'t exist.'.format(slug))
117 return '<span class="error text-error" style="color: red;">Invalid link: {0}</span>'.format(text)
118
119
120 def make_link_node(rawtext, text, url, options):
121 """Make a reST link node."""
122 node = nodes.reference(rawtext, text, refuri=url, *options)
123 return node
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nikola/plugins/compile/rest/doc.py b/nikola/plugins/compile/rest/doc.py
--- a/nikola/plugins/compile/rest/doc.py
+++ b/nikola/plugins/compile/rest/doc.py
@@ -47,16 +47,8 @@
return super().set_site(site)
-def _doc_link(rawtext, text, options={}, content=[]):
- """Handle the doc role."""
- # split link's text and post's slug in role content
- has_explicit_title, title, slug = split_explicit_title(text)
- if '#' in slug:
- slug, fragment = slug.split('#', 1)
- else:
- fragment = None
- slug = slugify(slug)
- # check if the slug given is part of our blog posts/pages
+def _find_post(slug):
+ """Find a post with the given slug in posts or pages."""
twin_slugs = False
post = None
for p in doc_role.site.timeline:
@@ -66,6 +58,23 @@
else:
twin_slugs = True
break
+ return post, twin_slugs
+
+
+def _doc_link(rawtext, text, options={}, content=[]):
+ """Handle the doc role."""
+ # split link's text and post's slug in role content
+ has_explicit_title, title, slug = split_explicit_title(text)
+ if '#' in slug:
+ slug, fragment = slug.split('#', 1)
+ else:
+ fragment = None
+
+ # Look for the unslugified input first, then try to slugify (Issue #3450)
+ post, twin_slugs = _find_post(slug)
+ if post is None:
+ slug = slugify(slug)
+ post, twin_slugs = _find_post(slug)
try:
if post is None:
| {"golden_diff": "diff --git a/nikola/plugins/compile/rest/doc.py b/nikola/plugins/compile/rest/doc.py\n--- a/nikola/plugins/compile/rest/doc.py\n+++ b/nikola/plugins/compile/rest/doc.py\n@@ -47,16 +47,8 @@\n return super().set_site(site)\n \n \n-def _doc_link(rawtext, text, options={}, content=[]):\n- \"\"\"Handle the doc role.\"\"\"\n- # split link's text and post's slug in role content\n- has_explicit_title, title, slug = split_explicit_title(text)\n- if '#' in slug:\n- slug, fragment = slug.split('#', 1)\n- else:\n- fragment = None\n- slug = slugify(slug)\n- # check if the slug given is part of our blog posts/pages\n+def _find_post(slug):\n+ \"\"\"Find a post with the given slug in posts or pages.\"\"\"\n twin_slugs = False\n post = None\n for p in doc_role.site.timeline:\n@@ -66,6 +58,23 @@\n else:\n twin_slugs = True\n break\n+ return post, twin_slugs\n+\n+\n+def _doc_link(rawtext, text, options={}, content=[]):\n+ \"\"\"Handle the doc role.\"\"\"\n+ # split link's text and post's slug in role content\n+ has_explicit_title, title, slug = split_explicit_title(text)\n+ if '#' in slug:\n+ slug, fragment = slug.split('#', 1)\n+ else:\n+ fragment = None\n+\n+ # Look for the unslugified input first, then try to slugify (Issue #3450)\n+ post, twin_slugs = _find_post(slug)\n+ if post is None:\n+ slug = slugify(slug)\n+ post, twin_slugs = _find_post(slug)\n \n try:\n if post is None:\n", "issue": "reST doc directive: slugs with slashes in no longer work\nAt some point in the previous few versions, slugs with slashes in no longer work - links pointing to them have their slashes removed before the link is resolved, leading to an error like `\"swowndata\" slug doesn't exist` for a slug called `sw/owndata`. I can't find this being mentioned in the release notes or the other issues here, so I assume this was done by accident, is a bug, and should be fixed.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2020 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"reST role for linking to other documents.\"\"\"\n\nfrom docutils import nodes\nfrom docutils.parsers.rst import roles\n\nfrom nikola.utils import split_explicit_title, LOGGER, slugify\nfrom nikola.plugin_categories import RestExtension\n\n\nclass Plugin(RestExtension):\n \"\"\"Plugin for doc role.\"\"\"\n\n name = 'rest_doc'\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.site = site\n roles.register_canonical_role('doc', doc_role)\n self.site.register_shortcode('doc', doc_shortcode)\n doc_role.site = site\n return super().set_site(site)\n\n\ndef _doc_link(rawtext, text, options={}, content=[]):\n \"\"\"Handle the doc role.\"\"\"\n # split link's text and post's slug in role content\n has_explicit_title, title, slug = split_explicit_title(text)\n if '#' in slug:\n slug, fragment = slug.split('#', 1)\n else:\n fragment = None\n slug = slugify(slug)\n # check if the slug given is part of our blog posts/pages\n twin_slugs = False\n post = None\n for p in doc_role.site.timeline:\n if p.meta('slug') == slug:\n if post is None:\n post = p\n else:\n twin_slugs = True\n break\n\n try:\n if post is None:\n raise ValueError(\"No post with matching slug found.\")\n except ValueError:\n return False, False, None, None, slug\n\n if not has_explicit_title:\n # use post's title as link's text\n title = post.title()\n permalink = post.permalink()\n if fragment:\n permalink += '#' + fragment\n\n return True, twin_slugs, title, permalink, slug\n\n\ndef doc_role(name, rawtext, text, lineno, inliner, options={}, content=[]):\n \"\"\"Handle the doc role.\"\"\"\n success, twin_slugs, title, permalink, slug = _doc_link(rawtext, text, options, content)\n if success:\n if twin_slugs:\n inliner.reporter.warning(\n 'More than one post with the same slug. Using \"{0}\"'.format(permalink))\n LOGGER.warning(\n 'More than one post with the same slug. Using \"{0}\" for doc role'.format(permalink))\n node = make_link_node(rawtext, title, permalink, options)\n return [node], []\n else:\n msg = inliner.reporter.error(\n '\"{0}\" slug doesn\\'t exist.'.format(slug),\n line=lineno)\n prb = inliner.problematic(rawtext, rawtext, msg)\n return [prb], [msg]\n\n\ndef doc_shortcode(*args, **kwargs):\n \"\"\"Implement the doc shortcode.\"\"\"\n text = kwargs['data']\n success, twin_slugs, title, permalink, slug = _doc_link(text, text, LOGGER)\n if success:\n if twin_slugs:\n LOGGER.warning(\n 'More than one post with the same slug. 
Using \"{0}\" for doc shortcode'.format(permalink))\n return '<a href=\"{0}\">{1}</a>'.format(permalink, title)\n else:\n LOGGER.error(\n '\"{0}\" slug doesn\\'t exist.'.format(slug))\n return '<span class=\"error text-error\" style=\"color: red;\">Invalid link: {0}</span>'.format(text)\n\n\ndef make_link_node(rawtext, text, url, options):\n \"\"\"Make a reST link node.\"\"\"\n node = nodes.reference(rawtext, text, refuri=url, *options)\n return node\n", "path": "nikola/plugins/compile/rest/doc.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2020 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"reST role for linking to other documents.\"\"\"\n\nfrom docutils import nodes\nfrom docutils.parsers.rst import roles\n\nfrom nikola.utils import split_explicit_title, LOGGER, slugify\nfrom nikola.plugin_categories import RestExtension\n\n\nclass Plugin(RestExtension):\n \"\"\"Plugin for doc role.\"\"\"\n\n name = 'rest_doc'\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.site = site\n roles.register_canonical_role('doc', doc_role)\n self.site.register_shortcode('doc', doc_shortcode)\n doc_role.site = site\n return super().set_site(site)\n\n\ndef _find_post(slug):\n \"\"\"Find a post with the given slug in posts or pages.\"\"\"\n twin_slugs = False\n post = None\n for p in doc_role.site.timeline:\n if p.meta('slug') == slug:\n if post is None:\n post = p\n else:\n twin_slugs = True\n break\n return post, twin_slugs\n\n\ndef _doc_link(rawtext, text, options={}, content=[]):\n \"\"\"Handle the doc role.\"\"\"\n # split link's text and post's slug in role content\n has_explicit_title, title, slug = split_explicit_title(text)\n if '#' in slug:\n slug, fragment = slug.split('#', 1)\n else:\n fragment = None\n\n # Look for the unslugified input first, then try to slugify (Issue #3450)\n post, twin_slugs = _find_post(slug)\n if post is None:\n slug = slugify(slug)\n post, twin_slugs = _find_post(slug)\n\n try:\n if post is None:\n raise ValueError(\"No post with matching slug found.\")\n except ValueError:\n return False, False, None, None, slug\n\n if not has_explicit_title:\n # use post's title as link's text\n title = post.title()\n permalink = post.permalink()\n if fragment:\n permalink += '#' + fragment\n\n return True, twin_slugs, title, permalink, slug\n\n\ndef doc_role(name, rawtext, text, lineno, inliner, options={}, content=[]):\n \"\"\"Handle the doc role.\"\"\"\n success, 
twin_slugs, title, permalink, slug = _doc_link(rawtext, text, options, content)\n if success:\n if twin_slugs:\n inliner.reporter.warning(\n 'More than one post with the same slug. Using \"{0}\"'.format(permalink))\n LOGGER.warning(\n 'More than one post with the same slug. Using \"{0}\" for doc role'.format(permalink))\n node = make_link_node(rawtext, title, permalink, options)\n return [node], []\n else:\n msg = inliner.reporter.error(\n '\"{0}\" slug doesn\\'t exist.'.format(slug),\n line=lineno)\n prb = inliner.problematic(rawtext, rawtext, msg)\n return [prb], [msg]\n\n\ndef doc_shortcode(*args, **kwargs):\n \"\"\"Implement the doc shortcode.\"\"\"\n text = kwargs['data']\n success, twin_slugs, title, permalink, slug = _doc_link(text, text, LOGGER)\n if success:\n if twin_slugs:\n LOGGER.warning(\n 'More than one post with the same slug. Using \"{0}\" for doc shortcode'.format(permalink))\n return '<a href=\"{0}\">{1}</a>'.format(permalink, title)\n else:\n LOGGER.error(\n '\"{0}\" slug doesn\\'t exist.'.format(slug))\n return '<span class=\"error text-error\" style=\"color: red;\">Invalid link: {0}</span>'.format(text)\n\n\ndef make_link_node(rawtext, text, url, options):\n \"\"\"Make a reST link node.\"\"\"\n node = nodes.reference(rawtext, text, refuri=url, *options)\n return node\n", "path": "nikola/plugins/compile/rest/doc.py"}]} | 1,666 | 420 |
gh_patches_debug_4308 | rasdani/github-patches | git_diff | tiangolo__fastapi-320 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ETA on pydantic update to 0.28
We really need your latest PR;
it should solve the recurring issues with SQLAlchemy-to-Pydantic mapping.
Thank you!
--- END ISSUE ---
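For context, pydantic 0.28 changed `get_flat_models_from_fields()` to take a `known_models` set, used to break cycles while walking nested and recursive models; that is what the patch below accommodates. A sketch of the updated call under that assumption, with `fields` standing in for the `Field` objects FastAPI collects from its routes:

```python
from typing import List, Set, Type

from pydantic import BaseModel
from pydantic.fields import Field
from pydantic.schema import get_flat_models_from_fields


def flat_models_for(fields: List[Field]) -> Set[Type[BaseModel]]:
    # pydantic >= 0.28: thread a known_models set through so recursive
    # models are visited only once; an empty set starts a fresh walk.
    return get_flat_models_from_fields(fields, known_models=set())
```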
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fastapi/utils.py`
Content:
```
1 import re
2 from typing import Any, Dict, List, Sequence, Set, Type
3
4 from fastapi import routing
5 from fastapi.openapi.constants import REF_PREFIX
6 from pydantic import BaseModel
7 from pydantic.fields import Field
8 from pydantic.schema import get_flat_models_from_fields, model_process_schema
9 from starlette.routing import BaseRoute
10
11
12 def get_flat_models_from_routes(
13 routes: Sequence[Type[BaseRoute]]
14 ) -> Set[Type[BaseModel]]:
15 body_fields_from_routes: List[Field] = []
16 responses_from_routes: List[Field] = []
17 for route in routes:
18 if getattr(route, "include_in_schema", None) and isinstance(
19 route, routing.APIRoute
20 ):
21 if route.body_field:
22 assert isinstance(
23 route.body_field, Field
24 ), "A request body must be a Pydantic Field"
25 body_fields_from_routes.append(route.body_field)
26 if route.response_field:
27 responses_from_routes.append(route.response_field)
28 if route.response_fields:
29 responses_from_routes.extend(route.response_fields.values())
30 flat_models = get_flat_models_from_fields(
31 body_fields_from_routes + responses_from_routes
32 )
33 return flat_models
34
35
36 def get_model_definitions(
37 *, flat_models: Set[Type[BaseModel]], model_name_map: Dict[Type[BaseModel], str]
38 ) -> Dict[str, Any]:
39 definitions: Dict[str, Dict] = {}
40 for model in flat_models:
41 m_schema, m_definitions = model_process_schema(
42 model, model_name_map=model_name_map, ref_prefix=REF_PREFIX
43 )
44 definitions.update(m_definitions)
45 model_name = model_name_map[model]
46 definitions[model_name] = m_schema
47 return definitions
48
49
50 def get_path_param_names(path: str) -> Set[str]:
51 return {item.strip("{}") for item in re.findall("{[^}]*}", path)}
52
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/fastapi/utils.py b/fastapi/utils.py
--- a/fastapi/utils.py
+++ b/fastapi/utils.py
@@ -28,7 +28,7 @@
if route.response_fields:
responses_from_routes.extend(route.response_fields.values())
flat_models = get_flat_models_from_fields(
- body_fields_from_routes + responses_from_routes
+ body_fields_from_routes + responses_from_routes, known_models=set()
)
return flat_models
| {"golden_diff": "diff --git a/fastapi/utils.py b/fastapi/utils.py\n--- a/fastapi/utils.py\n+++ b/fastapi/utils.py\n@@ -28,7 +28,7 @@\n if route.response_fields:\n responses_from_routes.extend(route.response_fields.values())\n flat_models = get_flat_models_from_fields(\n- body_fields_from_routes + responses_from_routes\n+ body_fields_from_routes + responses_from_routes, known_models=set()\n )\n return flat_models\n", "issue": "eta on pydantic update to 0.28\nreally need your latest PR \r\nit shoud solve recurring issues with sqlalchemy to pydantic mapping\r\nthank you !\n", "before_files": [{"content": "import re\nfrom typing import Any, Dict, List, Sequence, Set, Type\n\nfrom fastapi import routing\nfrom fastapi.openapi.constants import REF_PREFIX\nfrom pydantic import BaseModel\nfrom pydantic.fields import Field\nfrom pydantic.schema import get_flat_models_from_fields, model_process_schema\nfrom starlette.routing import BaseRoute\n\n\ndef get_flat_models_from_routes(\n routes: Sequence[Type[BaseRoute]]\n) -> Set[Type[BaseModel]]:\n body_fields_from_routes: List[Field] = []\n responses_from_routes: List[Field] = []\n for route in routes:\n if getattr(route, \"include_in_schema\", None) and isinstance(\n route, routing.APIRoute\n ):\n if route.body_field:\n assert isinstance(\n route.body_field, Field\n ), \"A request body must be a Pydantic Field\"\n body_fields_from_routes.append(route.body_field)\n if route.response_field:\n responses_from_routes.append(route.response_field)\n if route.response_fields:\n responses_from_routes.extend(route.response_fields.values())\n flat_models = get_flat_models_from_fields(\n body_fields_from_routes + responses_from_routes\n )\n return flat_models\n\n\ndef get_model_definitions(\n *, flat_models: Set[Type[BaseModel]], model_name_map: Dict[Type[BaseModel], str]\n) -> Dict[str, Any]:\n definitions: Dict[str, Dict] = {}\n for model in flat_models:\n m_schema, m_definitions = model_process_schema(\n model, model_name_map=model_name_map, ref_prefix=REF_PREFIX\n )\n definitions.update(m_definitions)\n model_name = model_name_map[model]\n definitions[model_name] = m_schema\n return definitions\n\n\ndef get_path_param_names(path: str) -> Set[str]:\n return {item.strip(\"{}\") for item in re.findall(\"{[^}]*}\", path)}\n", "path": "fastapi/utils.py"}], "after_files": [{"content": "import re\nfrom typing import Any, Dict, List, Sequence, Set, Type\n\nfrom fastapi import routing\nfrom fastapi.openapi.constants import REF_PREFIX\nfrom pydantic import BaseModel\nfrom pydantic.fields import Field\nfrom pydantic.schema import get_flat_models_from_fields, model_process_schema\nfrom starlette.routing import BaseRoute\n\n\ndef get_flat_models_from_routes(\n routes: Sequence[Type[BaseRoute]]\n) -> Set[Type[BaseModel]]:\n body_fields_from_routes: List[Field] = []\n responses_from_routes: List[Field] = []\n for route in routes:\n if getattr(route, \"include_in_schema\", None) and isinstance(\n route, routing.APIRoute\n ):\n if route.body_field:\n assert isinstance(\n route.body_field, Field\n ), \"A request body must be a Pydantic Field\"\n body_fields_from_routes.append(route.body_field)\n if route.response_field:\n responses_from_routes.append(route.response_field)\n if route.response_fields:\n responses_from_routes.extend(route.response_fields.values())\n flat_models = get_flat_models_from_fields(\n body_fields_from_routes + responses_from_routes, known_models=set()\n )\n return flat_models\n\n\ndef get_model_definitions(\n *, flat_models: Set[Type[BaseModel]], 
model_name_map: Dict[Type[BaseModel], str]\n) -> Dict[str, Any]:\n definitions: Dict[str, Dict] = {}\n for model in flat_models:\n m_schema, m_definitions = model_process_schema(\n model, model_name_map=model_name_map, ref_prefix=REF_PREFIX\n )\n definitions.update(m_definitions)\n model_name = model_name_map[model]\n definitions[model_name] = m_schema\n return definitions\n\n\ndef get_path_param_names(path: str) -> Set[str]:\n return {item.strip(\"{}\") for item in re.findall(\"{[^}]*}\", path)}\n", "path": "fastapi/utils.py"}]} | 789 | 99 |
gh_patches_debug_2391 | rasdani/github-patches | git_diff | scrapy__scrapy-2929 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
LinkExtractor is not ignoring .m4v extension (video) by default
By chance I found out that LinkExtractor is not ignoring the video extension m4v in the same way it is ignoring other video formats.
https://en.wikipedia.org/wiki/M4V
--- END ISSUE ---
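The default filter comes from `scrapy.linkextractors.IGNORED_EXTENSIONS`; `m4a` already sits in its video group but `m4v` is missing, so `.m4v` links are followed. Until the default list is patched, the extension can be denied per extractor; a sketch:

```python
from scrapy.linkextractors import IGNORED_EXTENSIONS, LinkExtractor

print('m4a' in IGNORED_EXTENSIONS)  # True
print('m4v' in IGNORED_EXTENSIONS)  # False before this fix

# Workaround on an unpatched Scrapy: extend the deny list explicitly.
link_extractor = LinkExtractor(deny_extensions=IGNORED_EXTENSIONS + ['m4v'])
```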
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/linkextractors/__init__.py`
Content:
```
1 """
2 scrapy.linkextractors
3
4 This package contains a collection of Link Extractors.
5
6 For more info see docs/topics/link-extractors.rst
7 """
8 import re
9
10 from six.moves.urllib.parse import urlparse
11 from parsel.csstranslator import HTMLTranslator
12 from w3lib.url import canonicalize_url
13
14 from scrapy.utils.misc import arg_to_iter
15 from scrapy.utils.url import (
16 url_is_from_any_domain, url_has_any_extension,
17 )
18
19
20 # common file extensions that are not followed if they occur in links
21 IGNORED_EXTENSIONS = [
22 # images
23 'mng', 'pct', 'bmp', 'gif', 'jpg', 'jpeg', 'png', 'pst', 'psp', 'tif',
24 'tiff', 'ai', 'drw', 'dxf', 'eps', 'ps', 'svg',
25
26 # audio
27 'mp3', 'wma', 'ogg', 'wav', 'ra', 'aac', 'mid', 'au', 'aiff',
28
29 # video
30 '3gp', 'asf', 'asx', 'avi', 'mov', 'mp4', 'mpg', 'qt', 'rm', 'swf', 'wmv',
31 'm4a',
32
33 # office suites
34 'xls', 'xlsx', 'ppt', 'pptx', 'pps', 'doc', 'docx', 'odt', 'ods', 'odg',
35 'odp',
36
37 # other
38 'css', 'pdf', 'exe', 'bin', 'rss', 'zip', 'rar',
39 ]
40
41
42 _re_type = type(re.compile("", 0))
43 _matches = lambda url, regexs: any(r.search(url) for r in regexs)
44 _is_valid_url = lambda url: url.split('://', 1)[0] in {'http', 'https', 'file'}
45
46
47 class FilteringLinkExtractor(object):
48
49 _csstranslator = HTMLTranslator()
50
51 def __init__(self, link_extractor, allow, deny, allow_domains, deny_domains,
52 restrict_xpaths, canonicalize, deny_extensions, restrict_css):
53
54 self.link_extractor = link_extractor
55
56 self.allow_res = [x if isinstance(x, _re_type) else re.compile(x)
57 for x in arg_to_iter(allow)]
58 self.deny_res = [x if isinstance(x, _re_type) else re.compile(x)
59 for x in arg_to_iter(deny)]
60
61 self.allow_domains = set(arg_to_iter(allow_domains))
62 self.deny_domains = set(arg_to_iter(deny_domains))
63
64 self.restrict_xpaths = tuple(arg_to_iter(restrict_xpaths))
65 self.restrict_xpaths += tuple(map(self._csstranslator.css_to_xpath,
66 arg_to_iter(restrict_css)))
67
68 self.canonicalize = canonicalize
69 if deny_extensions is None:
70 deny_extensions = IGNORED_EXTENSIONS
71 self.deny_extensions = {'.' + e for e in arg_to_iter(deny_extensions)}
72
73 def _link_allowed(self, link):
74 if not _is_valid_url(link.url):
75 return False
76 if self.allow_res and not _matches(link.url, self.allow_res):
77 return False
78 if self.deny_res and _matches(link.url, self.deny_res):
79 return False
80 parsed_url = urlparse(link.url)
81 if self.allow_domains and not url_is_from_any_domain(parsed_url, self.allow_domains):
82 return False
83 if self.deny_domains and url_is_from_any_domain(parsed_url, self.deny_domains):
84 return False
85 if self.deny_extensions and url_has_any_extension(parsed_url, self.deny_extensions):
86 return False
87 return True
88
89 def matches(self, url):
90
91 if self.allow_domains and not url_is_from_any_domain(url, self.allow_domains):
92 return False
93 if self.deny_domains and url_is_from_any_domain(url, self.deny_domains):
94 return False
95
96 allowed = (regex.search(url) for regex in self.allow_res) if self.allow_res else [True]
97 denied = (regex.search(url) for regex in self.deny_res) if self.deny_res else []
98 return any(allowed) and not any(denied)
99
100 def _process_links(self, links):
101 links = [x for x in links if self._link_allowed(x)]
102 if self.canonicalize:
103 for link in links:
104 link.url = canonicalize_url(link.url)
105 links = self.link_extractor._process_links(links)
106 return links
107
108 def _extract_links(self, *args, **kwargs):
109 return self.link_extractor._extract_links(*args, **kwargs)
110
111
112 # Top-level imports
113 from .lxmlhtml import LxmlLinkExtractor as LinkExtractor
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/linkextractors/__init__.py b/scrapy/linkextractors/__init__.py
--- a/scrapy/linkextractors/__init__.py
+++ b/scrapy/linkextractors/__init__.py
@@ -28,7 +28,7 @@
# video
'3gp', 'asf', 'asx', 'avi', 'mov', 'mp4', 'mpg', 'qt', 'rm', 'swf', 'wmv',
- 'm4a',
+ 'm4a', 'm4v',
# office suites
'xls', 'xlsx', 'ppt', 'pptx', 'pps', 'doc', 'docx', 'odt', 'ods', 'odg',
| {"golden_diff": "diff --git a/scrapy/linkextractors/__init__.py b/scrapy/linkextractors/__init__.py\n--- a/scrapy/linkextractors/__init__.py\n+++ b/scrapy/linkextractors/__init__.py\n@@ -28,7 +28,7 @@\n \n # video\n '3gp', 'asf', 'asx', 'avi', 'mov', 'mp4', 'mpg', 'qt', 'rm', 'swf', 'wmv',\n- 'm4a',\n+ 'm4a', 'm4v',\n \n # office suites\n 'xls', 'xlsx', 'ppt', 'pptx', 'pps', 'doc', 'docx', 'odt', 'ods', 'odg',\n", "issue": "LinkExtractor is not ignoring .m4v extension (video) by default\nBy chance I found out that LinkExtractor is not ignoring the video extension m4v in the same way it is ignoring other video formats.\r\n\r\nhttps://en.wikipedia.org/wiki/M4V\r\n\n", "before_files": [{"content": "\"\"\"\nscrapy.linkextractors\n\nThis package contains a collection of Link Extractors.\n\nFor more info see docs/topics/link-extractors.rst\n\"\"\"\nimport re\n\nfrom six.moves.urllib.parse import urlparse\nfrom parsel.csstranslator import HTMLTranslator\nfrom w3lib.url import canonicalize_url\n\nfrom scrapy.utils.misc import arg_to_iter\nfrom scrapy.utils.url import (\n url_is_from_any_domain, url_has_any_extension,\n)\n\n\n# common file extensions that are not followed if they occur in links\nIGNORED_EXTENSIONS = [\n # images\n 'mng', 'pct', 'bmp', 'gif', 'jpg', 'jpeg', 'png', 'pst', 'psp', 'tif',\n 'tiff', 'ai', 'drw', 'dxf', 'eps', 'ps', 'svg',\n\n # audio\n 'mp3', 'wma', 'ogg', 'wav', 'ra', 'aac', 'mid', 'au', 'aiff',\n\n # video\n '3gp', 'asf', 'asx', 'avi', 'mov', 'mp4', 'mpg', 'qt', 'rm', 'swf', 'wmv',\n 'm4a',\n\n # office suites\n 'xls', 'xlsx', 'ppt', 'pptx', 'pps', 'doc', 'docx', 'odt', 'ods', 'odg',\n 'odp',\n\n # other\n 'css', 'pdf', 'exe', 'bin', 'rss', 'zip', 'rar',\n]\n\n\n_re_type = type(re.compile(\"\", 0))\n_matches = lambda url, regexs: any(r.search(url) for r in regexs)\n_is_valid_url = lambda url: url.split('://', 1)[0] in {'http', 'https', 'file'}\n\n\nclass FilteringLinkExtractor(object):\n\n _csstranslator = HTMLTranslator()\n\n def __init__(self, link_extractor, allow, deny, allow_domains, deny_domains,\n restrict_xpaths, canonicalize, deny_extensions, restrict_css):\n\n self.link_extractor = link_extractor\n\n self.allow_res = [x if isinstance(x, _re_type) else re.compile(x)\n for x in arg_to_iter(allow)]\n self.deny_res = [x if isinstance(x, _re_type) else re.compile(x)\n for x in arg_to_iter(deny)]\n\n self.allow_domains = set(arg_to_iter(allow_domains))\n self.deny_domains = set(arg_to_iter(deny_domains))\n\n self.restrict_xpaths = tuple(arg_to_iter(restrict_xpaths))\n self.restrict_xpaths += tuple(map(self._csstranslator.css_to_xpath,\n arg_to_iter(restrict_css)))\n\n self.canonicalize = canonicalize\n if deny_extensions is None:\n deny_extensions = IGNORED_EXTENSIONS\n self.deny_extensions = {'.' 
+ e for e in arg_to_iter(deny_extensions)}\n\n def _link_allowed(self, link):\n if not _is_valid_url(link.url):\n return False\n if self.allow_res and not _matches(link.url, self.allow_res):\n return False\n if self.deny_res and _matches(link.url, self.deny_res):\n return False\n parsed_url = urlparse(link.url)\n if self.allow_domains and not url_is_from_any_domain(parsed_url, self.allow_domains):\n return False\n if self.deny_domains and url_is_from_any_domain(parsed_url, self.deny_domains):\n return False\n if self.deny_extensions and url_has_any_extension(parsed_url, self.deny_extensions):\n return False\n return True\n\n def matches(self, url):\n\n if self.allow_domains and not url_is_from_any_domain(url, self.allow_domains):\n return False\n if self.deny_domains and url_is_from_any_domain(url, self.deny_domains):\n return False\n\n allowed = (regex.search(url) for regex in self.allow_res) if self.allow_res else [True]\n denied = (regex.search(url) for regex in self.deny_res) if self.deny_res else []\n return any(allowed) and not any(denied)\n\n def _process_links(self, links):\n links = [x for x in links if self._link_allowed(x)]\n if self.canonicalize:\n for link in links:\n link.url = canonicalize_url(link.url)\n links = self.link_extractor._process_links(links)\n return links\n\n def _extract_links(self, *args, **kwargs):\n return self.link_extractor._extract_links(*args, **kwargs)\n\n\n# Top-level imports\nfrom .lxmlhtml import LxmlLinkExtractor as LinkExtractor\n", "path": "scrapy/linkextractors/__init__.py"}], "after_files": [{"content": "\"\"\"\nscrapy.linkextractors\n\nThis package contains a collection of Link Extractors.\n\nFor more info see docs/topics/link-extractors.rst\n\"\"\"\nimport re\n\nfrom six.moves.urllib.parse import urlparse\nfrom parsel.csstranslator import HTMLTranslator\nfrom w3lib.url import canonicalize_url\n\nfrom scrapy.utils.misc import arg_to_iter\nfrom scrapy.utils.url import (\n url_is_from_any_domain, url_has_any_extension,\n)\n\n\n# common file extensions that are not followed if they occur in links\nIGNORED_EXTENSIONS = [\n # images\n 'mng', 'pct', 'bmp', 'gif', 'jpg', 'jpeg', 'png', 'pst', 'psp', 'tif',\n 'tiff', 'ai', 'drw', 'dxf', 'eps', 'ps', 'svg',\n\n # audio\n 'mp3', 'wma', 'ogg', 'wav', 'ra', 'aac', 'mid', 'au', 'aiff',\n\n # video\n '3gp', 'asf', 'asx', 'avi', 'mov', 'mp4', 'mpg', 'qt', 'rm', 'swf', 'wmv',\n 'm4a', 'm4v',\n\n # office suites\n 'xls', 'xlsx', 'ppt', 'pptx', 'pps', 'doc', 'docx', 'odt', 'ods', 'odg',\n 'odp',\n\n # other\n 'css', 'pdf', 'exe', 'bin', 'rss', 'zip', 'rar',\n]\n\n\n_re_type = type(re.compile(\"\", 0))\n_matches = lambda url, regexs: any(r.search(url) for r in regexs)\n_is_valid_url = lambda url: url.split('://', 1)[0] in {'http', 'https', 'file'}\n\n\nclass FilteringLinkExtractor(object):\n\n _csstranslator = HTMLTranslator()\n\n def __init__(self, link_extractor, allow, deny, allow_domains, deny_domains,\n restrict_xpaths, canonicalize, deny_extensions, restrict_css):\n\n self.link_extractor = link_extractor\n\n self.allow_res = [x if isinstance(x, _re_type) else re.compile(x)\n for x in arg_to_iter(allow)]\n self.deny_res = [x if isinstance(x, _re_type) else re.compile(x)\n for x in arg_to_iter(deny)]\n\n self.allow_domains = set(arg_to_iter(allow_domains))\n self.deny_domains = set(arg_to_iter(deny_domains))\n\n self.restrict_xpaths = tuple(arg_to_iter(restrict_xpaths))\n self.restrict_xpaths += tuple(map(self._csstranslator.css_to_xpath,\n arg_to_iter(restrict_css)))\n\n self.canonicalize = canonicalize\n 
if deny_extensions is None:\n deny_extensions = IGNORED_EXTENSIONS\n self.deny_extensions = {'.' + e for e in arg_to_iter(deny_extensions)}\n\n def _link_allowed(self, link):\n if not _is_valid_url(link.url):\n return False\n if self.allow_res and not _matches(link.url, self.allow_res):\n return False\n if self.deny_res and _matches(link.url, self.deny_res):\n return False\n parsed_url = urlparse(link.url)\n if self.allow_domains and not url_is_from_any_domain(parsed_url, self.allow_domains):\n return False\n if self.deny_domains and url_is_from_any_domain(parsed_url, self.deny_domains):\n return False\n if self.deny_extensions and url_has_any_extension(parsed_url, self.deny_extensions):\n return False\n return True\n\n def matches(self, url):\n\n if self.allow_domains and not url_is_from_any_domain(url, self.allow_domains):\n return False\n if self.deny_domains and url_is_from_any_domain(url, self.deny_domains):\n return False\n\n allowed = (regex.search(url) for regex in self.allow_res) if self.allow_res else [True]\n denied = (regex.search(url) for regex in self.deny_res) if self.deny_res else []\n return any(allowed) and not any(denied)\n\n def _process_links(self, links):\n links = [x for x in links if self._link_allowed(x)]\n if self.canonicalize:\n for link in links:\n link.url = canonicalize_url(link.url)\n links = self.link_extractor._process_links(links)\n return links\n\n def _extract_links(self, *args, **kwargs):\n return self.link_extractor._extract_links(*args, **kwargs)\n\n\n# Top-level imports\nfrom .lxmlhtml import LxmlLinkExtractor as LinkExtractor\n", "path": "scrapy/linkextractors/__init__.py"}]} | 1,570 | 166 |
gh_patches_debug_17330 | rasdani/github-patches | git_diff | bornhack__bornhack-website-187 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
include iban/swift/bic info on custominvoice pdfs
include iban/swift/bic info on custominvoice pdfs
--- END ISSUE ---
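For orientation, one plausible shape of the change — threading the bank details from Django settings into the PDF template context — is sketched below; the `BANKACCOUNT_*` names are assumptions modeled on this project's settings style, not confirmed configuration.
```python
# Sketch only: pass bank details into the custom-invoice PDF context.
# The BANKACCOUNT_* setting names are assumed, not verified against the project.
from django.conf import settings
from utils.pdf import generate_pdf_letter


def render_custominvoice(invoice):
    return generate_pdf_letter(
        filename=invoice.filename,
        template="pdf/custominvoice.html",
        formatdict={
            "invoice": invoice,
            "bank": settings.BANKACCOUNT_BANK,
            "bank_iban": settings.BANKACCOUNT_IBAN,
            "bank_bic": settings.BANKACCOUNT_SWIFTBIC,
        },
    )
```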
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/shop/invoiceworker.py`
Content:
```
1 from django.core.files import File
2 from utils.pdf import generate_pdf_letter
3 from shop.email import add_invoice_email, add_creditnote_email
4 from shop.models import Order, CustomOrder, Invoice, CreditNote
5 import logging
6 logging.basicConfig(level=logging.INFO)
7 logger = logging.getLogger('bornhack.%s' % __name__)
8
9
10 def do_work():
11 """
12 The invoice worker creates Invoice objects for shop orders and
13 for custom orders. It also generates PDF files for Invoice objects
14 that have no PDF. It also emails invoices for shop orders.
15 """
16
17 # check if we need to generate any invoices for shop orders
18 for order in Order.objects.filter(paid=True, invoice__isnull=True):
19 # generate invoice for this Order
20 Invoice.objects.create(order=order)
21 logger.info('Generated Invoice object for %s' % order)
22
23 # check if we need to generate any invoices for custom orders
24 for customorder in CustomOrder.objects.filter(invoice__isnull=True):
25 # generate invoice for this CustomOrder
26 Invoice.objects.create(customorder=customorder)
27 logger.info('Generated Invoice object for %s' % customorder)
28
29 # check if we need to generate any pdf invoices
30 for invoice in Invoice.objects.filter(pdf=''):
31 # generate the pdf
32 try:
33 if invoice.customorder:
34 template = 'pdf/custominvoice.html'
35 else:
36 template = 'pdf/invoice.html'
37 pdffile = generate_pdf_letter(
38 filename=invoice.filename,
39 template=template,
40 formatdict={
41 'invoice': invoice,
42 },
43 )
44 logger.info('Generated pdf for invoice %s' % invoice)
45 except Exception as E:
46 logger.exception('Unable to generate PDF file for invoice #%s. Error: %s' % (invoice.pk, E))
47 continue
48
49 # update invoice object with the file
50 invoice.pdf.save(invoice.filename, File(pdffile))
51 invoice.save()
52
53 # check if we need to send out any invoices (only for shop orders, and only where pdf has been generated)
54 for invoice in Invoice.objects.filter(order__isnull=False, sent_to_customer=False).exclude(pdf=''):
55 logger.info("found unmailed Invoice object: %s" % invoice)
56 # add email to the outgoing email queue
57 if add_invoice_email(invoice=invoice):
58 invoice.sent_to_customer = True
59 invoice.save()
60 logger.info('OK: Invoice email to {} added to queue.'.format(
61 invoice.order.user.email)
62 )
63 else:
64 logger.error('Unable to add email for invoice {} to {}'.format(
65 invoice.pk,
66 invoice.order.user.email
67 )
68 )
69
70 # check if we need to generate any pdf creditnotes?
71 for creditnote in CreditNote.objects.filter(pdf=''):
72 # generate the pdf
73 try:
74 pdffile = generate_pdf_letter(
75 filename=creditnote.filename,
76 template='pdf/creditnote.html',
77 formatdict={
78 'creditnote': creditnote,
79 },
80 )
81 logger.info('Generated pdf for creditnote %s' % creditnote)
82 except Exception as E:
83 logger.exception('Unable to generate PDF file for creditnote #%s. Error: %s' % (creditnote.pk, E))
84 continue
85
86 # update creditnote object with the file
87 creditnote.pdf.save(creditnote.filename, File(pdffile))
88 creditnote.save()
89
90 # check if we need to send out any creditnotes (only where pdf has been generated and only for creditnotes linked to a user)
91 for creditnote in CreditNote.objects.filter(sent_to_customer=False).exclude(pdf='').exclude(user=None):
92 # send the email
93 if add_creditnote_email(creditnote=creditnote):
94 logger.info('OK: Creditnote email to %s added' % creditnote.user.email)
95 creditnote.sent_to_customer = True
96 creditnote.save()
97 else:
98 logger.error('Unable to add creditnote email for creditnote %s to %s' % (creditnote.pk, creditnote.user.email))
99
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/shop/invoiceworker.py b/src/shop/invoiceworker.py
--- a/src/shop/invoiceworker.py
+++ b/src/shop/invoiceworker.py
@@ -1,3 +1,4 @@
+from django.conf import settings
from django.core.files import File
from utils.pdf import generate_pdf_letter
from shop.email import add_invoice_email, add_creditnote_email
@@ -39,6 +40,11 @@
template=template,
formatdict={
'invoice': invoice,
+ 'bank': settings.BANKACCOUNT_BANK,
+ 'bank_iban': settings.BANKACCOUNT_IBAN,
+ 'bank_bic': settings.BANKACCOUNT_SWIFTBIC,
+ 'bank_dk_reg': settings.BANKACCOUNT_REG,
+ 'bank_dk_accno': settings.BANKACCOUNT_ACCOUNT,
},
)
logger.info('Generated pdf for invoice %s' % invoice)
| {"golden_diff": "diff --git a/src/shop/invoiceworker.py b/src/shop/invoiceworker.py\n--- a/src/shop/invoiceworker.py\n+++ b/src/shop/invoiceworker.py\n@@ -1,3 +1,4 @@\n+from django.conf import settings\n from django.core.files import File\n from utils.pdf import generate_pdf_letter\n from shop.email import add_invoice_email, add_creditnote_email\n@@ -39,6 +40,11 @@\n template=template,\n formatdict={\n 'invoice': invoice,\n+ 'bank': settings.BANKACCOUNT_BANK,\n+ 'bank_iban': settings.BANKACCOUNT_IBAN,\n+ 'bank_bic': settings.BANKACCOUNT_SWIFTBIC,\n+ 'bank_dk_reg': settings.BANKACCOUNT_REG,\n+ 'bank_dk_accno': settings.BANKACCOUNT_ACCOUNT,\n },\n )\n logger.info('Generated pdf for invoice %s' % invoice)\n", "issue": "include iban/swift/bic info on custominvoice pdfs\n\ninclude iban/swift/bic info on custominvoice pdfs\n\n", "before_files": [{"content": "from django.core.files import File\nfrom utils.pdf import generate_pdf_letter\nfrom shop.email import add_invoice_email, add_creditnote_email\nfrom shop.models import Order, CustomOrder, Invoice, CreditNote\nimport logging\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger('bornhack.%s' % __name__)\n\n\ndef do_work():\n \"\"\"\n The invoice worker creates Invoice objects for shop orders and\n for custom orders. It also generates PDF files for Invoice objects\n that have no PDF. It also emails invoices for shop orders.\n \"\"\"\n\n # check if we need to generate any invoices for shop orders\n for order in Order.objects.filter(paid=True, invoice__isnull=True):\n # generate invoice for this Order\n Invoice.objects.create(order=order)\n logger.info('Generated Invoice object for %s' % order)\n\n # check if we need to generate any invoices for custom orders\n for customorder in CustomOrder.objects.filter(invoice__isnull=True):\n # generate invoice for this CustomOrder\n Invoice.objects.create(customorder=customorder)\n logger.info('Generated Invoice object for %s' % customorder)\n\n # check if we need to generate any pdf invoices\n for invoice in Invoice.objects.filter(pdf=''):\n # generate the pdf\n try:\n if invoice.customorder:\n template = 'pdf/custominvoice.html'\n else:\n template = 'pdf/invoice.html'\n pdffile = generate_pdf_letter(\n filename=invoice.filename,\n template=template,\n formatdict={\n 'invoice': invoice,\n },\n )\n logger.info('Generated pdf for invoice %s' % invoice)\n except Exception as E:\n logger.exception('Unable to generate PDF file for invoice #%s. 
Error: %s' % (invoice.pk, E))\n continue\n\n # update invoice object with the file\n invoice.pdf.save(invoice.filename, File(pdffile))\n invoice.save()\n\n # check if we need to send out any invoices (only for shop orders, and only where pdf has been generated)\n for invoice in Invoice.objects.filter(order__isnull=False, sent_to_customer=False).exclude(pdf=''):\n logger.info(\"found unmailed Invoice object: %s\" % invoice)\n # add email to the outgoing email queue\n if add_invoice_email(invoice=invoice):\n invoice.sent_to_customer = True\n invoice.save()\n logger.info('OK: Invoice email to {} added to queue.'.format(\n invoice.order.user.email)\n )\n else:\n logger.error('Unable to add email for invoice {} to {}'.format(\n invoice.pk,\n invoice.order.user.email\n )\n )\n\n # check if we need to generate any pdf creditnotes?\n for creditnote in CreditNote.objects.filter(pdf=''):\n # generate the pdf\n try:\n pdffile = generate_pdf_letter(\n filename=creditnote.filename,\n template='pdf/creditnote.html',\n formatdict={\n 'creditnote': creditnote,\n },\n )\n logger.info('Generated pdf for creditnote %s' % creditnote)\n except Exception as E:\n logger.exception('Unable to generate PDF file for creditnote #%s. Error: %s' % (creditnote.pk, E))\n continue\n\n # update creditnote object with the file\n creditnote.pdf.save(creditnote.filename, File(pdffile))\n creditnote.save()\n\n # check if we need to send out any creditnotes (only where pdf has been generated and only for creditnotes linked to a user)\n for creditnote in CreditNote.objects.filter(sent_to_customer=False).exclude(pdf='').exclude(user=None):\n # send the email\n if add_creditnote_email(creditnote=creditnote):\n logger.info('OK: Creditnote email to %s added' % creditnote.user.email)\n creditnote.sent_to_customer = True\n creditnote.save()\n else:\n logger.error('Unable to add creditnote email for creditnote %s to %s' % (creditnote.pk, creditnote.user.email))\n\n", "path": "src/shop/invoiceworker.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.core.files import File\nfrom utils.pdf import generate_pdf_letter\nfrom shop.email import add_invoice_email, add_creditnote_email\nfrom shop.models import Order, CustomOrder, Invoice, CreditNote\nimport logging\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger('bornhack.%s' % __name__)\n\n\ndef do_work():\n \"\"\"\n The invoice worker creates Invoice objects for shop orders and\n for custom orders. It also generates PDF files for Invoice objects\n that have no PDF. 
It also emails invoices for shop orders.\n \"\"\"\n\n # check if we need to generate any invoices for shop orders\n for order in Order.objects.filter(paid=True, invoice__isnull=True):\n # generate invoice for this Order\n Invoice.objects.create(order=order)\n logger.info('Generated Invoice object for %s' % order)\n\n # check if we need to generate any invoices for custom orders\n for customorder in CustomOrder.objects.filter(invoice__isnull=True):\n # generate invoice for this CustomOrder\n Invoice.objects.create(customorder=customorder)\n logger.info('Generated Invoice object for %s' % customorder)\n\n # check if we need to generate any pdf invoices\n for invoice in Invoice.objects.filter(pdf=''):\n # generate the pdf\n try:\n if invoice.customorder:\n template = 'pdf/custominvoice.html'\n else:\n template = 'pdf/invoice.html'\n pdffile = generate_pdf_letter(\n filename=invoice.filename,\n template=template,\n formatdict={\n 'invoice': invoice,\n 'bank': settings.BANKACCOUNT_BANK,\n 'bank_iban': settings.BANKACCOUNT_IBAN,\n 'bank_bic': settings.BANKACCOUNT_SWIFTBIC,\n 'bank_dk_reg': settings.BANKACCOUNT_REG,\n 'bank_dk_accno': settings.BANKACCOUNT_ACCOUNT,\n },\n )\n logger.info('Generated pdf for invoice %s' % invoice)\n except Exception as E:\n logger.exception('Unable to generate PDF file for invoice #%s. Error: %s' % (invoice.pk, E))\n continue\n\n # update invoice object with the file\n invoice.pdf.save(invoice.filename, File(pdffile))\n invoice.save()\n\n # check if we need to send out any invoices (only for shop orders, and only where pdf has been generated)\n for invoice in Invoice.objects.filter(order__isnull=False, sent_to_customer=False).exclude(pdf=''):\n logger.info(\"found unmailed Invoice object: %s\" % invoice)\n # add email to the outgoing email queue\n if add_invoice_email(invoice=invoice):\n invoice.sent_to_customer = True\n invoice.save()\n logger.info('OK: Invoice email to {} added to queue.'.format(\n invoice.order.user.email)\n )\n else:\n logger.error('Unable to add email for invoice {} to {}'.format(\n invoice.pk,\n invoice.order.user.email\n )\n )\n\n # check if we need to generate any pdf creditnotes?\n for creditnote in CreditNote.objects.filter(pdf=''):\n # generate the pdf\n try:\n pdffile = generate_pdf_letter(\n filename=creditnote.filename,\n template='pdf/creditnote.html',\n formatdict={\n 'creditnote': creditnote,\n },\n )\n logger.info('Generated pdf for creditnote %s' % creditnote)\n except Exception as E:\n logger.exception('Unable to generate PDF file for creditnote #%s. Error: %s' % (creditnote.pk, E))\n continue\n\n # update creditnote object with the file\n creditnote.pdf.save(creditnote.filename, File(pdffile))\n creditnote.save()\n\n # check if we need to send out any creditnotes (only where pdf has been generated and only for creditnotes linked to a user)\n for creditnote in CreditNote.objects.filter(sent_to_customer=False).exclude(pdf='').exclude(user=None):\n # send the email\n if add_creditnote_email(creditnote=creditnote):\n logger.info('OK: Creditnote email to %s added' % creditnote.user.email)\n creditnote.sent_to_customer = True\n creditnote.save()\n else:\n logger.error('Unable to add creditnote email for creditnote %s to %s' % (creditnote.pk, creditnote.user.email))\n\n", "path": "src/shop/invoiceworker.py"}]} | 1,342 | 205 |
gh_patches_debug_10518 | rasdani/github-patches | git_diff | Mailu__Mailu-1052 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Regular alpine base-image update
To receive current updates for upstream projects, security fixes and new features such as mentioned in #1049 , let’s upgrade the alpine baseimage to latest 3.10
--- END ISSUE ---
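Most of the work is the image tag itself, but the postfix startup script is the part that tends to break on such bumps; below is a minimal sketch of a foreground-postfix tail for the script, where the `/usr/libexec` path and `postfix start-fg` are assumptions about the newer Alpine postfix packaging.
```python
# Sketch: on a newer Alpine base, run postfix in the foreground directly
# instead of juggling master + rsyslogd. Paths/flags are assumptions.
import os

os.system("/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing")
os.system("postfix start-fg")  # would replace the master-&-rsyslogd combination
```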
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/postfix/start.py`
Content:
```
1 #!/usr/bin/python3
2
3 import os
4 import glob
5 import shutil
6 import multiprocessing
7 import logging as log
8 import sys
9 from mailustart import resolve, convert
10
11 from podop import run_server
12
13 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "WARNING"))
14
15 def start_podop():
16 os.setuid(100)
17 url = "http://" + os.environ["ADMIN_ADDRESS"] + "/internal/postfix/"
18 # TODO: Remove verbosity setting from Podop?
19 run_server(0, "postfix", "/tmp/podop.socket", [
20 ("transport", "url", url + "transport/§"),
21 ("alias", "url", url + "alias/§"),
22 ("domain", "url", url + "domain/§"),
23 ("mailbox", "url", url + "mailbox/§"),
24 ("senderaccess", "url", url + "sender/access/§"),
25 ("senderlogin", "url", url + "sender/login/§")
26 ])
27
28 # Actual startup script
29 os.environ["FRONT_ADDRESS"] = resolve(os.environ.get("FRONT_ADDRESS", "front"))
30 os.environ["ADMIN_ADDRESS"] = resolve(os.environ.get("ADMIN_ADDRESS", "admin"))
31 os.environ["HOST_ANTISPAM"] = resolve(os.environ.get("HOST_ANTISPAM", "antispam:11332"))
32 os.environ["HOST_LMTP"] = resolve(os.environ.get("HOST_LMTP", "imap:2525"))
33
34 for postfix_file in glob.glob("/conf/*.cf"):
35 convert(postfix_file, os.path.join("/etc/postfix", os.path.basename(postfix_file)))
36
37 if os.path.exists("/overrides/postfix.cf"):
38 for line in open("/overrides/postfix.cf").read().strip().split("\n"):
39 os.system('postconf -e "{}"'.format(line))
40
41 if os.path.exists("/overrides/postfix.master"):
42 for line in open("/overrides/postfix.master").read().strip().split("\n"):
43 os.system('postconf -Me "{}"'.format(line))
44
45 for map_file in glob.glob("/overrides/*.map"):
46 destination = os.path.join("/etc/postfix", os.path.basename(map_file))
47 shutil.copyfile(map_file, destination)
48 os.system("postmap {}".format(destination))
49 os.remove(destination)
50
51 if "RELAYUSER" in os.environ:
52 path = "/etc/postfix/sasl_passwd"
53 convert("/conf/sasl_passwd", path)
54 os.system("postmap {}".format(path))
55
56 convert("/conf/rsyslog.conf", "/etc/rsyslog.conf")
57
58 # Run Podop and Postfix
59 multiprocessing.Process(target=start_podop).start()
60 if os.path.exists("/var/run/rsyslogd.pid"):
61 os.remove("/var/run/rsyslogd.pid")
62 os.system("/usr/lib/postfix/post-install meta_directory=/etc/postfix create-missing")
63 os.system("/usr/lib/postfix/master &")
64 os.execv("/usr/sbin/rsyslogd", ["rsyslogd", "-n"])
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/postfix/start.py b/core/postfix/start.py
--- a/core/postfix/start.py
+++ b/core/postfix/start.py
@@ -53,12 +53,7 @@
convert("/conf/sasl_passwd", path)
os.system("postmap {}".format(path))
-convert("/conf/rsyslog.conf", "/etc/rsyslog.conf")
-
# Run Podop and Postfix
multiprocessing.Process(target=start_podop).start()
-if os.path.exists("/var/run/rsyslogd.pid"):
- os.remove("/var/run/rsyslogd.pid")
-os.system("/usr/lib/postfix/post-install meta_directory=/etc/postfix create-missing")
-os.system("/usr/lib/postfix/master &")
-os.execv("/usr/sbin/rsyslogd", ["rsyslogd", "-n"])
+os.system("/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing")
+os.system("postfix start-fg")
| {"golden_diff": "diff --git a/core/postfix/start.py b/core/postfix/start.py\n--- a/core/postfix/start.py\n+++ b/core/postfix/start.py\n@@ -53,12 +53,7 @@\n convert(\"/conf/sasl_passwd\", path)\n os.system(\"postmap {}\".format(path))\n \n-convert(\"/conf/rsyslog.conf\", \"/etc/rsyslog.conf\")\n-\n # Run Podop and Postfix\n multiprocessing.Process(target=start_podop).start()\n-if os.path.exists(\"/var/run/rsyslogd.pid\"):\n- os.remove(\"/var/run/rsyslogd.pid\")\n-os.system(\"/usr/lib/postfix/post-install meta_directory=/etc/postfix create-missing\")\n-os.system(\"/usr/lib/postfix/master &\")\n-os.execv(\"/usr/sbin/rsyslogd\", [\"rsyslogd\", \"-n\"])\n+os.system(\"/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing\")\n+os.system(\"postfix start-fg\")\n", "issue": "Regular alpine base-image update\nTo receive current updates for upstream projects, security fixes and new features such as mentioned in #1049 , let\u2019s upgrade the alpine baseimage to latest 3.10\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport logging as log\nimport sys\nfrom mailustart import resolve, convert\n\nfrom podop import run_server\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop():\n os.setuid(100)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/postfix/\"\n # TODO: Remove verbosity setting from Podop?\n run_server(0, \"postfix\", \"/tmp/podop.socket\", [\n\t\t(\"transport\", \"url\", url + \"transport/\u00a7\"),\n\t\t(\"alias\", \"url\", url + \"alias/\u00a7\"),\n\t\t(\"domain\", \"url\", url + \"domain/\u00a7\"),\n (\"mailbox\", \"url\", url + \"mailbox/\u00a7\"),\n (\"senderaccess\", \"url\", url + \"sender/access/\u00a7\"),\n (\"senderlogin\", \"url\", url + \"sender/login/\u00a7\")\n ])\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = resolve(os.environ.get(\"FRONT_ADDRESS\", \"front\"))\nos.environ[\"ADMIN_ADDRESS\"] = resolve(os.environ.get(\"ADMIN_ADDRESS\", \"admin\"))\nos.environ[\"HOST_ANTISPAM\"] = resolve(os.environ.get(\"HOST_ANTISPAM\", \"antispam:11332\"))\nos.environ[\"HOST_LMTP\"] = resolve(os.environ.get(\"HOST_LMTP\", \"imap:2525\"))\n\nfor postfix_file in glob.glob(\"/conf/*.cf\"):\n convert(postfix_file, os.path.join(\"/etc/postfix\", os.path.basename(postfix_file)))\n\nif os.path.exists(\"/overrides/postfix.cf\"):\n for line in open(\"/overrides/postfix.cf\").read().strip().split(\"\\n\"):\n os.system('postconf -e \"{}\"'.format(line))\n\nif os.path.exists(\"/overrides/postfix.master\"):\n for line in open(\"/overrides/postfix.master\").read().strip().split(\"\\n\"):\n os.system('postconf -Me \"{}\"'.format(line))\n\nfor map_file in glob.glob(\"/overrides/*.map\"):\n destination = os.path.join(\"/etc/postfix\", os.path.basename(map_file))\n shutil.copyfile(map_file, destination)\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n\nif \"RELAYUSER\" in os.environ:\n path = \"/etc/postfix/sasl_passwd\"\n convert(\"/conf/sasl_passwd\", path)\n os.system(\"postmap {}\".format(path))\n\nconvert(\"/conf/rsyslog.conf\", \"/etc/rsyslog.conf\")\n\n# Run Podop and Postfix\nmultiprocessing.Process(target=start_podop).start()\nif os.path.exists(\"/var/run/rsyslogd.pid\"):\n os.remove(\"/var/run/rsyslogd.pid\")\nos.system(\"/usr/lib/postfix/post-install meta_directory=/etc/postfix create-missing\")\nos.system(\"/usr/lib/postfix/master &\")\nos.execv(\"/usr/sbin/rsyslogd\", [\"rsyslogd\", 
\"-n\"])\n", "path": "core/postfix/start.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport shutil\nimport multiprocessing\nimport logging as log\nimport sys\nfrom mailustart import resolve, convert\n\nfrom podop import run_server\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop():\n os.setuid(100)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/postfix/\"\n # TODO: Remove verbosity setting from Podop?\n run_server(0, \"postfix\", \"/tmp/podop.socket\", [\n\t\t(\"transport\", \"url\", url + \"transport/\u00a7\"),\n\t\t(\"alias\", \"url\", url + \"alias/\u00a7\"),\n\t\t(\"domain\", \"url\", url + \"domain/\u00a7\"),\n (\"mailbox\", \"url\", url + \"mailbox/\u00a7\"),\n (\"senderaccess\", \"url\", url + \"sender/access/\u00a7\"),\n (\"senderlogin\", \"url\", url + \"sender/login/\u00a7\")\n ])\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = resolve(os.environ.get(\"FRONT_ADDRESS\", \"front\"))\nos.environ[\"ADMIN_ADDRESS\"] = resolve(os.environ.get(\"ADMIN_ADDRESS\", \"admin\"))\nos.environ[\"HOST_ANTISPAM\"] = resolve(os.environ.get(\"HOST_ANTISPAM\", \"antispam:11332\"))\nos.environ[\"HOST_LMTP\"] = resolve(os.environ.get(\"HOST_LMTP\", \"imap:2525\"))\n\nfor postfix_file in glob.glob(\"/conf/*.cf\"):\n convert(postfix_file, os.path.join(\"/etc/postfix\", os.path.basename(postfix_file)))\n\nif os.path.exists(\"/overrides/postfix.cf\"):\n for line in open(\"/overrides/postfix.cf\").read().strip().split(\"\\n\"):\n os.system('postconf -e \"{}\"'.format(line))\n\nif os.path.exists(\"/overrides/postfix.master\"):\n for line in open(\"/overrides/postfix.master\").read().strip().split(\"\\n\"):\n os.system('postconf -Me \"{}\"'.format(line))\n\nfor map_file in glob.glob(\"/overrides/*.map\"):\n destination = os.path.join(\"/etc/postfix\", os.path.basename(map_file))\n shutil.copyfile(map_file, destination)\n os.system(\"postmap {}\".format(destination))\n os.remove(destination)\n\nif \"RELAYUSER\" in os.environ:\n path = \"/etc/postfix/sasl_passwd\"\n convert(\"/conf/sasl_passwd\", path)\n os.system(\"postmap {}\".format(path))\n\n# Run Podop and Postfix\nmultiprocessing.Process(target=start_podop).start()\nos.system(\"/usr/libexec/postfix/post-install meta_directory=/etc/postfix create-missing\")\nos.system(\"postfix start-fg\")\n", "path": "core/postfix/start.py"}]} | 1,070 | 206 |
gh_patches_debug_967 | rasdani/github-patches | git_diff | ivy-llc__ivy-16680 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
rad2deg
--- END ISSUE ---
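Given only the function name, the natural reading is a Paddle-frontend wrapper for `ivy.rad2deg` in the same style as the neighbouring functions in the file below; a minimal sketch (the dtype decorator is copied from the file's existing pattern, not independently verified):
```python
# Sketch of a rad2deg frontend following the existing wrapper pattern.
import ivy
from ivy.func_wrapper import with_unsupported_dtypes
from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back


@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
@to_ivy_arrays_and_back
def rad2deg(x, name=None):
    return ivy.rad2deg(x)
```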
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/tensor/math.py`
Content:
```
1 # global
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import (
5 to_ivy_arrays_and_back,
6 )
7
8
9 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
10 @to_ivy_arrays_and_back
11 def sin(x, name=None):
12 return ivy.sin(x)
13
14
15 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
16 @to_ivy_arrays_and_back
17 def cos(x, name=None):
18 return ivy.cos(x)
19
20
21 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
22 @to_ivy_arrays_and_back
23 def acos(x, name=None):
24 return ivy.acos(x)
25
26
27 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
28 @to_ivy_arrays_and_back
29 def cosh(x, name=None):
30 return ivy.cosh(x)
31
32
33 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
34 @to_ivy_arrays_and_back
35 def tanh(x, name=None):
36 return ivy.tanh(x)
37
38
39 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
40 @to_ivy_arrays_and_back
41 def acosh(x, name=None):
42 return ivy.acosh(x)
43
44
45 @with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
46 @to_ivy_arrays_and_back
47 def asin(x, name=None):
48 return ivy.asin(x)
49
50
51 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
52 @to_ivy_arrays_and_back
53 def log(x, name=None):
54 return ivy.log(x)
55
56
57 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
58 @to_ivy_arrays_and_back
59 def divide(x, y, name=None):
60 return ivy.divide(x, y)
61
62
63 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
64 @to_ivy_arrays_and_back
65 def abs(x, name=None):
66 return ivy.abs(x)
67
68
69 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
70 @to_ivy_arrays_and_back
71 def multiply(x, y, name=None):
72 return ivy.multiply(x, y)
73
74
75 @with_unsupported_dtypes(
76 {"2.4.2 and below": ("bool", "unsigned", "int8", "float16", "bfloat16")}, "paddle"
77 )
78 @to_ivy_arrays_and_back
79 def add(x, y, name=None):
80 return ivy.add(x, y)
81
82
83 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
84 @to_ivy_arrays_and_back
85 def subtract(x, y, name=None):
86 return ivy.subtract(x, y)
87
88
89 @with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
90 @to_ivy_arrays_and_back
91 def sqrt(x, name=None):
92 return ivy.sqrt(x)
93
94
95 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
96 @to_ivy_arrays_and_back
97 def atanh(x, name=None):
98 return ivy.atanh(x)
99
100
101 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
102 @to_ivy_arrays_and_back
103 def atan(x, name=None):
104 return ivy.atan(x)
105
106
107 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
108 @to_ivy_arrays_and_back
109 def round(x, name=None):
110 return ivy.round(x)
111
112
113 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
114 @to_ivy_arrays_and_back
115 def ceil(x, name=None):
116 return ivy.ceil(x)
117
118
119 @with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
120 @to_ivy_arrays_and_back
121 def sinh(x, name=None):
122 return ivy.sinh(x)
123
124
125 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
126 @to_ivy_arrays_and_back
127 def pow(x, y, name=None):
128 return ivy.pow(x, y)
129
130
131 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
132 @to_ivy_arrays_and_back
133 def floor(x, name=None):
134 return ivy.floor(x)
135
136
137 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
138 @to_ivy_arrays_and_back
139 def remainder(x, y, name=None):
140 return ivy.remainder(x, y)
141
142
143 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
144 @to_ivy_arrays_and_back
145 def log2(x, name=None):
146 return ivy.log2(x)
147
148
149 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
150 @to_ivy_arrays_and_back
151 def log1p(x, name=None):
152 return ivy.log1p(x)
153
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/paddle/tensor/math.py b/ivy/functional/frontends/paddle/tensor/math.py
--- a/ivy/functional/frontends/paddle/tensor/math.py
+++ b/ivy/functional/frontends/paddle/tensor/math.py
@@ -150,3 +150,9 @@
@to_ivy_arrays_and_back
def log1p(x, name=None):
return ivy.log1p(x)
+
+
+@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
+@to_ivy_arrays_and_back
+def rad2deg(x, name=None):
+ return ivy.rad2deg(x)
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/math.py b/ivy/functional/frontends/paddle/tensor/math.py\n--- a/ivy/functional/frontends/paddle/tensor/math.py\n+++ b/ivy/functional/frontends/paddle/tensor/math.py\n@@ -150,3 +150,9 @@\n @to_ivy_arrays_and_back\n def log1p(x, name=None):\n return ivy.log1p(x)\n+\n+\n+@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def rad2deg(x, name=None):\n+ return ivy.rad2deg(x)\n", "issue": "rad2deg\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sin(x, name=None):\n return ivy.sin(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cos(x, name=None):\n return ivy.cos(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acos(x, name=None):\n return ivy.acos(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cosh(x, name=None):\n return ivy.cosh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tanh(x, name=None):\n return ivy.tanh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acosh(x, name=None):\n return ivy.acosh(x)\n\n\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef asin(x, name=None):\n return ivy.asin(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log(x, name=None):\n return ivy.log(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef divide(x, y, name=None):\n return ivy.divide(x, y)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef multiply(x, y, name=None):\n return ivy.multiply(x, y)\n\n\n@with_unsupported_dtypes(\n {\"2.4.2 and below\": (\"bool\", \"unsigned\", \"int8\", \"float16\", \"bfloat16\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef add(x, y, name=None):\n return ivy.add(x, y)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef subtract(x, y, name=None):\n return ivy.subtract(x, y)\n\n\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sqrt(x, name=None):\n return ivy.sqrt(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atanh(x, name=None):\n return ivy.atanh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atan(x, name=None):\n return ivy.atan(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, 
\"paddle\")\n@to_ivy_arrays_and_back\ndef round(x, name=None):\n return ivy.round(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef ceil(x, name=None):\n return ivy.ceil(x)\n\n\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sinh(x, name=None):\n return ivy.sinh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pow(x, y, name=None):\n return ivy.pow(x, y)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef floor(x, name=None):\n return ivy.floor(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef remainder(x, y, name=None):\n return ivy.remainder(x, y)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log2(x, name=None):\n return ivy.log2(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log1p(x, name=None):\n return ivy.log1p(x)\n", "path": "ivy/functional/frontends/paddle/tensor/math.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sin(x, name=None):\n return ivy.sin(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cos(x, name=None):\n return ivy.cos(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acos(x, name=None):\n return ivy.acos(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cosh(x, name=None):\n return ivy.cosh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tanh(x, name=None):\n return ivy.tanh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acosh(x, name=None):\n return ivy.acosh(x)\n\n\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef asin(x, name=None):\n return ivy.asin(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log(x, name=None):\n return ivy.log(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef divide(x, y, name=None):\n return ivy.divide(x, y)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef multiply(x, y, name=None):\n return ivy.multiply(x, y)\n\n\n@with_unsupported_dtypes(\n {\"2.4.2 and below\": (\"bool\", \"unsigned\", \"int8\", \"float16\", \"bfloat16\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef add(x, y, 
name=None):\n return ivy.add(x, y)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef subtract(x, y, name=None):\n return ivy.subtract(x, y)\n\n\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sqrt(x, name=None):\n return ivy.sqrt(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atanh(x, name=None):\n return ivy.atanh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atan(x, name=None):\n return ivy.atan(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef round(x, name=None):\n return ivy.round(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef ceil(x, name=None):\n return ivy.ceil(x)\n\n\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sinh(x, name=None):\n return ivy.sinh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pow(x, y, name=None):\n return ivy.pow(x, y)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef floor(x, name=None):\n return ivy.floor(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef remainder(x, y, name=None):\n return ivy.remainder(x, y)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log2(x, name=None):\n return ivy.log2(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log1p(x, name=None):\n return ivy.log1p(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef rad2deg(x, name=None):\n return ivy.rad2deg(x)\n", "path": "ivy/functional/frontends/paddle/tensor/math.py"}]} | 2,036 | 163 |
gh_patches_debug_8478 | rasdani/github-patches | git_diff | ocf__ocfweb-535 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
versioned caching is broken
On [this line](https://github.com/ocf/ocfweb/blob/e0ca5b9053abf4908fd6dd3ff344724e61c38b53/ocfweb/environment.py#L15) we use a marathon environment variable to determine the cache version. We need to change this to use a Kubernetes things, a la
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/.
--- END ISSUE ---
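The linked Kubernetes mechanism injects deployment metadata as ordinary environment variables, so on the app side the lookup reduces to an `os.environ` read; a minimal sketch follows, where the `OCFWEB_PROD_VERSION` variable name is an assumption about how the deployment would expose the version.
```python
import os
from functools import lru_cache


@lru_cache()
def ocfweb_version():
    # Sketch: the deployment injects the version as an env var;
    # the variable name here is assumed, not taken from real manifests.
    return os.environ.get("OCFWEB_PROD_VERSION", "dev")
```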
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ocfweb/environment.py`
Content:
```
1 """Introspection into the current environment."""
2 import os
3 from functools import lru_cache
4
5
6 @lru_cache()
7 def ocfweb_version():
8 """Return string representing ocfweb version.
9
10 In dev, returns 'dev'. In prod, returns a version
11 similar to '2015.12.06.02.25-gitb98c8cb6'.
12 """
13 # On Marathon, read it out of environ
14 try:
15 docker_image, tag = os.environ['MARATHON_APP_DOCKER_IMAGE'].split(':', 1)
16 return tag
17 except KeyError:
18 pass
19
20 # Otherwise, we must be in dev.
21 return 'dev'
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ocfweb/environment.py b/ocfweb/environment.py
--- a/ocfweb/environment.py
+++ b/ocfweb/environment.py
@@ -8,12 +8,11 @@
"""Return string representing ocfweb version.
In dev, returns 'dev'. In prod, returns a version
- similar to '2015.12.06.02.25-gitb98c8cb6'.
+ similar to '2019-08-20-T21-10-57-gite0ca5b9'.
"""
- # On Marathon, read it out of environ
+ # On Kubernetes, read it out of environ
try:
- docker_image, tag = os.environ['MARATHON_APP_DOCKER_IMAGE'].split(':', 1)
- return tag
+ return os.environ['OCFWEB_PROD_VERSION']
except KeyError:
pass
| {"golden_diff": "diff --git a/ocfweb/environment.py b/ocfweb/environment.py\n--- a/ocfweb/environment.py\n+++ b/ocfweb/environment.py\n@@ -8,12 +8,11 @@\n \"\"\"Return string representing ocfweb version.\n \n In dev, returns 'dev'. In prod, returns a version\n- similar to '2015.12.06.02.25-gitb98c8cb6'.\n+ similar to '2019-08-20-T21-10-57-gite0ca5b9'.\n \"\"\"\n- # On Marathon, read it out of environ\n+ # On Kubernetes, read it out of environ\n try:\n- docker_image, tag = os.environ['MARATHON_APP_DOCKER_IMAGE'].split(':', 1)\n- return tag\n+ return os.environ['OCFWEB_PROD_VERSION']\n except KeyError:\n pass\n", "issue": "versioned caching is broken\nOn [this line](https://github.com/ocf/ocfweb/blob/e0ca5b9053abf4908fd6dd3ff344724e61c38b53/ocfweb/environment.py#L15) we use a marathon environment variable to determine the cache version. We need to change this to use a Kubernetes things, a la \r\nhttps://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/.\n", "before_files": [{"content": "\"\"\"Introspection into the current environment.\"\"\"\nimport os\nfrom functools import lru_cache\n\n\n@lru_cache()\ndef ocfweb_version():\n \"\"\"Return string representing ocfweb version.\n\n In dev, returns 'dev'. In prod, returns a version\n similar to '2015.12.06.02.25-gitb98c8cb6'.\n \"\"\"\n # On Marathon, read it out of environ\n try:\n docker_image, tag = os.environ['MARATHON_APP_DOCKER_IMAGE'].split(':', 1)\n return tag\n except KeyError:\n pass\n\n # Otherwise, we must be in dev.\n return 'dev'\n", "path": "ocfweb/environment.py"}], "after_files": [{"content": "\"\"\"Introspection into the current environment.\"\"\"\nimport os\nfrom functools import lru_cache\n\n\n@lru_cache()\ndef ocfweb_version():\n \"\"\"Return string representing ocfweb version.\n\n In dev, returns 'dev'. In prod, returns a version\n similar to '2019-08-20-T21-10-57-gite0ca5b9'.\n \"\"\"\n # On Kubernetes, read it out of environ\n try:\n return os.environ['OCFWEB_PROD_VERSION']\n except KeyError:\n pass\n\n # Otherwise, we must be in dev.\n return 'dev'\n", "path": "ocfweb/environment.py"}]} | 561 | 213 |
gh_patches_debug_6883 | rasdani/github-patches | git_diff | shuup__shuup-1573 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Admin: Fix category name in category edit view
Do not show the category with hierarchy on breadcrumbs and title

--- END ISSUE ---
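In a Django class-based edit view this usually reduces to overriding the title placed into the template context; a rough sketch of the idea is below — it is a method fragment meant for the `CategoryEditView` class shown later, not a standalone module.
```python
# Sketch (method fragment for CategoryEditView, not standalone code):
def get_context_data(self, **kwargs):
    context = super(CategoryEditView, self).get_context_data(**kwargs)
    if self.object.pk:  # editing an existing category
        context["title"] = self.object.name  # plain name, no hierarchy path
    return context
```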
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shuup/admin/modules/categories/views/edit.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This file is part of Shuup.
3 #
4 # Copyright (c) 2012-2018, Shuup Inc. All rights reserved.
5 #
6 # This source code is licensed under the OSL-3.0 license found in the
7 # LICENSE file in the root directory of this source tree.
8 from django.core.urlresolvers import reverse_lazy
9
10 from shuup.admin.form_part import FormPartsViewMixin, SaveFormPartsMixin
11 from shuup.admin.modules.categories.form_parts import (
12 CategoryBaseFormPart, CategoryProductFormPart
13 )
14 from shuup.admin.shop_provider import get_shop
15 from shuup.admin.toolbar import get_default_edit_toolbar
16 from shuup.admin.utils.tour import is_tour_complete
17 from shuup.admin.utils.views import CreateOrUpdateView
18 from shuup.core.models import Category
19
20
21 class CategoryEditView(SaveFormPartsMixin, FormPartsViewMixin, CreateOrUpdateView):
22 model = Category
23 template_name = "shuup/admin/categories/edit.jinja"
24 context_object_name = "category"
25 base_form_part_classes = [CategoryBaseFormPart, CategoryProductFormPart]
26 form_part_class_provide_key = "admin_category_form_part"
27
28 def get_toolbar(self):
29 save_form_id = self.get_save_form_id()
30 object = self.get_object()
31 delete_url = reverse_lazy("shuup_admin:category.delete", kwargs={"pk": object.pk}) if object.pk else None
32 return get_default_edit_toolbar(self, save_form_id, delete_url=delete_url)
33
34 def get_context_data(self, **kwargs):
35 context = super(CategoryEditView, self).get_context_data(**kwargs)
36 context["tour_key"] = "category"
37 context["tour_complete"] = is_tour_complete(get_shop(self.request), "category")
38 return context
39
40 def form_valid(self, form):
41 return self.save_form_parts(form)
42
43 def get_queryset(self):
44 return Category.objects.all_except_deleted(shop=get_shop(self.request))
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/shuup/admin/modules/categories/views/edit.py b/shuup/admin/modules/categories/views/edit.py
--- a/shuup/admin/modules/categories/views/edit.py
+++ b/shuup/admin/modules/categories/views/edit.py
@@ -35,6 +35,9 @@
context = super(CategoryEditView, self).get_context_data(**kwargs)
context["tour_key"] = "category"
context["tour_complete"] = is_tour_complete(get_shop(self.request), "category")
+ if self.object.pk:
+ context["title"] = self.object.name
+
return context
def form_valid(self, form):
| {"golden_diff": "diff --git a/shuup/admin/modules/categories/views/edit.py b/shuup/admin/modules/categories/views/edit.py\n--- a/shuup/admin/modules/categories/views/edit.py\n+++ b/shuup/admin/modules/categories/views/edit.py\n@@ -35,6 +35,9 @@\n context = super(CategoryEditView, self).get_context_data(**kwargs)\n context[\"tour_key\"] = \"category\"\n context[\"tour_complete\"] = is_tour_complete(get_shop(self.request), \"category\")\n+ if self.object.pk:\n+ context[\"title\"] = self.object.name\n+\n return context\n \n def form_valid(self, form):\n", "issue": "Admin: Fix category name in category edit view\nDo not show the category with hierarchy on breadcrumbs and title\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of Shuup.\n#\n# Copyright (c) 2012-2018, Shuup Inc. All rights reserved.\n#\n# This source code is licensed under the OSL-3.0 license found in the\n# LICENSE file in the root directory of this source tree.\nfrom django.core.urlresolvers import reverse_lazy\n\nfrom shuup.admin.form_part import FormPartsViewMixin, SaveFormPartsMixin\nfrom shuup.admin.modules.categories.form_parts import (\n CategoryBaseFormPart, CategoryProductFormPart\n)\nfrom shuup.admin.shop_provider import get_shop\nfrom shuup.admin.toolbar import get_default_edit_toolbar\nfrom shuup.admin.utils.tour import is_tour_complete\nfrom shuup.admin.utils.views import CreateOrUpdateView\nfrom shuup.core.models import Category\n\n\nclass CategoryEditView(SaveFormPartsMixin, FormPartsViewMixin, CreateOrUpdateView):\n model = Category\n template_name = \"shuup/admin/categories/edit.jinja\"\n context_object_name = \"category\"\n base_form_part_classes = [CategoryBaseFormPart, CategoryProductFormPart]\n form_part_class_provide_key = \"admin_category_form_part\"\n\n def get_toolbar(self):\n save_form_id = self.get_save_form_id()\n object = self.get_object()\n delete_url = reverse_lazy(\"shuup_admin:category.delete\", kwargs={\"pk\": object.pk}) if object.pk else None\n return get_default_edit_toolbar(self, save_form_id, delete_url=delete_url)\n\n def get_context_data(self, **kwargs):\n context = super(CategoryEditView, self).get_context_data(**kwargs)\n context[\"tour_key\"] = \"category\"\n context[\"tour_complete\"] = is_tour_complete(get_shop(self.request), \"category\")\n return context\n\n def form_valid(self, form):\n return self.save_form_parts(form)\n\n def get_queryset(self):\n return Category.objects.all_except_deleted(shop=get_shop(self.request))\n", "path": "shuup/admin/modules/categories/views/edit.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of Shuup.\n#\n# Copyright (c) 2012-2018, Shuup Inc. 
All rights reserved.\n#\n# This source code is licensed under the OSL-3.0 license found in the\n# LICENSE file in the root directory of this source tree.\nfrom django.core.urlresolvers import reverse_lazy\n\nfrom shuup.admin.form_part import FormPartsViewMixin, SaveFormPartsMixin\nfrom shuup.admin.modules.categories.form_parts import (\n CategoryBaseFormPart, CategoryProductFormPart\n)\nfrom shuup.admin.shop_provider import get_shop\nfrom shuup.admin.toolbar import get_default_edit_toolbar\nfrom shuup.admin.utils.tour import is_tour_complete\nfrom shuup.admin.utils.views import CreateOrUpdateView\nfrom shuup.core.models import Category\n\n\nclass CategoryEditView(SaveFormPartsMixin, FormPartsViewMixin, CreateOrUpdateView):\n model = Category\n template_name = \"shuup/admin/categories/edit.jinja\"\n context_object_name = \"category\"\n base_form_part_classes = [CategoryBaseFormPart, CategoryProductFormPart]\n form_part_class_provide_key = \"admin_category_form_part\"\n\n def get_toolbar(self):\n save_form_id = self.get_save_form_id()\n object = self.get_object()\n delete_url = reverse_lazy(\"shuup_admin:category.delete\", kwargs={\"pk\": object.pk}) if object.pk else None\n return get_default_edit_toolbar(self, save_form_id, delete_url=delete_url)\n\n def get_context_data(self, **kwargs):\n context = super(CategoryEditView, self).get_context_data(**kwargs)\n context[\"tour_key\"] = \"category\"\n context[\"tour_complete\"] = is_tour_complete(get_shop(self.request), \"category\")\n if self.object.pk:\n context[\"title\"] = self.object.name\n\n return context\n\n def form_valid(self, form):\n return self.save_form_parts(form)\n\n def get_queryset(self):\n return Category.objects.all_except_deleted(shop=get_shop(self.request))\n", "path": "shuup/admin/modules/categories/views/edit.py"}]} | 863 | 136 |
gh_patches_debug_34446 | rasdani/github-patches | git_diff | litestar-org__litestar-1255 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Issue with reload
**Describe the bug**
When I start my app using `starlite run -r` I get the following error:
INFO: Will watch for changes in these directories: ['/Users/alc/starlite-apps/starlite_test']
ERROR: [Errno 48] Address already in use
If I stop that and run it without `-r` it works, so I guess my port is not actually blocked.
**To Reproduce**
Save this as `app.py`
```python
from starlite import get, Starlite
@get('/')
def route() -> str:
return 'Hello'
app = Starlite([route])
```
**Additional context**
@provinzkraut in the Discord help channel suggested a workaround using `uvicorn <app file>:<application object> --reload` (see the sketch just after this issue block) and also linked https://github.com/encode/uvicorn/issues/1045
--- END ISSUE ---
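The linked uvicorn issue concerns `--reload` requiring an application *import string* (so the reloader subprocess can re-import the app) rather than an app object; here is a minimal sketch of the programmatic form of that workaround, assuming the `app.py` module from the reproduction above.
```python
# Sketch: pass an import string, not the Starlite instance, so uvicorn's
# reloader can spawn a fresh process and re-import the application.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("app:app", host="127.0.0.1", port=8000, reload=True)
```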
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlite/cli/commands/core.py`
Content:
```
1 from __future__ import annotations
2
3 import inspect
4 import multiprocessing
5
6 import click
7 from click import command, option
8 from rich.tree import Tree
9
10 from starlite import Starlite
11 from starlite.cli.utils import StarliteCLIException, StarliteEnv, console, show_app_info
12 from starlite.routes import HTTPRoute, WebSocketRoute
13 from starlite.utils.helpers import unwrap_partial
14
15
16 @command(name="info")
17 def info_command(app: Starlite) -> None:
18 """Show information about the detected Starlite app."""
19
20 show_app_info(app)
21
22
23 @command(name="run")
24 @option("-r", "--reload", help="Reload server on changes", default=False, is_flag=True)
25 @option("-p", "--port", help="Serve under this port", type=int, default=8000, show_default=True)
26 @option(
27 "-wc",
28 "--web-concurrency",
29 help="The number of HTTP workers to launch",
30 type=click.IntRange(min=1, max=multiprocessing.cpu_count() + 1),
31 show_default=True,
32 default=1,
33 )
34 @option("--host", help="Server under this host", default="127.0.0.1", show_default=True)
35 @option("--debug", help="Run app in debug mode", is_flag=True)
36 def run_command(
37 reload: bool,
38 port: int,
39 web_concurrency: int,
40 host: str,
41 debug: bool,
42 env: StarliteEnv,
43 app: Starlite,
44 ) -> None:
45 """Run a Starlite app.
46
47 The app can be either passed as a module path in the form of <module name>.<submodule>:<app instance or factory>,
48 set as an environment variable STARLITE_APP with the same format or automatically discovered from one of these
49 canonical paths: app.py, asgi.py, application.py or app/__init__.py. When auto-discovering application factories,
50 functions with the name ``create_app`` are considered, or functions that are annotated as returning a ``Starlite``
51 instance.
52 """
53
54 try:
55 import uvicorn
56 except ImportError:
57 raise StarliteCLIException("Uvicorn needs to be installed to run an app") # pylint: disable=W0707
58
59 if debug or env.debug:
60 app.debug = True
61
62 show_app_info(app)
63
64 console.rule("[yellow]Starting server process", align="left")
65
66 uvicorn.run(
67 env.app_path,
68 reload=env.reload or reload,
69 host=env.host or host,
70 port=env.port or port,
71 workers=env.web_concurrency or web_concurrency,
72 factory=env.is_app_factory,
73 )
74
75
76 @command(name="routes")
77 def routes_command(app: Starlite) -> None: # pragma: no cover
78 """Display information about the application's routes."""
79
80 tree = Tree("", hide_root=True)
81
82 for route in sorted(app.routes, key=lambda r: r.path):
83 if isinstance(route, HTTPRoute):
84 branch = tree.add(f"[green]{route.path}[/green] (HTTP)")
85 for handler in route.route_handlers:
86 handler_info = [
87 f"[blue]{handler.name or handler.handler_name}[/blue]",
88 ]
89
90 if inspect.iscoroutinefunction(unwrap_partial(handler.fn.value)):
91 handler_info.append("[magenta]async[/magenta]")
92 else:
93 handler_info.append("[yellow]sync[/yellow]")
94
95 handler_info.append(f'[cyan]{", ".join(sorted(handler.http_methods))}[/cyan]')
96
97 if len(handler.paths) > 1:
98 for path in handler.paths:
99 branch.add(" ".join([f"[green]{path}[green]", *handler_info]))
100 else:
101 branch.add(" ".join(handler_info))
102
103 else:
104 if isinstance(route, WebSocketRoute):
105 route_type = "WS"
106 else:
107 route_type = "ASGI"
108 branch = tree.add(f"[green]{route.path}[/green] ({route_type})")
109 branch.add(f"[blue]{route.route_handler.name or route.route_handler.handler_name}[/blue]")
110
111 console.print(tree)
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlite/cli/commands/core.py b/starlite/cli/commands/core.py
--- a/starlite/cli/commands/core.py
+++ b/starlite/cli/commands/core.py
@@ -2,17 +2,31 @@
import inspect
import multiprocessing
+import subprocess
+from typing import Any
import click
from click import command, option
from rich.tree import Tree
from starlite import Starlite
-from starlite.cli.utils import StarliteCLIException, StarliteEnv, console, show_app_info
+from starlite.cli.utils import StarliteEnv, console, show_app_info
from starlite.routes import HTTPRoute, WebSocketRoute
from starlite.utils.helpers import unwrap_partial
+def _convert_uvicorn_args(args: dict[str, Any]) -> list[str]:
+ process_args = []
+ for arg, value in args.items():
+ if isinstance(value, bool):
+ if value:
+ process_args.append(f"--{arg}")
+ else:
+ process_args.append(f"--{arg}={value}")
+
+ return process_args
+
+
@command(name="info")
def info_command(app: Starlite) -> None:
"""Show information about the detected Starlite app."""
@@ -51,11 +65,6 @@
instance.
"""
- try:
- import uvicorn
- except ImportError:
- raise StarliteCLIException("Uvicorn needs to be installed to run an app") # pylint: disable=W0707
-
if debug or env.debug:
app.debug = True
@@ -63,14 +72,18 @@
console.rule("[yellow]Starting server process", align="left")
- uvicorn.run(
- env.app_path,
- reload=env.reload or reload,
- host=env.host or host,
- port=env.port or port,
- workers=env.web_concurrency or web_concurrency,
- factory=env.is_app_factory,
- )
+ # invoke uvicorn in a subprocess to be able to use the --reload flag. see
+ # https://github.com/starlite-api/starlite/issues/1191 and https://github.com/encode/uvicorn/issues/1045
+
+ process_args = {
+ "reload": env.reload or reload,
+ "host": env.host or host,
+ "port": env.port or port,
+ "workers": env.web_concurrency or web_concurrency,
+ "factory": env.is_app_factory,
+ }
+
+ subprocess.run(["uvicorn", env.app_path, *_convert_uvicorn_args(process_args)], check=True)
@command(name="routes")
| {"golden_diff": "diff --git a/starlite/cli/commands/core.py b/starlite/cli/commands/core.py\n--- a/starlite/cli/commands/core.py\n+++ b/starlite/cli/commands/core.py\n@@ -2,17 +2,31 @@\n \n import inspect\n import multiprocessing\n+import subprocess\n+from typing import Any\n \n import click\n from click import command, option\n from rich.tree import Tree\n \n from starlite import Starlite\n-from starlite.cli.utils import StarliteCLIException, StarliteEnv, console, show_app_info\n+from starlite.cli.utils import StarliteEnv, console, show_app_info\n from starlite.routes import HTTPRoute, WebSocketRoute\n from starlite.utils.helpers import unwrap_partial\n \n \n+def _convert_uvicorn_args(args: dict[str, Any]) -> list[str]:\n+ process_args = []\n+ for arg, value in args.items():\n+ if isinstance(value, bool):\n+ if value:\n+ process_args.append(f\"--{arg}\")\n+ else:\n+ process_args.append(f\"--{arg}={value}\")\n+\n+ return process_args\n+\n+\n @command(name=\"info\")\n def info_command(app: Starlite) -> None:\n \"\"\"Show information about the detected Starlite app.\"\"\"\n@@ -51,11 +65,6 @@\n instance.\n \"\"\"\n \n- try:\n- import uvicorn\n- except ImportError:\n- raise StarliteCLIException(\"Uvicorn needs to be installed to run an app\") # pylint: disable=W0707\n-\n if debug or env.debug:\n app.debug = True\n \n@@ -63,14 +72,18 @@\n \n console.rule(\"[yellow]Starting server process\", align=\"left\")\n \n- uvicorn.run(\n- env.app_path,\n- reload=env.reload or reload,\n- host=env.host or host,\n- port=env.port or port,\n- workers=env.web_concurrency or web_concurrency,\n- factory=env.is_app_factory,\n- )\n+ # invoke uvicorn in a subprocess to be able to use the --reload flag. see\n+ # https://github.com/starlite-api/starlite/issues/1191 and https://github.com/encode/uvicorn/issues/1045\n+\n+ process_args = {\n+ \"reload\": env.reload or reload,\n+ \"host\": env.host or host,\n+ \"port\": env.port or port,\n+ \"workers\": env.web_concurrency or web_concurrency,\n+ \"factory\": env.is_app_factory,\n+ }\n+\n+ subprocess.run([\"uvicorn\", env.app_path, *_convert_uvicorn_args(process_args)], check=True)\n \n \n @command(name=\"routes\")\n", "issue": "Bug: Issue with reload\n**Describe the bug**\r\nWhen I start my app using starlite run -r I get the following error\r\n\r\nINFO: Will watch for changes in these directories: ['/Users/alc/starlite-apps/starlite_test']\r\nERROR: [Errno 48] Address already in use\r\n\r\nIf I stop that and run it without -r it works, so I guess my port is not blocked\r\n\r\n**To Reproduce**\r\nSave this as `app.py`\r\n\r\n```python\r\nfrom starlite import get, Starlite\r\n\r\n@get('/')\r\ndef route() -> str:\r\n return 'Hello'\r\n\r\napp = Starlite([route])\r\n```\r\n\r\n**Additional context**\r\n@provinzkraut in discord help suggested a workaround using ` uvicorn <app file>:<application object> --reload` and also linked https://github.com/encode/uvicorn/issues/1045\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport inspect\nimport multiprocessing\n\nimport click\nfrom click import command, option\nfrom rich.tree import Tree\n\nfrom starlite import Starlite\nfrom starlite.cli.utils import StarliteCLIException, StarliteEnv, console, show_app_info\nfrom starlite.routes import HTTPRoute, WebSocketRoute\nfrom starlite.utils.helpers import unwrap_partial\n\n\n@command(name=\"info\")\ndef info_command(app: Starlite) -> None:\n \"\"\"Show information about the detected Starlite app.\"\"\"\n\n 
show_app_info(app)\n\n\n@command(name=\"run\")\n@option(\"-r\", \"--reload\", help=\"Reload server on changes\", default=False, is_flag=True)\n@option(\"-p\", \"--port\", help=\"Serve under this port\", type=int, default=8000, show_default=True)\n@option(\n \"-wc\",\n \"--web-concurrency\",\n help=\"The number of HTTP workers to launch\",\n type=click.IntRange(min=1, max=multiprocessing.cpu_count() + 1),\n show_default=True,\n default=1,\n)\n@option(\"--host\", help=\"Server under this host\", default=\"127.0.0.1\", show_default=True)\n@option(\"--debug\", help=\"Run app in debug mode\", is_flag=True)\ndef run_command(\n reload: bool,\n port: int,\n web_concurrency: int,\n host: str,\n debug: bool,\n env: StarliteEnv,\n app: Starlite,\n) -> None:\n \"\"\"Run a Starlite app.\n\n The app can be either passed as a module path in the form of <module name>.<submodule>:<app instance or factory>,\n set as an environment variable STARLITE_APP with the same format or automatically discovered from one of these\n canonical paths: app.py, asgi.py, application.py or app/__init__.py. When auto-discovering application factories,\n functions with the name ``create_app`` are considered, or functions that are annotated as returning a ``Starlite``\n instance.\n \"\"\"\n\n try:\n import uvicorn\n except ImportError:\n raise StarliteCLIException(\"Uvicorn needs to be installed to run an app\") # pylint: disable=W0707\n\n if debug or env.debug:\n app.debug = True\n\n show_app_info(app)\n\n console.rule(\"[yellow]Starting server process\", align=\"left\")\n\n uvicorn.run(\n env.app_path,\n reload=env.reload or reload,\n host=env.host or host,\n port=env.port or port,\n workers=env.web_concurrency or web_concurrency,\n factory=env.is_app_factory,\n )\n\n\n@command(name=\"routes\")\ndef routes_command(app: Starlite) -> None: # pragma: no cover\n \"\"\"Display information about the application's routes.\"\"\"\n\n tree = Tree(\"\", hide_root=True)\n\n for route in sorted(app.routes, key=lambda r: r.path):\n if isinstance(route, HTTPRoute):\n branch = tree.add(f\"[green]{route.path}[/green] (HTTP)\")\n for handler in route.route_handlers:\n handler_info = [\n f\"[blue]{handler.name or handler.handler_name}[/blue]\",\n ]\n\n if inspect.iscoroutinefunction(unwrap_partial(handler.fn.value)):\n handler_info.append(\"[magenta]async[/magenta]\")\n else:\n handler_info.append(\"[yellow]sync[/yellow]\")\n\n handler_info.append(f'[cyan]{\", \".join(sorted(handler.http_methods))}[/cyan]')\n\n if len(handler.paths) > 1:\n for path in handler.paths:\n branch.add(\" \".join([f\"[green]{path}[green]\", *handler_info]))\n else:\n branch.add(\" \".join(handler_info))\n\n else:\n if isinstance(route, WebSocketRoute):\n route_type = \"WS\"\n else:\n route_type = \"ASGI\"\n branch = tree.add(f\"[green]{route.path}[/green] ({route_type})\")\n branch.add(f\"[blue]{route.route_handler.name or route.route_handler.handler_name}[/blue]\")\n\n console.print(tree)\n", "path": "starlite/cli/commands/core.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport inspect\nimport multiprocessing\nimport subprocess\nfrom typing import Any\n\nimport click\nfrom click import command, option\nfrom rich.tree import Tree\n\nfrom starlite import Starlite\nfrom starlite.cli.utils import StarliteEnv, console, show_app_info\nfrom starlite.routes import HTTPRoute, WebSocketRoute\nfrom starlite.utils.helpers import unwrap_partial\n\n\ndef _convert_uvicorn_args(args: dict[str, Any]) -> list[str]:\n process_args = []\n for arg, value in 
args.items():\n if isinstance(value, bool):\n if value:\n process_args.append(f\"--{arg}\")\n else:\n process_args.append(f\"--{arg}={value}\")\n\n return process_args\n\n\n@command(name=\"info\")\ndef info_command(app: Starlite) -> None:\n \"\"\"Show information about the detected Starlite app.\"\"\"\n\n show_app_info(app)\n\n\n@command(name=\"run\")\n@option(\"-r\", \"--reload\", help=\"Reload server on changes\", default=False, is_flag=True)\n@option(\"-p\", \"--port\", help=\"Serve under this port\", type=int, default=8000, show_default=True)\n@option(\n \"-wc\",\n \"--web-concurrency\",\n help=\"The number of HTTP workers to launch\",\n type=click.IntRange(min=1, max=multiprocessing.cpu_count() + 1),\n show_default=True,\n default=1,\n)\n@option(\"--host\", help=\"Server under this host\", default=\"127.0.0.1\", show_default=True)\n@option(\"--debug\", help=\"Run app in debug mode\", is_flag=True)\ndef run_command(\n reload: bool,\n port: int,\n web_concurrency: int,\n host: str,\n debug: bool,\n env: StarliteEnv,\n app: Starlite,\n) -> None:\n \"\"\"Run a Starlite app.\n\n The app can be either passed as a module path in the form of <module name>.<submodule>:<app instance or factory>,\n set as an environment variable STARLITE_APP with the same format or automatically discovered from one of these\n canonical paths: app.py, asgi.py, application.py or app/__init__.py. When auto-discovering application factories,\n functions with the name ``create_app`` are considered, or functions that are annotated as returning a ``Starlite``\n instance.\n \"\"\"\n\n if debug or env.debug:\n app.debug = True\n\n show_app_info(app)\n\n console.rule(\"[yellow]Starting server process\", align=\"left\")\n\n # invoke uvicorn in a subprocess to be able to use the --reload flag. see\n # https://github.com/starlite-api/starlite/issues/1191 and https://github.com/encode/uvicorn/issues/1045\n\n process_args = {\n \"reload\": env.reload or reload,\n \"host\": env.host or host,\n \"port\": env.port or port,\n \"workers\": env.web_concurrency or web_concurrency,\n \"factory\": env.is_app_factory,\n }\n\n subprocess.run([\"uvicorn\", env.app_path, *_convert_uvicorn_args(process_args)], check=True)\n\n\n@command(name=\"routes\")\ndef routes_command(app: Starlite) -> None: # pragma: no cover\n \"\"\"Display information about the application's routes.\"\"\"\n\n tree = Tree(\"\", hide_root=True)\n\n for route in sorted(app.routes, key=lambda r: r.path):\n if isinstance(route, HTTPRoute):\n branch = tree.add(f\"[green]{route.path}[/green] (HTTP)\")\n for handler in route.route_handlers:\n handler_info = [\n f\"[blue]{handler.name or handler.handler_name}[/blue]\",\n ]\n\n if inspect.iscoroutinefunction(unwrap_partial(handler.fn.value)):\n handler_info.append(\"[magenta]async[/magenta]\")\n else:\n handler_info.append(\"[yellow]sync[/yellow]\")\n\n handler_info.append(f'[cyan]{\", \".join(sorted(handler.http_methods))}[/cyan]')\n\n if len(handler.paths) > 1:\n for path in handler.paths:\n branch.add(\" \".join([f\"[green]{path}[green]\", *handler_info]))\n else:\n branch.add(\" \".join(handler_info))\n\n else:\n if isinstance(route, WebSocketRoute):\n route_type = \"WS\"\n else:\n route_type = \"ASGI\"\n branch = tree.add(f\"[green]{route.path}[/green] ({route_type})\")\n branch.add(f\"[blue]{route.route_handler.name or route.route_handler.handler_name}[/blue]\")\n\n console.print(tree)\n", "path": "starlite/cli/commands/core.py"}]} | 1,562 | 588 |
gh_patches_debug_38540 | rasdani/github-patches | git_diff | adap__flower-458 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve docstring for `start_keras_client`
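A rough sketch of what the improved docstring might cover — argument semantics, defaults, and return value (wording and exact fields here are illustrative, not final):

```python
# Sketch only -- in src/py/flwr/client/app.py, where KerasClient is already imported.
def start_keras_client(server_address: str, client: KerasClient) -> None:
    """Start a Flower KerasClient which connects to a gRPC server.

    Arguments:
        server_address: str. The address of the Flower server, e.g.
            "[::]:8080" if the server runs on the same machine on port 8080.
        client: flwr.client.KerasClient. An implementation of the abstract
            base class `flwr.client.KerasClient`.

    Returns:
        None.
    """
```

The sibling `start_client` would benefit from the same treatment.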
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/py/flwr/common/__init__.py`
Content:
```
1 # Copyright 2020 Adap GmbH. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Flower utilities shared between server and client."""
16
17
18 from .parameter import bytes_to_ndarray as bytes_to_ndarray
19 from .parameter import ndarray_to_bytes as ndarray_to_bytes
20 from .parameter import parameters_to_weights as parameters_to_weights
21 from .parameter import weights_to_parameters as weights_to_parameters
22 from .typing import Disconnect as Disconnect
23 from .typing import EvaluateIns as EvaluateIns
24 from .typing import EvaluateRes as EvaluateRes
25 from .typing import FitIns as FitIns
26 from .typing import FitRes as FitRes
27 from .typing import Parameters as Parameters
28 from .typing import ParametersRes as ParametersRes
29 from .typing import Reconnect as Reconnect
30 from .typing import Weights as Weights
31
32 GRPC_MAX_MESSAGE_LENGTH: int = 536870912 # == 512 * 1024 * 1024
33
34 __all__ = [
35 "bytes_to_ndarray",
36 "Disconnect",
37 "EvaluateIns",
38 "EvaluateRes",
39 "FitIns",
40 "FitRes",
41 "GRPC_MAX_MESSAGE_LENGTH",
42 "ndarray_to_bytes",
43 "Parameters",
44 "parameters_to_weights",
45 "ParametersRes",
46 "Reconnect",
47 "Weights",
48 "weights_to_parameters",
49 ]
50
```
Path: `src/py/flwr/client/app.py`
Content:
```
1 # Copyright 2020 Adap GmbH. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Flower client app."""
16
17
18 import time
19 from logging import INFO
20
21 from flwr.common import GRPC_MAX_MESSAGE_LENGTH
22 from flwr.common.logger import log
23
24 from .client import Client
25 from .grpc_client.connection import insecure_grpc_connection
26 from .grpc_client.message_handler import handle
27 from .keras_client import KerasClient, KerasClientWrapper
28
29
30 def start_client(
31 server_address: str,
32 client: Client,
33 grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,
34 ) -> None:
35 """Start a Flower Client which connects to a gRPC server."""
36 while True:
37 sleep_duration: int = 0
38 with insecure_grpc_connection(
39 server_address, max_message_length=grpc_max_message_length
40 ) as conn:
41 receive, send = conn
42 log(INFO, "Opened (insecure) gRPC connection")
43
44 while True:
45 server_message = receive()
46 client_message, sleep_duration, keep_going = handle(
47 client, server_message
48 )
49 send(client_message)
50 if not keep_going:
51 break
52 if sleep_duration == 0:
53 log(INFO, "Disconnect and shut down")
54 break
55 # Sleep and reconnect afterwards
56 log(
57 INFO,
58 "Disconnect, then re-establish connection after %s second(s)",
59 sleep_duration,
60 )
61 time.sleep(sleep_duration)
62
63
64 def start_keras_client(server_address: str, client: KerasClient) -> None:
65 """Start a Flower KerasClient which connects to a gRPC server."""
66
67 # Wrap the Keras client
68 flower_client = KerasClientWrapper(client)
69
70 # Start
71 start_client(server_address, flower_client)
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/py/flwr/client/app.py b/src/py/flwr/client/app.py
--- a/src/py/flwr/client/app.py
+++ b/src/py/flwr/client/app.py
@@ -32,7 +32,25 @@
client: Client,
grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,
) -> None:
- """Start a Flower Client which connects to a gRPC server."""
+ """Start a Flower Client which connects to a gRPC server.
+
+ Arguments:
+ server_address: str. The IPv6 address of the server. If the Flower
+ server runs on the same machine on port 8080, then `server_address`
+ would be `"[::]:8080"`.
+ client: flwr.client.Client. An implementation of the abstract base
+ class `flwr.client.Client`.
+ grpc_max_message_length: int (default: 536_870_912, this equals 512MB).
+ The maximum length of gRPC messages that can be exchanged with the
+ Flower server. The default should be sufficient for most models.
+ Users who train very large models might need to increase this
+ value. Note that the Flower server needs to started with the same
+ value (see `flwr.server.start_server`), otherwise it will not know
+ about the increased limit and block larger messages.
+
+ Returns:
+ None.
+ """
while True:
sleep_duration: int = 0
with insecure_grpc_connection(
@@ -61,11 +79,37 @@
time.sleep(sleep_duration)
-def start_keras_client(server_address: str, client: KerasClient) -> None:
- """Start a Flower KerasClient which connects to a gRPC server."""
+def start_keras_client(
+ server_address: str,
+ client: KerasClient,
+ grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,
+) -> None:
+ """Start a Flower KerasClient which connects to a gRPC server.
+
+ Arguments:
+ server_address: str. The IPv6 address of the server. If the Flower
+ server runs on the same machine on port 8080, then `server_address`
+ would be `"[::]:8080"`.
+ client: flwr.client.KerasClient. An implementation of the abstract base
+ class `flwr.client.KerasClient`.
+ grpc_max_message_length: int (default: 536_870_912, this equals 512MB).
+ The maximum length of gRPC messages that can be exchanged with the
+ Flower server. The default should be sufficient for most models.
+ Users who train very large models might need to increase this
+ value. Note that the Flower server needs to started with the same
+ value (see `flwr.server.start_server`), otherwise it will not know
+ about the increased limit and block larger messages.
+
+ Returns:
+ None.
+ """
# Wrap the Keras client
flower_client = KerasClientWrapper(client)
# Start
- start_client(server_address, flower_client)
+ start_client(
+ server_address=server_address,
+ client=flower_client,
+ grpc_max_message_length=grpc_max_message_length,
+ )
diff --git a/src/py/flwr/common/__init__.py b/src/py/flwr/common/__init__.py
--- a/src/py/flwr/common/__init__.py
+++ b/src/py/flwr/common/__init__.py
@@ -29,7 +29,7 @@
from .typing import Reconnect as Reconnect
from .typing import Weights as Weights
-GRPC_MAX_MESSAGE_LENGTH: int = 536870912 # == 512 * 1024 * 1024
+GRPC_MAX_MESSAGE_LENGTH: int = 536_870_912 # == 512 * 1024 * 1024
__all__ = [
"bytes_to_ndarray",
| {"golden_diff": "diff --git a/src/py/flwr/client/app.py b/src/py/flwr/client/app.py\n--- a/src/py/flwr/client/app.py\n+++ b/src/py/flwr/client/app.py\n@@ -32,7 +32,25 @@\n client: Client,\n grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n ) -> None:\n- \"\"\"Start a Flower Client which connects to a gRPC server.\"\"\"\n+ \"\"\"Start a Flower Client which connects to a gRPC server.\n+\n+ Arguments:\n+ server_address: str. The IPv6 address of the server. If the Flower\n+ server runs on the same machine on port 8080, then `server_address`\n+ would be `\"[::]:8080\"`.\n+ client: flwr.client.Client. An implementation of the abstract base\n+ class `flwr.client.Client`.\n+ grpc_max_message_length: int (default: 536_870_912, this equals 512MB).\n+ The maximum length of gRPC messages that can be exchanged with the\n+ Flower server. The default should be sufficient for most models.\n+ Users who train very large models might need to increase this\n+ value. Note that the Flower server needs to started with the same\n+ value (see `flwr.server.start_server`), otherwise it will not know\n+ about the increased limit and block larger messages.\n+\n+ Returns:\n+ None.\n+ \"\"\"\n while True:\n sleep_duration: int = 0\n with insecure_grpc_connection(\n@@ -61,11 +79,37 @@\n time.sleep(sleep_duration)\n \n \n-def start_keras_client(server_address: str, client: KerasClient) -> None:\n- \"\"\"Start a Flower KerasClient which connects to a gRPC server.\"\"\"\n+def start_keras_client(\n+ server_address: str,\n+ client: KerasClient,\n+ grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n+) -> None:\n+ \"\"\"Start a Flower KerasClient which connects to a gRPC server.\n+\n+ Arguments:\n+ server_address: str. The IPv6 address of the server. If the Flower\n+ server runs on the same machine on port 8080, then `server_address`\n+ would be `\"[::]:8080\"`.\n+ client: flwr.client.KerasClient. An implementation of the abstract base\n+ class `flwr.client.KerasClient`.\n+ grpc_max_message_length: int (default: 536_870_912, this equals 512MB).\n+ The maximum length of gRPC messages that can be exchanged with the\n+ Flower server. The default should be sufficient for most models.\n+ Users who train very large models might need to increase this\n+ value. Note that the Flower server needs to started with the same\n+ value (see `flwr.server.start_server`), otherwise it will not know\n+ about the increased limit and block larger messages.\n+\n+ Returns:\n+ None.\n+ \"\"\"\n \n # Wrap the Keras client\n flower_client = KerasClientWrapper(client)\n \n # Start\n- start_client(server_address, flower_client)\n+ start_client(\n+ server_address=server_address,\n+ client=flower_client,\n+ grpc_max_message_length=grpc_max_message_length,\n+ )\ndiff --git a/src/py/flwr/common/__init__.py b/src/py/flwr/common/__init__.py\n--- a/src/py/flwr/common/__init__.py\n+++ b/src/py/flwr/common/__init__.py\n@@ -29,7 +29,7 @@\n from .typing import Reconnect as Reconnect\n from .typing import Weights as Weights\n \n-GRPC_MAX_MESSAGE_LENGTH: int = 536870912 # == 512 * 1024 * 1024\n+GRPC_MAX_MESSAGE_LENGTH: int = 536_870_912 # == 512 * 1024 * 1024\n \n __all__ = [\n \"bytes_to_ndarray\",\n", "issue": "Improve docstring for `start_keras_client`\n\n", "before_files": [{"content": "# Copyright 2020 Adap GmbH. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Flower utilities shared between server and client.\"\"\"\n\n\nfrom .parameter import bytes_to_ndarray as bytes_to_ndarray\nfrom .parameter import ndarray_to_bytes as ndarray_to_bytes\nfrom .parameter import parameters_to_weights as parameters_to_weights\nfrom .parameter import weights_to_parameters as weights_to_parameters\nfrom .typing import Disconnect as Disconnect\nfrom .typing import EvaluateIns as EvaluateIns\nfrom .typing import EvaluateRes as EvaluateRes\nfrom .typing import FitIns as FitIns\nfrom .typing import FitRes as FitRes\nfrom .typing import Parameters as Parameters\nfrom .typing import ParametersRes as ParametersRes\nfrom .typing import Reconnect as Reconnect\nfrom .typing import Weights as Weights\n\nGRPC_MAX_MESSAGE_LENGTH: int = 536870912 # == 512 * 1024 * 1024\n\n__all__ = [\n \"bytes_to_ndarray\",\n \"Disconnect\",\n \"EvaluateIns\",\n \"EvaluateRes\",\n \"FitIns\",\n \"FitRes\",\n \"GRPC_MAX_MESSAGE_LENGTH\",\n \"ndarray_to_bytes\",\n \"Parameters\",\n \"parameters_to_weights\",\n \"ParametersRes\",\n \"Reconnect\",\n \"Weights\",\n \"weights_to_parameters\",\n]\n", "path": "src/py/flwr/common/__init__.py"}, {"content": "# Copyright 2020 Adap GmbH. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Flower client app.\"\"\"\n\n\nimport time\nfrom logging import INFO\n\nfrom flwr.common import GRPC_MAX_MESSAGE_LENGTH\nfrom flwr.common.logger import log\n\nfrom .client import Client\nfrom .grpc_client.connection import insecure_grpc_connection\nfrom .grpc_client.message_handler import handle\nfrom .keras_client import KerasClient, KerasClientWrapper\n\n\ndef start_client(\n server_address: str,\n client: Client,\n grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n) -> None:\n \"\"\"Start a Flower Client which connects to a gRPC server.\"\"\"\n while True:\n sleep_duration: int = 0\n with insecure_grpc_connection(\n server_address, max_message_length=grpc_max_message_length\n ) as conn:\n receive, send = conn\n log(INFO, \"Opened (insecure) gRPC connection\")\n\n while True:\n server_message = receive()\n client_message, sleep_duration, keep_going = handle(\n client, server_message\n )\n send(client_message)\n if not keep_going:\n break\n if sleep_duration == 0:\n log(INFO, \"Disconnect and shut down\")\n break\n # Sleep and reconnect afterwards\n log(\n INFO,\n \"Disconnect, then re-establish connection after %s second(s)\",\n sleep_duration,\n )\n time.sleep(sleep_duration)\n\n\ndef start_keras_client(server_address: str, client: KerasClient) -> None:\n \"\"\"Start a Flower KerasClient which connects to a gRPC server.\"\"\"\n\n # Wrap the Keras client\n flower_client = KerasClientWrapper(client)\n\n # Start\n start_client(server_address, flower_client)\n", "path": "src/py/flwr/client/app.py"}], "after_files": [{"content": "# Copyright 2020 Adap GmbH. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Flower utilities shared between server and client.\"\"\"\n\n\nfrom .parameter import bytes_to_ndarray as bytes_to_ndarray\nfrom .parameter import ndarray_to_bytes as ndarray_to_bytes\nfrom .parameter import parameters_to_weights as parameters_to_weights\nfrom .parameter import weights_to_parameters as weights_to_parameters\nfrom .typing import Disconnect as Disconnect\nfrom .typing import EvaluateIns as EvaluateIns\nfrom .typing import EvaluateRes as EvaluateRes\nfrom .typing import FitIns as FitIns\nfrom .typing import FitRes as FitRes\nfrom .typing import Parameters as Parameters\nfrom .typing import ParametersRes as ParametersRes\nfrom .typing import Reconnect as Reconnect\nfrom .typing import Weights as Weights\n\nGRPC_MAX_MESSAGE_LENGTH: int = 536_870_912 # == 512 * 1024 * 1024\n\n__all__ = [\n \"bytes_to_ndarray\",\n \"Disconnect\",\n \"EvaluateIns\",\n \"EvaluateRes\",\n \"FitIns\",\n \"FitRes\",\n \"GRPC_MAX_MESSAGE_LENGTH\",\n \"ndarray_to_bytes\",\n \"Parameters\",\n \"parameters_to_weights\",\n \"ParametersRes\",\n \"Reconnect\",\n \"Weights\",\n \"weights_to_parameters\",\n]\n", "path": "src/py/flwr/common/__init__.py"}, {"content": "# Copyright 2020 Adap GmbH. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Flower client app.\"\"\"\n\n\nimport time\nfrom logging import INFO\n\nfrom flwr.common import GRPC_MAX_MESSAGE_LENGTH\nfrom flwr.common.logger import log\n\nfrom .client import Client\nfrom .grpc_client.connection import insecure_grpc_connection\nfrom .grpc_client.message_handler import handle\nfrom .keras_client import KerasClient, KerasClientWrapper\n\n\ndef start_client(\n server_address: str,\n client: Client,\n grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n) -> None:\n \"\"\"Start a Flower Client which connects to a gRPC server.\n\n Arguments:\n server_address: str. The IPv6 address of the server. If the Flower\n server runs on the same machine on port 8080, then `server_address`\n would be `\"[::]:8080\"`.\n client: flwr.client.Client. An implementation of the abstract base\n class `flwr.client.Client`.\n grpc_max_message_length: int (default: 536_870_912, this equals 512MB).\n The maximum length of gRPC messages that can be exchanged with the\n Flower server. 
The default should be sufficient for most models.\n Users who train very large models might need to increase this\n value. Note that the Flower server needs to started with the same\n value (see `flwr.server.start_server`), otherwise it will not know\n about the increased limit and block larger messages.\n\n Returns:\n None.\n \"\"\"\n while True:\n sleep_duration: int = 0\n with insecure_grpc_connection(\n server_address, max_message_length=grpc_max_message_length\n ) as conn:\n receive, send = conn\n log(INFO, \"Opened (insecure) gRPC connection\")\n\n while True:\n server_message = receive()\n client_message, sleep_duration, keep_going = handle(\n client, server_message\n )\n send(client_message)\n if not keep_going:\n break\n if sleep_duration == 0:\n log(INFO, \"Disconnect and shut down\")\n break\n # Sleep and reconnect afterwards\n log(\n INFO,\n \"Disconnect, then re-establish connection after %s second(s)\",\n sleep_duration,\n )\n time.sleep(sleep_duration)\n\n\ndef start_keras_client(\n server_address: str,\n client: KerasClient,\n grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n) -> None:\n \"\"\"Start a Flower KerasClient which connects to a gRPC server.\n\n Arguments:\n server_address: str. The IPv6 address of the server. If the Flower\n server runs on the same machine on port 8080, then `server_address`\n would be `\"[::]:8080\"`.\n client: flwr.client.KerasClient. An implementation of the abstract base\n class `flwr.client.KerasClient`.\n grpc_max_message_length: int (default: 536_870_912, this equals 512MB).\n The maximum length of gRPC messages that can be exchanged with the\n Flower server. The default should be sufficient for most models.\n Users who train very large models might need to increase this\n value. Note that the Flower server needs to started with the same\n value (see `flwr.server.start_server`), otherwise it will not know\n about the increased limit and block larger messages.\n\n Returns:\n None.\n \"\"\"\n\n # Wrap the Keras client\n flower_client = KerasClientWrapper(client)\n\n # Start\n start_client(\n server_address=server_address,\n client=flower_client,\n grpc_max_message_length=grpc_max_message_length,\n )\n", "path": "src/py/flwr/client/app.py"}]} | 1,429 | 923 |
gh_patches_debug_6417 | rasdani/github-patches | git_diff | kornia__kornia-694 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Broken link "InvDepthSmoothnessLoss"
## 📚 Documentation
In the documentation of this function (https://kornia.readthedocs.io/en/latest/losses.html#kornia.losses.inverse_depth_smoothness_loss), the link in this sentence is broken:
```See InvDepthSmoothnessLoss for details.```
The error is due to an incorrect class name; the correct name is `InverseDepthSmoothnessLoss`.
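The fix is then a one-identifier change in the docstring of `inverse_depth_smoothness_loss`; the corrected lines would read:

```python
r"""Computes image-aware inverse depth smoothness loss.

See :class:`~kornia.losses.InverseDepthSmoothnessLoss` for details.
"""
```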
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/losses/depth_smooth.py`
Content:
```
1 import torch
2 import torch.nn as nn
3 import torch.nn.functional as F
4
5 # Based on
6 # https://github.com/tensorflow/models/blob/master/research/struct2depth/model.py#L625-L641
7
8
9 def _gradient_x(img: torch.Tensor) -> torch.Tensor:
10 assert len(img.shape) == 4, img.shape
11 return img[:, :, :, :-1] - img[:, :, :, 1:]
12
13
14 def _gradient_y(img: torch.Tensor) -> torch.Tensor:
15 assert len(img.shape) == 4, img.shape
16 return img[:, :, :-1, :] - img[:, :, 1:, :]
17
18
19 def inverse_depth_smoothness_loss(
20 idepth: torch.Tensor,
21 image: torch.Tensor) -> torch.Tensor:
22 r"""Computes image-aware inverse depth smoothness loss.
23
24 See :class:`~kornia.losses.InvDepthSmoothnessLoss` for details.
25 """
26 if not torch.is_tensor(idepth):
27 raise TypeError("Input idepth type is not a torch.Tensor. Got {}"
28 .format(type(idepth)))
29
30 if not torch.is_tensor(image):
31 raise TypeError("Input image type is not a torch.Tensor. Got {}"
32 .format(type(image)))
33
34 if not len(idepth.shape) == 4:
35 raise ValueError("Invalid idepth shape, we expect BxCxHxW. Got: {}"
36 .format(idepth.shape))
37
38 if not len(image.shape) == 4:
39 raise ValueError("Invalid image shape, we expect BxCxHxW. Got: {}"
40 .format(image.shape))
41
42 if not idepth.shape[-2:] == image.shape[-2:]:
43 raise ValueError("idepth and image shapes must be the same. Got: {} and {}"
44 .format(idepth.shape, image.shape))
45
46 if not idepth.device == image.device:
47 raise ValueError(
48 "idepth and image must be in the same device. Got: {} and {}" .format(
49 idepth.device, image.device))
50
51 if not idepth.dtype == image.dtype:
52 raise ValueError(
53 "idepth and image must be in the same dtype. Got: {} and {}" .format(
54 idepth.dtype, image.dtype))
55
56 # compute the gradients
57 idepth_dx: torch.Tensor = _gradient_x(idepth)
58 idepth_dy: torch.Tensor = _gradient_y(idepth)
59 image_dx: torch.Tensor = _gradient_x(image)
60 image_dy: torch.Tensor = _gradient_y(image)
61
62 # compute image weights
63 weights_x: torch.Tensor = torch.exp(
64 -torch.mean(torch.abs(image_dx), dim=1, keepdim=True))
65 weights_y: torch.Tensor = torch.exp(
66 -torch.mean(torch.abs(image_dy), dim=1, keepdim=True))
67
68 # apply image weights to depth
69 smoothness_x: torch.Tensor = torch.abs(idepth_dx * weights_x)
70 smoothness_y: torch.Tensor = torch.abs(idepth_dy * weights_y)
71 return torch.mean(smoothness_x) + torch.mean(smoothness_y)
72
73
74 class InverseDepthSmoothnessLoss(nn.Module):
75 r"""Criterion that computes image-aware inverse depth smoothness loss.
76
77 .. math::
78
79 \text{loss} = \left | \partial_x d_{ij} \right | e^{-\left \|
80 \partial_x I_{ij} \right \|} + \left |
81 \partial_y d_{ij} \right | e^{-\left \| \partial_y I_{ij} \right \|}
82
83
84 Shape:
85 - Inverse Depth: :math:`(N, 1, H, W)`
86 - Image: :math:`(N, 3, H, W)`
87 - Output: scalar
88
89 Examples::
90
91 >>> idepth = torch.rand(1, 1, 4, 5)
92 >>> image = torch.rand(1, 3, 4, 5)
93 >>> smooth = kornia.losses.DepthSmoothnessLoss()
94 >>> loss = smooth(idepth, image)
95 """
96
97 def __init__(self) -> None:
98 super(InverseDepthSmoothnessLoss, self).__init__()
99
100 def forward(self, idepth: torch.Tensor, image: torch.Tensor) -> torch.Tensor: # type:ignore
101 return inverse_depth_smoothness_loss(idepth, image)
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kornia/losses/depth_smooth.py b/kornia/losses/depth_smooth.py
--- a/kornia/losses/depth_smooth.py
+++ b/kornia/losses/depth_smooth.py
@@ -21,7 +21,7 @@
image: torch.Tensor) -> torch.Tensor:
r"""Computes image-aware inverse depth smoothness loss.
- See :class:`~kornia.losses.InvDepthSmoothnessLoss` for details.
+ See :class:`~kornia.losses.InverseDepthSmoothnessLoss` for details.
"""
if not torch.is_tensor(idepth):
raise TypeError("Input idepth type is not a torch.Tensor. Got {}"
| {"golden_diff": "diff --git a/kornia/losses/depth_smooth.py b/kornia/losses/depth_smooth.py\n--- a/kornia/losses/depth_smooth.py\n+++ b/kornia/losses/depth_smooth.py\n@@ -21,7 +21,7 @@\n image: torch.Tensor) -> torch.Tensor:\n r\"\"\"Computes image-aware inverse depth smoothness loss.\n \n- See :class:`~kornia.losses.InvDepthSmoothnessLoss` for details.\n+ See :class:`~kornia.losses.InverseDepthSmoothnessLoss` for details.\n \"\"\"\n if not torch.is_tensor(idepth):\n raise TypeError(\"Input idepth type is not a torch.Tensor. Got {}\"\n", "issue": "Broken link \"InvDepthSmoothnessLoss\"\n## \ud83d\udcda Documentation\r\n\r\nIn the documentation of this function https://kornia.readthedocs.io/en/latest/losses.html#kornia.losses.inverse_depth_smoothness_loss, the link is broken.\r\n\r\n```See InvDepthSmoothnessLoss for details.```\r\n\r\nThe error is due to an incorrect name. The correct name should be `InverseDepthSmoothnessLoss`.\r\n\r\n\n", "before_files": [{"content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n# Based on\n# https://github.com/tensorflow/models/blob/master/research/struct2depth/model.py#L625-L641\n\n\ndef _gradient_x(img: torch.Tensor) -> torch.Tensor:\n assert len(img.shape) == 4, img.shape\n return img[:, :, :, :-1] - img[:, :, :, 1:]\n\n\ndef _gradient_y(img: torch.Tensor) -> torch.Tensor:\n assert len(img.shape) == 4, img.shape\n return img[:, :, :-1, :] - img[:, :, 1:, :]\n\n\ndef inverse_depth_smoothness_loss(\n idepth: torch.Tensor,\n image: torch.Tensor) -> torch.Tensor:\n r\"\"\"Computes image-aware inverse depth smoothness loss.\n\n See :class:`~kornia.losses.InvDepthSmoothnessLoss` for details.\n \"\"\"\n if not torch.is_tensor(idepth):\n raise TypeError(\"Input idepth type is not a torch.Tensor. Got {}\"\n .format(type(idepth)))\n\n if not torch.is_tensor(image):\n raise TypeError(\"Input image type is not a torch.Tensor. Got {}\"\n .format(type(image)))\n\n if not len(idepth.shape) == 4:\n raise ValueError(\"Invalid idepth shape, we expect BxCxHxW. Got: {}\"\n .format(idepth.shape))\n\n if not len(image.shape) == 4:\n raise ValueError(\"Invalid image shape, we expect BxCxHxW. Got: {}\"\n .format(image.shape))\n\n if not idepth.shape[-2:] == image.shape[-2:]:\n raise ValueError(\"idepth and image shapes must be the same. Got: {} and {}\"\n .format(idepth.shape, image.shape))\n\n if not idepth.device == image.device:\n raise ValueError(\n \"idepth and image must be in the same device. Got: {} and {}\" .format(\n idepth.device, image.device))\n\n if not idepth.dtype == image.dtype:\n raise ValueError(\n \"idepth and image must be in the same dtype. Got: {} and {}\" .format(\n idepth.dtype, image.dtype))\n\n # compute the gradients\n idepth_dx: torch.Tensor = _gradient_x(idepth)\n idepth_dy: torch.Tensor = _gradient_y(idepth)\n image_dx: torch.Tensor = _gradient_x(image)\n image_dy: torch.Tensor = _gradient_y(image)\n\n # compute image weights\n weights_x: torch.Tensor = torch.exp(\n -torch.mean(torch.abs(image_dx), dim=1, keepdim=True))\n weights_y: torch.Tensor = torch.exp(\n -torch.mean(torch.abs(image_dy), dim=1, keepdim=True))\n\n # apply image weights to depth\n smoothness_x: torch.Tensor = torch.abs(idepth_dx * weights_x)\n smoothness_y: torch.Tensor = torch.abs(idepth_dy * weights_y)\n return torch.mean(smoothness_x) + torch.mean(smoothness_y)\n\n\nclass InverseDepthSmoothnessLoss(nn.Module):\n r\"\"\"Criterion that computes image-aware inverse depth smoothness loss.\n\n .. 
math::\n\n \\text{loss} = \\left | \\partial_x d_{ij} \\right | e^{-\\left \\|\n \\partial_x I_{ij} \\right \\|} + \\left |\n \\partial_y d_{ij} \\right | e^{-\\left \\| \\partial_y I_{ij} \\right \\|}\n\n\n Shape:\n - Inverse Depth: :math:`(N, 1, H, W)`\n - Image: :math:`(N, 3, H, W)`\n - Output: scalar\n\n Examples::\n\n >>> idepth = torch.rand(1, 1, 4, 5)\n >>> image = torch.rand(1, 3, 4, 5)\n >>> smooth = kornia.losses.DepthSmoothnessLoss()\n >>> loss = smooth(idepth, image)\n \"\"\"\n\n def __init__(self) -> None:\n super(InverseDepthSmoothnessLoss, self).__init__()\n\n def forward(self, idepth: torch.Tensor, image: torch.Tensor) -> torch.Tensor: # type:ignore\n return inverse_depth_smoothness_loss(idepth, image)\n", "path": "kornia/losses/depth_smooth.py"}], "after_files": [{"content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n# Based on\n# https://github.com/tensorflow/models/blob/master/research/struct2depth/model.py#L625-L641\n\n\ndef _gradient_x(img: torch.Tensor) -> torch.Tensor:\n assert len(img.shape) == 4, img.shape\n return img[:, :, :, :-1] - img[:, :, :, 1:]\n\n\ndef _gradient_y(img: torch.Tensor) -> torch.Tensor:\n assert len(img.shape) == 4, img.shape\n return img[:, :, :-1, :] - img[:, :, 1:, :]\n\n\ndef inverse_depth_smoothness_loss(\n idepth: torch.Tensor,\n image: torch.Tensor) -> torch.Tensor:\n r\"\"\"Computes image-aware inverse depth smoothness loss.\n\n See :class:`~kornia.losses.InverseDepthSmoothnessLoss` for details.\n \"\"\"\n if not torch.is_tensor(idepth):\n raise TypeError(\"Input idepth type is not a torch.Tensor. Got {}\"\n .format(type(idepth)))\n\n if not torch.is_tensor(image):\n raise TypeError(\"Input image type is not a torch.Tensor. Got {}\"\n .format(type(image)))\n\n if not len(idepth.shape) == 4:\n raise ValueError(\"Invalid idepth shape, we expect BxCxHxW. Got: {}\"\n .format(idepth.shape))\n\n if not len(image.shape) == 4:\n raise ValueError(\"Invalid image shape, we expect BxCxHxW. Got: {}\"\n .format(image.shape))\n\n if not idepth.shape[-2:] == image.shape[-2:]:\n raise ValueError(\"idepth and image shapes must be the same. Got: {} and {}\"\n .format(idepth.shape, image.shape))\n\n if not idepth.device == image.device:\n raise ValueError(\n \"idepth and image must be in the same device. Got: {} and {}\" .format(\n idepth.device, image.device))\n\n if not idepth.dtype == image.dtype:\n raise ValueError(\n \"idepth and image must be in the same dtype. Got: {} and {}\" .format(\n idepth.dtype, image.dtype))\n\n # compute the gradients\n idepth_dx: torch.Tensor = _gradient_x(idepth)\n idepth_dy: torch.Tensor = _gradient_y(idepth)\n image_dx: torch.Tensor = _gradient_x(image)\n image_dy: torch.Tensor = _gradient_y(image)\n\n # compute image weights\n weights_x: torch.Tensor = torch.exp(\n -torch.mean(torch.abs(image_dx), dim=1, keepdim=True))\n weights_y: torch.Tensor = torch.exp(\n -torch.mean(torch.abs(image_dy), dim=1, keepdim=True))\n\n # apply image weights to depth\n smoothness_x: torch.Tensor = torch.abs(idepth_dx * weights_x)\n smoothness_y: torch.Tensor = torch.abs(idepth_dy * weights_y)\n return torch.mean(smoothness_x) + torch.mean(smoothness_y)\n\n\nclass InverseDepthSmoothnessLoss(nn.Module):\n r\"\"\"Criterion that computes image-aware inverse depth smoothness loss.\n\n .. 
math::\n\n \\text{loss} = \\left | \\partial_x d_{ij} \\right | e^{-\\left \\|\n \\partial_x I_{ij} \\right \\|} + \\left |\n \\partial_y d_{ij} \\right | e^{-\\left \\| \\partial_y I_{ij} \\right \\|}\n\n\n Shape:\n - Inverse Depth: :math:`(N, 1, H, W)`\n - Image: :math:`(N, 3, H, W)`\n - Output: scalar\n\n Examples::\n\n >>> idepth = torch.rand(1, 1, 4, 5)\n >>> image = torch.rand(1, 3, 4, 5)\n >>> smooth = kornia.losses.DepthSmoothnessLoss()\n >>> loss = smooth(idepth, image)\n \"\"\"\n\n def __init__(self) -> None:\n super(InverseDepthSmoothnessLoss, self).__init__()\n\n def forward(self, idepth: torch.Tensor, image: torch.Tensor) -> torch.Tensor: # type:ignore\n return inverse_depth_smoothness_loss(idepth, image)\n", "path": "kornia/losses/depth_smooth.py"}]} | 1,503 | 158 |
gh_patches_debug_2038 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-1112 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
hostname parser doesn't handle sos_commands/general/hostname
`sos_commands/general/hostname` contains the hostname with a trailing newline, which results in a file with two lines. The hostname parser specifically checks for exactly one line, but doesn't handle this case gracefully.
We can update the parser to handle this case, and/or investigate whether `proc/sys/kernel/hostname` is a valid substitute to put in `sos_archive.py` instead.
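One low-risk way to handle it is to drop empty lines before the single-line check — a minimal sketch of the idea (the list literal stands in for the two-line file content):

```python
content = ["www.example.com", ""]  # hostname plus the empty line from the trailing newline
content = [line for line in content if line.strip()]  # keep only non-empty lines
assert len(content) == 1 and content[0] == "www.example.com"
```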
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `insights/parsers/hostname.py`
Content:
```
1 """
2 hostname - command ``/bin/hostname``
3 ====================================
4
5 This parser simply reads the output of ``/bin/hostname``, which is the
6 configured fully qualified domain name of the client system. It then
7 splits it into ``hostname`` and ``domain`` and stores these as attributes,
8 along with the unmodified name in the ``fqdn`` attribute.
9
10 Examples:
11
12 >>> hostname = shared[Hostname]
13 >>> hostname.fqdn
14 'www.example.com'
15 >>> hostname.hostname
16 'www'
17 >>> hostname.domain
18 'example.com'
19
20 """
21
22 from .. import Parser, parser
23 from insights.specs import Specs
24
25
26 @parser(Specs.hostname)
27 class Hostname(Parser):
28 """Class for parsing ``hostname`` command output.
29
30 Attributes:
31 fqdn: The fully qualified domain name of the host. The same to
32 ``hostname`` when domain part is not set.
33 hostname: The hostname.
34 domain: The domain get from the fqdn.
35 """
36 def parse_content(self, content):
37 raw = None
38 if len(content) == 1:
39 raw = content[0].strip()
40 self.fqdn = raw
41 self.hostname = raw.split(".")[0] if raw else None
42 self.domain = ".".join(raw.split(".")[1:]) if raw else None
43
44 def __str__(self):
45 return "<hostname: {h}, domain: {d}>".format(h=self.hostname, d=self.domain)
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/insights/parsers/hostname.py b/insights/parsers/hostname.py
--- a/insights/parsers/hostname.py
+++ b/insights/parsers/hostname.py
@@ -34,6 +34,7 @@
domain: The domain get from the fqdn.
"""
def parse_content(self, content):
+ content = filter(None, content)
raw = None
if len(content) == 1:
raw = content[0].strip()
| {"golden_diff": "diff --git a/insights/parsers/hostname.py b/insights/parsers/hostname.py\n--- a/insights/parsers/hostname.py\n+++ b/insights/parsers/hostname.py\n@@ -34,6 +34,7 @@\n domain: The domain get from the fqdn.\n \"\"\"\n def parse_content(self, content):\n+ content = filter(None, content)\n raw = None\n if len(content) == 1:\n raw = content[0].strip()\n", "issue": "hostname parser doesn't handle sos_commands/general/hostname\nsos_commands/general/hostname contains the hostname with a newline at the end, which results in a file with two lines. The hostname parser specifically checks for one line but doesn't gracefully handle the problem.\r\n\r\nWe can update the parser to handle this case and/or investigate whether proc/sys/kernel/hostname is a valid substitute to put in sos_archive.py instead.\n", "before_files": [{"content": "\"\"\"\nhostname - command ``/bin/hostname``\n====================================\n\nThis parser simply reads the output of ``/bin/hostname``, which is the\nconfigured fully qualified domain name of the client system. It then\nsplits it into ``hostname`` and ``domain`` and stores these as attributes,\nalong with the unmodified name in the ``fqdn`` attribute.\n\nExamples:\n\n >>> hostname = shared[Hostname]\n >>> hostname.fqdn\n 'www.example.com'\n >>> hostname.hostname\n 'www'\n >>> hostname.domain\n 'example.com'\n\n\"\"\"\n\nfrom .. import Parser, parser\nfrom insights.specs import Specs\n\n\n@parser(Specs.hostname)\nclass Hostname(Parser):\n \"\"\"Class for parsing ``hostname`` command output.\n\n Attributes:\n fqdn: The fully qualified domain name of the host. The same to\n ``hostname`` when domain part is not set.\n hostname: The hostname.\n domain: The domain get from the fqdn.\n \"\"\"\n def parse_content(self, content):\n raw = None\n if len(content) == 1:\n raw = content[0].strip()\n self.fqdn = raw\n self.hostname = raw.split(\".\")[0] if raw else None\n self.domain = \".\".join(raw.split(\".\")[1:]) if raw else None\n\n def __str__(self):\n return \"<hostname: {h}, domain: {d}>\".format(h=self.hostname, d=self.domain)\n", "path": "insights/parsers/hostname.py"}], "after_files": [{"content": "\"\"\"\nhostname - command ``/bin/hostname``\n====================================\n\nThis parser simply reads the output of ``/bin/hostname``, which is the\nconfigured fully qualified domain name of the client system. It then\nsplits it into ``hostname`` and ``domain`` and stores these as attributes,\nalong with the unmodified name in the ``fqdn`` attribute.\n\nExamples:\n\n >>> hostname = shared[Hostname]\n >>> hostname.fqdn\n 'www.example.com'\n >>> hostname.hostname\n 'www'\n >>> hostname.domain\n 'example.com'\n\n\"\"\"\n\nfrom .. import Parser, parser\nfrom insights.specs import Specs\n\n\n@parser(Specs.hostname)\nclass Hostname(Parser):\n \"\"\"Class for parsing ``hostname`` command output.\n\n Attributes:\n fqdn: The fully qualified domain name of the host. The same to\n ``hostname`` when domain part is not set.\n hostname: The hostname.\n domain: The domain get from the fqdn.\n \"\"\"\n def parse_content(self, content):\n content = filter(None, content)\n raw = None\n if len(content) == 1:\n raw = content[0].strip()\n self.fqdn = raw\n self.hostname = raw.split(\".\")[0] if raw else None\n self.domain = \".\".join(raw.split(\".\")[1:]) if raw else None\n\n def __str__(self):\n return \"<hostname: {h}, domain: {d}>\".format(h=self.hostname, d=self.domain)\n", "path": "insights/parsers/hostname.py"}]} | 740 | 109 |
gh_patches_debug_411 | rasdani/github-patches | git_diff | ocf__ocfweb-48 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Have next 7 days in hours sidebar
This means all of our normal hours are visible during a normal week.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ocfweb/main/home.py`
Content:
```
1 from datetime import date
2 from datetime import timedelta
3
4 from django.shortcuts import render_to_response
5 from django.template import RequestContext
6 from django.utils import timezone
7 from ocflib.lab.hours import get_hours
8 from ocflib.lab.staff_hours import get_staff_hours_soonest_first
9
10 from ocfweb.component.blog import get_blog_posts
11 from ocfweb.component.lab_status import get_lab_status
12
13
14 def home(request):
15 hours = [
16 get_hours(date.today() + timedelta(days=i)) for i in range(7)
17 ]
18
19 blog_posts = [
20 post for post
21 in get_blog_posts()
22 if timezone.now() - post.published < timedelta(days=365)
23 ][:2]
24
25 return render_to_response(
26 'home.html',
27 {
28 'fulltitle': 'Open Computing Facility at UC Berkeley',
29 'description': (
30 'The Open Computing Facility is an all-volunteer student '
31 'organization dedicated to free and open-source computing for all UC '
32 'Berkeley students.'''
33 ),
34 'staff_hours': get_staff_hours_soonest_first()[:2],
35 'hours': hours,
36 'today': hours[0],
37 'blog_posts': blog_posts,
38 'lab_status': get_lab_status(),
39 },
40 context_instance=RequestContext(request),
41 )
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ocfweb/main/home.py b/ocfweb/main/home.py
--- a/ocfweb/main/home.py
+++ b/ocfweb/main/home.py
@@ -13,7 +13,7 @@
def home(request):
hours = [
- get_hours(date.today() + timedelta(days=i)) for i in range(7)
+ get_hours(date.today() + timedelta(days=i)) for i in range(5)
]
blog_posts = [
| {"golden_diff": "diff --git a/ocfweb/main/home.py b/ocfweb/main/home.py\n--- a/ocfweb/main/home.py\n+++ b/ocfweb/main/home.py\n@@ -13,7 +13,7 @@\n \n def home(request):\n hours = [\n- get_hours(date.today() + timedelta(days=i)) for i in range(7)\n+ get_hours(date.today() + timedelta(days=i)) for i in range(5)\n ]\n \n blog_posts = [\n", "issue": "Have next 7 days in hours sidebar\nThis means all our normal hours are visible on a normal week.\n\n", "before_files": [{"content": "from datetime import date\nfrom datetime import timedelta\n\nfrom django.shortcuts import render_to_response\nfrom django.template import RequestContext\nfrom django.utils import timezone\nfrom ocflib.lab.hours import get_hours\nfrom ocflib.lab.staff_hours import get_staff_hours_soonest_first\n\nfrom ocfweb.component.blog import get_blog_posts\nfrom ocfweb.component.lab_status import get_lab_status\n\n\ndef home(request):\n hours = [\n get_hours(date.today() + timedelta(days=i)) for i in range(7)\n ]\n\n blog_posts = [\n post for post\n in get_blog_posts()\n if timezone.now() - post.published < timedelta(days=365)\n ][:2]\n\n return render_to_response(\n 'home.html',\n {\n 'fulltitle': 'Open Computing Facility at UC Berkeley',\n 'description': (\n 'The Open Computing Facility is an all-volunteer student '\n 'organization dedicated to free and open-source computing for all UC '\n 'Berkeley students.'''\n ),\n 'staff_hours': get_staff_hours_soonest_first()[:2],\n 'hours': hours,\n 'today': hours[0],\n 'blog_posts': blog_posts,\n 'lab_status': get_lab_status(),\n },\n context_instance=RequestContext(request),\n )\n", "path": "ocfweb/main/home.py"}], "after_files": [{"content": "from datetime import date\nfrom datetime import timedelta\n\nfrom django.shortcuts import render_to_response\nfrom django.template import RequestContext\nfrom django.utils import timezone\nfrom ocflib.lab.hours import get_hours\nfrom ocflib.lab.staff_hours import get_staff_hours_soonest_first\n\nfrom ocfweb.component.blog import get_blog_posts\nfrom ocfweb.component.lab_status import get_lab_status\n\n\ndef home(request):\n hours = [\n get_hours(date.today() + timedelta(days=i)) for i in range(5)\n ]\n\n blog_posts = [\n post for post\n in get_blog_posts()\n if timezone.now() - post.published < timedelta(days=365)\n ][:2]\n\n return render_to_response(\n 'home.html',\n {\n 'fulltitle': 'Open Computing Facility at UC Berkeley',\n 'description': (\n 'The Open Computing Facility is an all-volunteer student '\n 'organization dedicated to free and open-source computing for all UC '\n 'Berkeley students.'''\n ),\n 'staff_hours': get_staff_hours_soonest_first()[:2],\n 'hours': hours,\n 'today': hours[0],\n 'blog_posts': blog_posts,\n 'lab_status': get_lab_status(),\n },\n context_instance=RequestContext(request),\n )\n", "path": "ocfweb/main/home.py"}]} | 633 | 107 |
gh_patches_debug_34747 | rasdani/github-patches | git_diff | Cloud-CV__EvalAI-1035 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add serializers for creating challenge using zip file.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/challenges/serializers.py`
Content:
```
1 from rest_framework import serializers
2
3 from hosts.serializers import ChallengeHostTeamSerializer
4
5 from .models import (
6 Challenge,
7 ChallengePhase,
8 ChallengePhaseSplit,
9 DatasetSplit,)
10
11
12 class ChallengeSerializer(serializers.ModelSerializer):
13
14 is_active = serializers.ReadOnlyField()
15
16 def __init__(self, *args, **kwargs):
17 super(ChallengeSerializer, self).__init__(*args, **kwargs)
18 context = kwargs.get('context')
19 if context and context.get('request').method != 'GET':
20 challenge_host_team = context.get('challenge_host_team')
21 kwargs['data']['creator'] = challenge_host_team.pk
22 else:
23 self.fields['creator'] = ChallengeHostTeamSerializer()
24
25 class Meta:
26 model = Challenge
27 fields = ('id', 'title', 'short_description', 'description', 'terms_and_conditions',
28 'submission_guidelines', 'evaluation_details',
29 'image', 'start_date', 'end_date', 'creator',
30 'published', 'enable_forum', 'anonymous_leaderboard', 'is_active',)
31
32
33 class ChallengePhaseSerializer(serializers.ModelSerializer):
34
35 is_active = serializers.ReadOnlyField()
36
37 def __init__(self, *args, **kwargs):
38 super(ChallengePhaseSerializer, self).__init__(*args, **kwargs)
39 context = kwargs.get('context')
40 if context:
41 challenge = context.get('challenge')
42 kwargs['data']['challenge'] = challenge.pk
43
44 class Meta:
45 model = ChallengePhase
46 fields = ('id', 'name', 'description', 'leaderboard_public', 'start_date',
47 'end_date', 'challenge', 'max_submissions_per_day', 'max_submissions',
48 'is_public', 'is_active', 'codename')
49
50
51 class DatasetSplitSerializer(serializers.ModelSerializer):
52
53 class Meta:
54 model = DatasetSplit
55 fields = '__all__'
56
57
58 class ChallengePhaseSplitSerializer(serializers.ModelSerializer):
59 """Serialize the ChallengePhaseSplits Model"""
60
61 dataset_split_name = serializers.SerializerMethodField()
62 challenge_phase_name = serializers.SerializerMethodField()
63
64 class Meta:
65 model = ChallengePhaseSplit
66 fields = '__all__'
67 fields = ('id', 'dataset_split', 'challenge_phase', 'challenge_phase_name', 'dataset_split_name', 'visibility')
68
69 def get_dataset_split_name(self, obj):
70 return obj.dataset_split.name
71
72 def get_challenge_phase_name(self, obj):
73 return obj.challenge_phase.name
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/challenges/serializers.py b/apps/challenges/serializers.py
--- a/apps/challenges/serializers.py
+++ b/apps/challenges/serializers.py
@@ -2,11 +2,12 @@
from hosts.serializers import ChallengeHostTeamSerializer
-from .models import (
- Challenge,
+from .models import (Challenge,
+ ChallengeConfiguration,
ChallengePhase,
ChallengePhaseSplit,
- DatasetSplit,)
+ DatasetSplit,
+ Leaderboard,)
class ChallengeSerializer(serializers.ModelSerializer):
@@ -63,7 +64,6 @@
class Meta:
model = ChallengePhaseSplit
- fields = '__all__'
fields = ('id', 'dataset_split', 'challenge_phase', 'challenge_phase_name', 'dataset_split_name', 'visibility')
def get_dataset_split_name(self, obj):
@@ -71,3 +71,48 @@
def get_challenge_phase_name(self, obj):
return obj.challenge_phase.name
+
+
+class ChallengeConfigSerializer(serializers.ModelSerializer):
+ """
+ Serialize the ChallengeConfiguration Model.
+ """
+ def __init__(self, *args, **kwargs):
+ super(ChallengeConfigSerializer, self).__init__(*args, **kwargs)
+ context = kwargs.get('context')
+ if context:
+ user = context.get('user')
+ kwargs['data']['user'] = user.pk
+
+ class Meta:
+ model = ChallengeConfiguration
+ fields = ('zip_configuration', 'user',)
+
+
+class LeaderboardSerializer(serializers.ModelSerializer):
+ """
+ Serialize the Leaderboard Model.
+ """
+ class Meta:
+ model = Leaderboard
+ fields = '__all__'
+
+
+class ZipChallengeSerializer(ChallengeSerializer):
+ """
+ Serializer used for creating challenge through zip file.
+ """
+ class Meta:
+ model = Challenge
+ fields = ('id', 'title', 'short_description', 'description', 'terms_and_conditions',
+ 'submission_guidelines', 'start_date', 'end_date', 'creator',
+ 'published', 'enable_forum', 'anonymous_leaderboard', 'is_active',)
+
+
+class ZipChallengePhaseSplitSerializer(serializers.ModelSerializer):
+ """
+ Serializer used for creating challenge phase split through zip file.
+ """
+ class Meta:
+ model = ChallengePhaseSplit
+ fields = '__all__'
| {"golden_diff": "diff --git a/apps/challenges/serializers.py b/apps/challenges/serializers.py\n--- a/apps/challenges/serializers.py\n+++ b/apps/challenges/serializers.py\n@@ -2,11 +2,12 @@\n \n from hosts.serializers import ChallengeHostTeamSerializer\n \n-from .models import (\n- Challenge,\n+from .models import (Challenge,\n+ ChallengeConfiguration,\n ChallengePhase,\n ChallengePhaseSplit,\n- DatasetSplit,)\n+ DatasetSplit,\n+ Leaderboard,)\n \n \n class ChallengeSerializer(serializers.ModelSerializer):\n@@ -63,7 +64,6 @@\n \n class Meta:\n model = ChallengePhaseSplit\n- fields = '__all__'\n fields = ('id', 'dataset_split', 'challenge_phase', 'challenge_phase_name', 'dataset_split_name', 'visibility')\n \n def get_dataset_split_name(self, obj):\n@@ -71,3 +71,48 @@\n \n def get_challenge_phase_name(self, obj):\n return obj.challenge_phase.name\n+\n+\n+class ChallengeConfigSerializer(serializers.ModelSerializer):\n+ \"\"\"\n+ Serialize the ChallengeConfiguration Model.\n+ \"\"\"\n+ def __init__(self, *args, **kwargs):\n+ super(ChallengeConfigSerializer, self).__init__(*args, **kwargs)\n+ context = kwargs.get('context')\n+ if context:\n+ user = context.get('user')\n+ kwargs['data']['user'] = user.pk\n+\n+ class Meta:\n+ model = ChallengeConfiguration\n+ fields = ('zip_configuration', 'user',)\n+\n+\n+class LeaderboardSerializer(serializers.ModelSerializer):\n+ \"\"\"\n+ Serialize the Leaderboard Model.\n+ \"\"\"\n+ class Meta:\n+ model = Leaderboard\n+ fields = '__all__'\n+\n+\n+class ZipChallengeSerializer(ChallengeSerializer):\n+ \"\"\"\n+ Serializer used for creating challenge through zip file.\n+ \"\"\"\n+ class Meta:\n+ model = Challenge\n+ fields = ('id', 'title', 'short_description', 'description', 'terms_and_conditions',\n+ 'submission_guidelines', 'start_date', 'end_date', 'creator',\n+ 'published', 'enable_forum', 'anonymous_leaderboard', 'is_active',)\n+\n+\n+class ZipChallengePhaseSplitSerializer(serializers.ModelSerializer):\n+ \"\"\"\n+ Serializer used for creating challenge phase split through zip file.\n+ \"\"\"\n+ class Meta:\n+ model = ChallengePhaseSplit\n+ fields = '__all__'\n", "issue": "Add serializers for creating challenge using zip file.\n\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom hosts.serializers import ChallengeHostTeamSerializer\n\nfrom .models import (\n Challenge,\n ChallengePhase,\n ChallengePhaseSplit,\n DatasetSplit,)\n\n\nclass ChallengeSerializer(serializers.ModelSerializer):\n\n is_active = serializers.ReadOnlyField()\n\n def __init__(self, *args, **kwargs):\n super(ChallengeSerializer, self).__init__(*args, **kwargs)\n context = kwargs.get('context')\n if context and context.get('request').method != 'GET':\n challenge_host_team = context.get('challenge_host_team')\n kwargs['data']['creator'] = challenge_host_team.pk\n else:\n self.fields['creator'] = ChallengeHostTeamSerializer()\n\n class Meta:\n model = Challenge\n fields = ('id', 'title', 'short_description', 'description', 'terms_and_conditions',\n 'submission_guidelines', 'evaluation_details',\n 'image', 'start_date', 'end_date', 'creator',\n 'published', 'enable_forum', 'anonymous_leaderboard', 'is_active',)\n\n\nclass ChallengePhaseSerializer(serializers.ModelSerializer):\n\n is_active = serializers.ReadOnlyField()\n\n def __init__(self, *args, **kwargs):\n super(ChallengePhaseSerializer, self).__init__(*args, **kwargs)\n context = kwargs.get('context')\n if context:\n challenge = context.get('challenge')\n kwargs['data']['challenge'] = challenge.pk\n\n 
class Meta:\n model = ChallengePhase\n fields = ('id', 'name', 'description', 'leaderboard_public', 'start_date',\n 'end_date', 'challenge', 'max_submissions_per_day', 'max_submissions',\n 'is_public', 'is_active', 'codename')\n\n\nclass DatasetSplitSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = DatasetSplit\n fields = '__all__'\n\n\nclass ChallengePhaseSplitSerializer(serializers.ModelSerializer):\n \"\"\"Serialize the ChallengePhaseSplits Model\"\"\"\n\n dataset_split_name = serializers.SerializerMethodField()\n challenge_phase_name = serializers.SerializerMethodField()\n\n class Meta:\n model = ChallengePhaseSplit\n fields = '__all__'\n fields = ('id', 'dataset_split', 'challenge_phase', 'challenge_phase_name', 'dataset_split_name', 'visibility')\n\n def get_dataset_split_name(self, obj):\n return obj.dataset_split.name\n\n def get_challenge_phase_name(self, obj):\n return obj.challenge_phase.name\n", "path": "apps/challenges/serializers.py"}], "after_files": [{"content": "from rest_framework import serializers\n\nfrom hosts.serializers import ChallengeHostTeamSerializer\n\nfrom .models import (Challenge,\n ChallengeConfiguration,\n ChallengePhase,\n ChallengePhaseSplit,\n DatasetSplit,\n Leaderboard,)\n\n\nclass ChallengeSerializer(serializers.ModelSerializer):\n\n is_active = serializers.ReadOnlyField()\n\n def __init__(self, *args, **kwargs):\n super(ChallengeSerializer, self).__init__(*args, **kwargs)\n context = kwargs.get('context')\n if context and context.get('request').method != 'GET':\n challenge_host_team = context.get('challenge_host_team')\n kwargs['data']['creator'] = challenge_host_team.pk\n else:\n self.fields['creator'] = ChallengeHostTeamSerializer()\n\n class Meta:\n model = Challenge\n fields = ('id', 'title', 'short_description', 'description', 'terms_and_conditions',\n 'submission_guidelines', 'evaluation_details',\n 'image', 'start_date', 'end_date', 'creator',\n 'published', 'enable_forum', 'anonymous_leaderboard', 'is_active',)\n\n\nclass ChallengePhaseSerializer(serializers.ModelSerializer):\n\n is_active = serializers.ReadOnlyField()\n\n def __init__(self, *args, **kwargs):\n super(ChallengePhaseSerializer, self).__init__(*args, **kwargs)\n context = kwargs.get('context')\n if context:\n challenge = context.get('challenge')\n kwargs['data']['challenge'] = challenge.pk\n\n class Meta:\n model = ChallengePhase\n fields = ('id', 'name', 'description', 'leaderboard_public', 'start_date',\n 'end_date', 'challenge', 'max_submissions_per_day', 'max_submissions',\n 'is_public', 'is_active', 'codename')\n\n\nclass DatasetSplitSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = DatasetSplit\n fields = '__all__'\n\n\nclass ChallengePhaseSplitSerializer(serializers.ModelSerializer):\n \"\"\"Serialize the ChallengePhaseSplits Model\"\"\"\n\n dataset_split_name = serializers.SerializerMethodField()\n challenge_phase_name = serializers.SerializerMethodField()\n\n class Meta:\n model = ChallengePhaseSplit\n fields = ('id', 'dataset_split', 'challenge_phase', 'challenge_phase_name', 'dataset_split_name', 'visibility')\n\n def get_dataset_split_name(self, obj):\n return obj.dataset_split.name\n\n def get_challenge_phase_name(self, obj):\n return obj.challenge_phase.name\n\n\nclass ChallengeConfigSerializer(serializers.ModelSerializer):\n \"\"\"\n Serialize the ChallengeConfiguration Model.\n \"\"\"\n def __init__(self, *args, **kwargs):\n super(ChallengeConfigSerializer, self).__init__(*args, **kwargs)\n context = kwargs.get('context')\n if 
context:\n user = context.get('user')\n kwargs['data']['user'] = user.pk\n\n class Meta:\n model = ChallengeConfiguration\n fields = ('zip_configuration', 'user',)\n\n\nclass LeaderboardSerializer(serializers.ModelSerializer):\n \"\"\"\n Serialize the Leaderboard Model.\n \"\"\"\n class Meta:\n model = Leaderboard\n fields = '__all__'\n\n\nclass ZipChallengeSerializer(ChallengeSerializer):\n \"\"\"\n Serializer used for creating challenge through zip file.\n \"\"\"\n class Meta:\n model = Challenge\n fields = ('id', 'title', 'short_description', 'description', 'terms_and_conditions',\n 'submission_guidelines', 'start_date', 'end_date', 'creator',\n 'published', 'enable_forum', 'anonymous_leaderboard', 'is_active',)\n\n\nclass ZipChallengePhaseSplitSerializer(serializers.ModelSerializer):\n \"\"\"\n Serializer used for creating challenge phase split through zip file.\n \"\"\"\n class Meta:\n model = ChallengePhaseSplit\n fields = '__all__'\n", "path": "apps/challenges/serializers.py"}]} | 924 | 525 |
gh_patches_debug_9413 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-908 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update Domain management permissions
### Issue Description
Domain management, specifically that of DNS-specific information, should be reserved for approved domain requests. Currently, if an item is in the list of registered domains and the status of the associated domain request switches to something like Withdrawn/Investigating, then the user can still edit the domain information AND the domain still appears in Active domains.
AC
1. In Active Domain Request table, do not show domain applications that are approved.
Suggestion: check to make sure when you change states on an application that the domain management page doesn't show when the status shouldn't allow it.
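A minimal Django ORM sketch of that acceptance criterion — the `status` field name and the `"approved"` value are taken from the fix shown further below; the helper function itself is illustrative:
```python
from registrar.models import DomainApplication

def active_applications_for(user):
    # Exclude approved applications so they no longer appear in the
    # Active Domain Requests table once they move past that state.
    return DomainApplication.objects.filter(creator=user).exclude(status="approved")
```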
### Additional Context (optional)
see [slack thread](https://cisa-corp.slack.com/archives/C05BDEA3C11/p1689089565297719)
### Issue Links
may relate to #701
relates to: #818
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/registrar/views/index.py`
Content:
```
1 from django.db.models import F
2 from django.shortcuts import render
3
4 from registrar.models import DomainApplication
5
6
7 def index(request):
8 """This page is available to anyone without logging in."""
9 context = {}
10 if request.user.is_authenticated:
11 applications = DomainApplication.objects.filter(creator=request.user)
12 context["domain_applications"] = applications
13
14 domains = request.user.permissions.values(
15 "role",
16 pk=F("domain__id"),
17 name=F("domain__name"),
18 created_time=F("domain__created_at"),
19 application_status=F("domain__domain_application__status"),
20 )
21 context["domains"] = domains
22 return render(request, "home.html", context)
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/registrar/views/index.py b/src/registrar/views/index.py
--- a/src/registrar/views/index.py
+++ b/src/registrar/views/index.py
@@ -9,7 +9,10 @@
context = {}
if request.user.is_authenticated:
applications = DomainApplication.objects.filter(creator=request.user)
- context["domain_applications"] = applications
+ # Let's exclude the approved applications since our
+ # domain_applications context will be used to populate
+ # the active applications table
+ context["domain_applications"] = applications.exclude(status="approved")
domains = request.user.permissions.values(
"role",
| {"golden_diff": "diff --git a/src/registrar/views/index.py b/src/registrar/views/index.py\n--- a/src/registrar/views/index.py\n+++ b/src/registrar/views/index.py\n@@ -9,7 +9,10 @@\n context = {}\n if request.user.is_authenticated:\n applications = DomainApplication.objects.filter(creator=request.user)\n- context[\"domain_applications\"] = applications\n+ # Let's exclude the approved applications since our\n+ # domain_applications context will be used to populate\n+ # the active applications table\n+ context[\"domain_applications\"] = applications.exclude(status=\"approved\")\n \n domains = request.user.permissions.values(\n \"role\",\n", "issue": "Update Domain management permissions \n### Issue Description\r\n\r\nDomain management, specifically that of DNS specific information, should be reserved for approved domain requests. Currently, if an item is in the list of registered domains and the status of the associated domain request switches to something like Withdrawn/Investigating, then the user can still edit the domain information AND the domain still appears in Active domains\r\n\r\nAC\r\n\r\n1. In Active Domain Request table, do not show domain applications that are approved. \r\n\r\n\r\nSuggestion: check to make sure when you change states on an application that the domain management page doesn't show when the status shouldn't allow it.\r\n\r\n\r\n### Additional Context (optional)\r\n\r\nsee [slack thread](https://cisa-corp.slack.com/archives/C05BDEA3C11/p1689089565297719)\r\n\r\n### Issue Links\r\nmay relate to #701 \r\nrelates to: #818 \r\n_No response_\n", "before_files": [{"content": "from django.db.models import F\nfrom django.shortcuts import render\n\nfrom registrar.models import DomainApplication\n\n\ndef index(request):\n \"\"\"This page is available to anyone without logging in.\"\"\"\n context = {}\n if request.user.is_authenticated:\n applications = DomainApplication.objects.filter(creator=request.user)\n context[\"domain_applications\"] = applications\n\n domains = request.user.permissions.values(\n \"role\",\n pk=F(\"domain__id\"),\n name=F(\"domain__name\"),\n created_time=F(\"domain__created_at\"),\n application_status=F(\"domain__domain_application__status\"),\n )\n context[\"domains\"] = domains\n return render(request, \"home.html\", context)\n", "path": "src/registrar/views/index.py"}], "after_files": [{"content": "from django.db.models import F\nfrom django.shortcuts import render\n\nfrom registrar.models import DomainApplication\n\n\ndef index(request):\n \"\"\"This page is available to anyone without logging in.\"\"\"\n context = {}\n if request.user.is_authenticated:\n applications = DomainApplication.objects.filter(creator=request.user)\n # Let's exclude the approved applications since our\n # domain_applications context will be used to populate\n # the active applications table\n context[\"domain_applications\"] = applications.exclude(status=\"approved\")\n\n domains = request.user.permissions.values(\n \"role\",\n pk=F(\"domain__id\"),\n name=F(\"domain__name\"),\n created_time=F(\"domain__created_at\"),\n application_status=F(\"domain__domain_application__status\"),\n )\n context[\"domains\"] = domains\n return render(request, \"home.html\", context)\n", "path": "src/registrar/views/index.py"}]} | 644 | 142 |
gh_patches_debug_2812 | rasdani/github-patches | git_diff | comic__grand-challenge.org-827 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update to Django 2.2
- [ ] Check that all Django apps support 2.2
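One concrete class of breakage such an upgrade can surface — a hypothetical illustration, assuming a `<uuid:pk>` path converter is in use (that assumption is mine; the actual one-line fix appears in the diff below):
```python
import uuid
from django.utils._os import safe_join

pk = uuid.uuid4()                      # what a <uuid:pk> URL converter hands to the view
# safe_join("/images", pk)             # raises TypeError: expected str, not UUID
name = safe_join("/images", str(pk))   # an explicit str() keeps the path join working
```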
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/serving/views.py`
Content:
```
1 import os
2 import posixpath
3 import re
4
5 from django.conf import settings
6 from django.core.files import File
7 from django.core.files.storage import DefaultStorage
8 from django.http import Http404, HttpResponse
9 from django.shortcuts import get_object_or_404
10 from django.utils._os import safe_join
11 from rest_framework.authentication import TokenAuthentication
12 from rest_framework.exceptions import AuthenticationFailed
13
14 from grandchallenge.cases.models import Image
15 from grandchallenge.challenges.models import Challenge
16 from grandchallenge.core.storage import ProtectedS3Storage
17 from grandchallenge.evaluation.models import Submission
18 from grandchallenge.serving.api import serve_file
19 from grandchallenge.serving.permissions import (
20 can_access,
21 user_can_download_image,
22 user_can_download_submission,
23 )
24
25
26 def serve_fullpath(*, fullpath):
27 storage = DefaultStorage()
28
29 if not (os.path.abspath(fullpath) == fullpath) or not storage.exists(
30 fullpath
31 ):
32 raise Http404("File not found.")
33
34 try:
35 f = storage.open(fullpath, "rb")
36 file = File(f)
37 return serve_file(file, save_as=True)
38 except IOError:
39 raise Http404("File not found.")
40
41
42 def protected_storage_redirect(*, name):
43 # Get the storage with the internal redirect and auth. This will prepend
44 # settings.PROTECTED_S3_STORAGE_KWARGS['endpoint_url'] to the url
45 storage = ProtectedS3Storage(internal=True)
46
47 if not storage.exists(name=name):
48 raise Http404("File not found.")
49
50 url = storage.url(name=name)
51
52 # Now strip the endpoint_url
53 external_url = re.match(
54 f"^{settings.PROTECTED_S3_STORAGE_KWARGS['endpoint_url']}(.*)$", url
55 ).group(1)
56
57 response = HttpResponse()
58 response["X-Accel-Redirect"] = external_url
59
60 return response
61
62
63 def serve_folder(request, *, challenge_name=None, folder=None, path):
64 """
65 Serve static files in a folder.
66
67 If the file is in a challenge folder, then the subfolders of this challenge
68 will be checked for permissions, see `can_access`.
69
70 If the challenge_short_name is not set, then the folder must be set.
71 ALL FILES IN THIS FOLDER WILL BE AVAILABLE TO DOWNLOAD.
72 """
73 path = posixpath.normpath(path).lstrip("/")
74
75 if challenge_name:
76 if folder:
77 raise AttributeError("Only challenge_name or folder should be set")
78
79 challenge = get_object_or_404(
80 Challenge, short_name__iexact=challenge_name
81 )
82
83 document_root = safe_join(settings.MEDIA_ROOT, challenge.short_name)
84 fullpath = safe_join(document_root, path)
85 allowed = can_access(
86 request.user,
87 fullpath[len(document_root) :].lstrip("/"),
88 challenge=challenge,
89 )
90 elif folder:
91 document_root = safe_join(settings.MEDIA_ROOT, folder)
92 fullpath = safe_join(document_root, path)
93 allowed = True
94 else:
95 raise AttributeError("challenge_name or folder must be set")
96
97 if not allowed:
98 raise Http404("File not found.")
99
100 return serve_fullpath(fullpath=fullpath)
101
102
103 def serve_images(request, *, pk, path):
104 document_root = safe_join(f"/{settings.IMAGE_FILES_SUBDIRECTORY}", pk)
105 path = posixpath.normpath(path).lstrip("/")
106 name = safe_join(document_root, path)
107
108 try:
109 image = Image.objects.get(pk=pk)
110 except Image.DoesNotExist:
111 raise Http404("File not found.")
112
113 try:
114 user, _ = TokenAuthentication().authenticate(request)
115 except (AuthenticationFailed, TypeError):
116 user = request.user
117
118 if user_can_download_image(user=user, image=image):
119 return protected_storage_redirect(name=name)
120
121 raise Http404("File not found.")
122
123
124 def serve_submissions(request, *, submission_pk, **_):
125 try:
126 submission = Submission.objects.get(pk=submission_pk)
127 except Submission.DoesNotExist:
128 raise Http404("File not found.")
129
130 if user_can_download_submission(user=request.user, submission=submission):
131 return protected_storage_redirect(name=submission.file.name)
132
133 raise Http404("File not found.")
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/serving/views.py b/app/grandchallenge/serving/views.py
--- a/app/grandchallenge/serving/views.py
+++ b/app/grandchallenge/serving/views.py
@@ -101,7 +101,7 @@
def serve_images(request, *, pk, path):
- document_root = safe_join(f"/{settings.IMAGE_FILES_SUBDIRECTORY}", pk)
+ document_root = safe_join(f"/{settings.IMAGE_FILES_SUBDIRECTORY}", str(pk))
path = posixpath.normpath(path).lstrip("/")
name = safe_join(document_root, path)
| {"golden_diff": "diff --git a/app/grandchallenge/serving/views.py b/app/grandchallenge/serving/views.py\n--- a/app/grandchallenge/serving/views.py\n+++ b/app/grandchallenge/serving/views.py\n@@ -101,7 +101,7 @@\n \n \n def serve_images(request, *, pk, path):\n- document_root = safe_join(f\"/{settings.IMAGE_FILES_SUBDIRECTORY}\", pk)\n+ document_root = safe_join(f\"/{settings.IMAGE_FILES_SUBDIRECTORY}\", str(pk))\n path = posixpath.normpath(path).lstrip(\"/\")\n name = safe_join(document_root, path)\n", "issue": "Update to Django 2.2\n- [ ] Check that all Django apps support 2.2\n", "before_files": [{"content": "import os\nimport posixpath\nimport re\n\nfrom django.conf import settings\nfrom django.core.files import File\nfrom django.core.files.storage import DefaultStorage\nfrom django.http import Http404, HttpResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.utils._os import safe_join\nfrom rest_framework.authentication import TokenAuthentication\nfrom rest_framework.exceptions import AuthenticationFailed\n\nfrom grandchallenge.cases.models import Image\nfrom grandchallenge.challenges.models import Challenge\nfrom grandchallenge.core.storage import ProtectedS3Storage\nfrom grandchallenge.evaluation.models import Submission\nfrom grandchallenge.serving.api import serve_file\nfrom grandchallenge.serving.permissions import (\n can_access,\n user_can_download_image,\n user_can_download_submission,\n)\n\n\ndef serve_fullpath(*, fullpath):\n storage = DefaultStorage()\n\n if not (os.path.abspath(fullpath) == fullpath) or not storage.exists(\n fullpath\n ):\n raise Http404(\"File not found.\")\n\n try:\n f = storage.open(fullpath, \"rb\")\n file = File(f)\n return serve_file(file, save_as=True)\n except IOError:\n raise Http404(\"File not found.\")\n\n\ndef protected_storage_redirect(*, name):\n # Get the storage with the internal redirect and auth. 
This will prepend\n # settings.PROTECTED_S3_STORAGE_KWARGS['endpoint_url'] to the url\n storage = ProtectedS3Storage(internal=True)\n\n if not storage.exists(name=name):\n raise Http404(\"File not found.\")\n\n url = storage.url(name=name)\n\n # Now strip the endpoint_url\n external_url = re.match(\n f\"^{settings.PROTECTED_S3_STORAGE_KWARGS['endpoint_url']}(.*)$\", url\n ).group(1)\n\n response = HttpResponse()\n response[\"X-Accel-Redirect\"] = external_url\n\n return response\n\n\ndef serve_folder(request, *, challenge_name=None, folder=None, path):\n \"\"\"\n Serve static files in a folder.\n\n If the file is in a challenge folder, then the subfolders of this challenge\n will be checked for permissions, see `can_access`.\n\n If the challenge_short_name is not set, then the folder must be set.\n ALL FILES IN THIS FOLDER WILL BE AVAILABLE TO DOWNLOAD.\n \"\"\"\n path = posixpath.normpath(path).lstrip(\"/\")\n\n if challenge_name:\n if folder:\n raise AttributeError(\"Only challenge_name or folder should be set\")\n\n challenge = get_object_or_404(\n Challenge, short_name__iexact=challenge_name\n )\n\n document_root = safe_join(settings.MEDIA_ROOT, challenge.short_name)\n fullpath = safe_join(document_root, path)\n allowed = can_access(\n request.user,\n fullpath[len(document_root) :].lstrip(\"/\"),\n challenge=challenge,\n )\n elif folder:\n document_root = safe_join(settings.MEDIA_ROOT, folder)\n fullpath = safe_join(document_root, path)\n allowed = True\n else:\n raise AttributeError(\"challenge_name or folder must be set\")\n\n if not allowed:\n raise Http404(\"File not found.\")\n\n return serve_fullpath(fullpath=fullpath)\n\n\ndef serve_images(request, *, pk, path):\n document_root = safe_join(f\"/{settings.IMAGE_FILES_SUBDIRECTORY}\", pk)\n path = posixpath.normpath(path).lstrip(\"/\")\n name = safe_join(document_root, path)\n\n try:\n image = Image.objects.get(pk=pk)\n except Image.DoesNotExist:\n raise Http404(\"File not found.\")\n\n try:\n user, _ = TokenAuthentication().authenticate(request)\n except (AuthenticationFailed, TypeError):\n user = request.user\n\n if user_can_download_image(user=user, image=image):\n return protected_storage_redirect(name=name)\n\n raise Http404(\"File not found.\")\n\n\ndef serve_submissions(request, *, submission_pk, **_):\n try:\n submission = Submission.objects.get(pk=submission_pk)\n except Submission.DoesNotExist:\n raise Http404(\"File not found.\")\n\n if user_can_download_submission(user=request.user, submission=submission):\n return protected_storage_redirect(name=submission.file.name)\n\n raise Http404(\"File not found.\")\n", "path": "app/grandchallenge/serving/views.py"}], "after_files": [{"content": "import os\nimport posixpath\nimport re\n\nfrom django.conf import settings\nfrom django.core.files import File\nfrom django.core.files.storage import DefaultStorage\nfrom django.http import Http404, HttpResponse\nfrom django.shortcuts import get_object_or_404\nfrom django.utils._os import safe_join\nfrom rest_framework.authentication import TokenAuthentication\nfrom rest_framework.exceptions import AuthenticationFailed\n\nfrom grandchallenge.cases.models import Image\nfrom grandchallenge.challenges.models import Challenge\nfrom grandchallenge.core.storage import ProtectedS3Storage\nfrom grandchallenge.evaluation.models import Submission\nfrom grandchallenge.serving.api import serve_file\nfrom grandchallenge.serving.permissions import (\n can_access,\n user_can_download_image,\n user_can_download_submission,\n)\n\n\ndef serve_fullpath(*, 
fullpath):\n storage = DefaultStorage()\n\n if not (os.path.abspath(fullpath) == fullpath) or not storage.exists(\n fullpath\n ):\n raise Http404(\"File not found.\")\n\n try:\n f = storage.open(fullpath, \"rb\")\n file = File(f)\n return serve_file(file, save_as=True)\n except IOError:\n raise Http404(\"File not found.\")\n\n\ndef protected_storage_redirect(*, name):\n # Get the storage with the internal redirect and auth. This will prepend\n # settings.PROTECTED_S3_STORAGE_KWARGS['endpoint_url'] to the url\n storage = ProtectedS3Storage(internal=True)\n\n if not storage.exists(name=name):\n raise Http404(\"File not found.\")\n\n url = storage.url(name=name)\n\n # Now strip the endpoint_url\n external_url = re.match(\n f\"^{settings.PROTECTED_S3_STORAGE_KWARGS['endpoint_url']}(.*)$\", url\n ).group(1)\n\n response = HttpResponse()\n response[\"X-Accel-Redirect\"] = external_url\n\n return response\n\n\ndef serve_folder(request, *, challenge_name=None, folder=None, path):\n \"\"\"\n Serve static files in a folder.\n\n If the file is in a challenge folder, then the subfolders of this challenge\n will be checked for permissions, see `can_access`.\n\n If the challenge_short_name is not set, then the folder must be set.\n ALL FILES IN THIS FOLDER WILL BE AVAILABLE TO DOWNLOAD.\n \"\"\"\n path = posixpath.normpath(path).lstrip(\"/\")\n\n if challenge_name:\n if folder:\n raise AttributeError(\"Only challenge_name or folder should be set\")\n\n challenge = get_object_or_404(\n Challenge, short_name__iexact=challenge_name\n )\n\n document_root = safe_join(settings.MEDIA_ROOT, challenge.short_name)\n fullpath = safe_join(document_root, path)\n allowed = can_access(\n request.user,\n fullpath[len(document_root) :].lstrip(\"/\"),\n challenge=challenge,\n )\n elif folder:\n document_root = safe_join(settings.MEDIA_ROOT, folder)\n fullpath = safe_join(document_root, path)\n allowed = True\n else:\n raise AttributeError(\"challenge_name or folder must be set\")\n\n if not allowed:\n raise Http404(\"File not found.\")\n\n return serve_fullpath(fullpath=fullpath)\n\n\ndef serve_images(request, *, pk, path):\n document_root = safe_join(f\"/{settings.IMAGE_FILES_SUBDIRECTORY}\", str(pk))\n path = posixpath.normpath(path).lstrip(\"/\")\n name = safe_join(document_root, path)\n\n try:\n image = Image.objects.get(pk=pk)\n except Image.DoesNotExist:\n raise Http404(\"File not found.\")\n\n try:\n user, _ = TokenAuthentication().authenticate(request)\n except (AuthenticationFailed, TypeError):\n user = request.user\n\n if user_can_download_image(user=user, image=image):\n return protected_storage_redirect(name=name)\n\n raise Http404(\"File not found.\")\n\n\ndef serve_submissions(request, *, submission_pk, **_):\n try:\n submission = Submission.objects.get(pk=submission_pk)\n except Submission.DoesNotExist:\n raise Http404(\"File not found.\")\n\n if user_can_download_submission(user=request.user, submission=submission):\n return protected_storage_redirect(name=submission.file.name)\n\n raise Http404(\"File not found.\")\n", "path": "app/grandchallenge/serving/views.py"}]} | 1,495 | 128 |
gh_patches_debug_13393 | rasdani/github-patches | git_diff | pulp__pulpcore-4310 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Import-check may return an error saying "directory does not exist" when it actually just lacks permissions
Endpoint /pulp/api/v3/importers/core/pulp/import-check/ returns an error saying "Directory does not exist" when the pulp user lacks permissions to read said directory.
**To Reproduce**
Try importing content from a directory where the pulp user doesn't have read access.
**Expected behavior**
Error returned should indicate the permission error.
**Additional context**
Pulp is using os.path.exists() method to verify if the directory exists: https://github.com/pulp/pulpcore/blob/main/pulpcore/app/views/importer.py#L44-L45
However, the method can return False if permission is not granted to access the directory, even if the directory exists:
~~~
os.path.exists(path)
Return True if path refers to an existing path or an open file descriptor. Returns False for broken symbolic links. On some platforms, this function may return False if permission is not granted to execute os.stat() on the requested file, even if the path physically exists.
~~~
os.path method documentation -> https://docs.python.org/3/library/os.path.html
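A small sketch of the difference (the function names here are illustrative, not from the pulpcore codebase; the patch below switches to os.stat for exactly this reason):
```python
import os

def dir_exists_quietly(path):
    # os.path.exists() returns False both for "missing" and "unreadable",
    # so the caller cannot tell the two situations apart.
    return os.path.exists(path)

def dir_exists_verbosely(path):
    # os.stat() raises FileNotFoundError for a missing path and
    # PermissionError for one the process may not stat, so the real
    # cause can be reported back to the user.
    try:
        os.stat(path)
        return True, None
    except OSError as e:
        return False, str(e)
```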
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/views/importer.py`
Content:
```
1 from gettext import gettext as _
2 import json
3 import os
4 from drf_spectacular.utils import extend_schema
5 from rest_framework.views import APIView
6 from rest_framework.response import Response
7
8 from pulpcore.app import settings
9 from pulpcore.app.serializers import PulpImportCheckResponseSerializer, PulpImportCheckSerializer
10
11
12 def _check_allowed_import_path(a_path):
13 user_provided_realpath = os.path.realpath(a_path)
14 for allowed_path in settings.ALLOWED_IMPORT_PATHS:
15 if user_provided_realpath.startswith(allowed_path):
16 return True, None
17 return False, _(
18 "{} is not an allowed import path".format(os.path.dirname(os.path.realpath(a_path)))
19 )
20
21
22 def _validate_file(in_param, data):
23 """
24 Returns a (is-valid, msgs[]) tuple describing all problems found with data[in_param]
25
26 We check for a number of things, attempting to return all the errors we can find. We don't want
27 to give out information for files in arbitrary locations on the filesystem; if the check
28 for ALLOWED_IMPORT_PATHS fails, we report that and ignore any other problems.
29
30 If the directory containing the base-file doesn't exist, or isn't readable, or the specified
31 file doesn't exist, report and return.
32
33 Error-messages for all other checks are additive.
34 """
35 # check allowed, leave if failed
36 file = data[in_param]
37 real_file = os.path.realpath(file)
38 rc, msg = _check_allowed_import_path(real_file)
39 if not rc:
40 return rc, [msg]
41
42 # check directory-sanity, leave if failed
43 owning_dir = os.path.dirname(real_file)
44 if not os.path.exists(owning_dir):
45 return False, [_("directory {} does not exist").format(owning_dir)]
46 if not os.access(owning_dir, os.R_OK):
47 return False, [_("directory {} does not allow read-access").format(owning_dir)]
48
49 # check file-exists, leave if failed
50 if not os.path.exists(real_file):
51 return False, [_("file {} does not exist").format(real_file)]
52
53 # check file-sanity
54 msgs = []
55 isfile = os.path.isfile(real_file)
56 readable = os.access(real_file, os.R_OK)
57
58 rc = isfile and readable
59 if not isfile:
60 msgs.append(_("{} is not a file".format(real_file)))
61 if not readable:
62 msgs.append(_("{} exists but cannot be read".format(real_file)))
63
64 # extra check for toc-dir-write
65 if in_param == "toc":
66 if not os.access(owning_dir, os.W_OK):
67 rc = False
68 msgs.append(_("directory {} must allow pulp write-access".format(owning_dir)))
69
70 return rc, msgs
71
72
73 class PulpImporterImportCheckView(APIView):
74 """
75 Returns validity of proposed parameters for a PulpImport call.
76 """
77
78 @extend_schema(
79 summary="Validate the parameters to be used for a PulpImport call",
80 operation_id="pulp_import_check_post",
81 request=PulpImportCheckSerializer,
82 responses={200: PulpImportCheckResponseSerializer},
83 )
84 def post(self, request, format=None):
85 """
86 Evaluates validity of proposed PulpImport parameters 'toc', 'path', and 'repo_mapping'.
87
88 * Checks that toc, path are in ALLOWED_IMPORT_PATHS
89 * if ALLOWED:
90 * Checks that toc, path exist and are readable
91 * If toc specified, checks that containing dir is writeable
92 * Checks that repo_mapping is valid JSON
93 """
94 serializer = PulpImportCheckSerializer(data=request.data)
95 if serializer.is_valid():
96 data = {}
97 if "toc" in serializer.data:
98 data["toc"] = {}
99 data["toc"]["context"] = serializer.data["toc"]
100 data["toc"]["is_valid"], data["toc"]["messages"] = _validate_file(
101 "toc", serializer.data
102 )
103
104 if "path" in serializer.data:
105 data["path"] = {}
106 data["path"]["context"] = serializer.data["path"]
107 data["path"]["is_valid"], data["path"]["messages"] = _validate_file(
108 "path", serializer.data
109 )
110
111 if "repo_mapping" in serializer.data:
112 data["repo_mapping"] = {}
113 data["repo_mapping"]["context"] = serializer.data["repo_mapping"]
114 try:
115 json.loads(serializer.data["repo_mapping"])
116 data["repo_mapping"]["is_valid"] = True
117 data["repo_mapping"]["messages"] = []
118 except json.JSONDecodeError:
119 data["repo_mapping"]["is_valid"] = False
120 data["repo_mapping"]["messages"] = [_("invalid JSON")]
121
122 crs = PulpImportCheckResponseSerializer(data, context={"request": request})
123 return Response(crs.data)
124 return Response(serializer.errors, status=400)
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulpcore/app/views/importer.py b/pulpcore/app/views/importer.py
--- a/pulpcore/app/views/importer.py
+++ b/pulpcore/app/views/importer.py
@@ -40,11 +40,14 @@
return rc, [msg]
# check directory-sanity, leave if failed
+ # use os.stat to ensure directory exists and pulp has read-access
+ # return any errors received from os.stat to the user
+
owning_dir = os.path.dirname(real_file)
- if not os.path.exists(owning_dir):
- return False, [_("directory {} does not exist").format(owning_dir)]
- if not os.access(owning_dir, os.R_OK):
- return False, [_("directory {} does not allow read-access").format(owning_dir)]
+ try:
+ os.stat(owning_dir)
+ except OSError as e:
+ return False, [_("{}").format(e)]
# check file-exists, leave if failed
if not os.path.exists(real_file):
| {"golden_diff": "diff --git a/pulpcore/app/views/importer.py b/pulpcore/app/views/importer.py\n--- a/pulpcore/app/views/importer.py\n+++ b/pulpcore/app/views/importer.py\n@@ -40,11 +40,14 @@\n return rc, [msg]\n \n # check directory-sanity, leave if failed\n+ # use os.stat to ensure directory exists and pulp has read-access\n+ # return any errors received from os.stat to the user\n+\n owning_dir = os.path.dirname(real_file)\n- if not os.path.exists(owning_dir):\n- return False, [_(\"directory {} does not exist\").format(owning_dir)]\n- if not os.access(owning_dir, os.R_OK):\n- return False, [_(\"directory {} does not allow read-access\").format(owning_dir)]\n+ try:\n+ os.stat(owning_dir)\n+ except OSError as e:\n+ return False, [_(\"{}\").format(e)]\n \n # check file-exists, leave if failed\n if not os.path.exists(real_file):\n", "issue": "Import-check may return error saying \"director does not exist\" when it actually just lack permissions\nEndpoint /pulp/api/v3/importers/core/pulp/import-check/ returns error saying \"Directory does not exist\" when pulp user lack permissions to read said directory.\r\n\r\n**To Reproduce**\r\n\r\nTry importing content from a directory where pulp user doesn't have read access.\r\n\r\n**Expected behavior**\r\nError returned should indicate the permission error.\r\n\r\n**Additional context**\r\n\r\nPulp is using os.path.exists() method to verify if the directory exists: https://github.com/pulp/pulpcore/blob/main/pulpcore/app/views/importer.py#L44-L45\r\n\r\nHowever, the method can return false if permission is not granted to access the directory even if the directory exists\r\n\r\n~~~\r\nos.path.exists(path)\r\nReturn True if path refers to an existing path or an open file descriptor. Returns False for broken symbolic links. On some platforms, this function may return False if permission is not granted to execute os.stat() on the requested file, even if the path physically exists.\r\n~~~\r\n\r\nos.path method documentation -> https://docs.python.org/3/library/os.path.html\r\n\n", "before_files": [{"content": "from gettext import gettext as _\nimport json\nimport os\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework.views import APIView\nfrom rest_framework.response import Response\n\nfrom pulpcore.app import settings\nfrom pulpcore.app.serializers import PulpImportCheckResponseSerializer, PulpImportCheckSerializer\n\n\ndef _check_allowed_import_path(a_path):\n user_provided_realpath = os.path.realpath(a_path)\n for allowed_path in settings.ALLOWED_IMPORT_PATHS:\n if user_provided_realpath.startswith(allowed_path):\n return True, None\n return False, _(\n \"{} is not an allowed import path\".format(os.path.dirname(os.path.realpath(a_path)))\n )\n\n\ndef _validate_file(in_param, data):\n \"\"\"\n Returns a (is-valid, msgs[]) tuple describing all problems found with data[in_param]\n\n We check for a number of things, attempting to return all the errors we can find. 
We don't want\n to give out information for files in arbitrary locations on the filesystem; if the check\n for ALLOWED_IMPORT_PATHS fails, we report that and ignore any other problems.\n\n If the directory containing the base-file doesn't exist, or isn't readable, or the specified\n file doesn't exist, report and return.\n\n Error-messages for all other checks are additive.\n \"\"\"\n # check allowed, leave if failed\n file = data[in_param]\n real_file = os.path.realpath(file)\n rc, msg = _check_allowed_import_path(real_file)\n if not rc:\n return rc, [msg]\n\n # check directory-sanity, leave if failed\n owning_dir = os.path.dirname(real_file)\n if not os.path.exists(owning_dir):\n return False, [_(\"directory {} does not exist\").format(owning_dir)]\n if not os.access(owning_dir, os.R_OK):\n return False, [_(\"directory {} does not allow read-access\").format(owning_dir)]\n\n # check file-exists, leave if failed\n if not os.path.exists(real_file):\n return False, [_(\"file {} does not exist\").format(real_file)]\n\n # check file-sanity\n msgs = []\n isfile = os.path.isfile(real_file)\n readable = os.access(real_file, os.R_OK)\n\n rc = isfile and readable\n if not isfile:\n msgs.append(_(\"{} is not a file\".format(real_file)))\n if not readable:\n msgs.append(_(\"{} exists but cannot be read\".format(real_file)))\n\n # extra check for toc-dir-write\n if in_param == \"toc\":\n if not os.access(owning_dir, os.W_OK):\n rc = False\n msgs.append(_(\"directory {} must allow pulp write-access\".format(owning_dir)))\n\n return rc, msgs\n\n\nclass PulpImporterImportCheckView(APIView):\n \"\"\"\n Returns validity of proposed parameters for a PulpImport call.\n \"\"\"\n\n @extend_schema(\n summary=\"Validate the parameters to be used for a PulpImport call\",\n operation_id=\"pulp_import_check_post\",\n request=PulpImportCheckSerializer,\n responses={200: PulpImportCheckResponseSerializer},\n )\n def post(self, request, format=None):\n \"\"\"\n Evaluates validity of proposed PulpImport parameters 'toc', 'path', and 'repo_mapping'.\n\n * Checks that toc, path are in ALLOWED_IMPORT_PATHS\n * if ALLOWED:\n * Checks that toc, path exist and are readable\n * If toc specified, checks that containing dir is writeable\n * Checks that repo_mapping is valid JSON\n \"\"\"\n serializer = PulpImportCheckSerializer(data=request.data)\n if serializer.is_valid():\n data = {}\n if \"toc\" in serializer.data:\n data[\"toc\"] = {}\n data[\"toc\"][\"context\"] = serializer.data[\"toc\"]\n data[\"toc\"][\"is_valid\"], data[\"toc\"][\"messages\"] = _validate_file(\n \"toc\", serializer.data\n )\n\n if \"path\" in serializer.data:\n data[\"path\"] = {}\n data[\"path\"][\"context\"] = serializer.data[\"path\"]\n data[\"path\"][\"is_valid\"], data[\"path\"][\"messages\"] = _validate_file(\n \"path\", serializer.data\n )\n\n if \"repo_mapping\" in serializer.data:\n data[\"repo_mapping\"] = {}\n data[\"repo_mapping\"][\"context\"] = serializer.data[\"repo_mapping\"]\n try:\n json.loads(serializer.data[\"repo_mapping\"])\n data[\"repo_mapping\"][\"is_valid\"] = True\n data[\"repo_mapping\"][\"messages\"] = []\n except json.JSONDecodeError:\n data[\"repo_mapping\"][\"is_valid\"] = False\n data[\"repo_mapping\"][\"messages\"] = [_(\"invalid JSON\")]\n\n crs = PulpImportCheckResponseSerializer(data, context={\"request\": request})\n return Response(crs.data)\n return Response(serializer.errors, status=400)\n", "path": "pulpcore/app/views/importer.py"}], "after_files": [{"content": "from gettext import gettext as _\nimport 
json\nimport os\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework.views import APIView\nfrom rest_framework.response import Response\n\nfrom pulpcore.app import settings\nfrom pulpcore.app.serializers import PulpImportCheckResponseSerializer, PulpImportCheckSerializer\n\n\ndef _check_allowed_import_path(a_path):\n user_provided_realpath = os.path.realpath(a_path)\n for allowed_path in settings.ALLOWED_IMPORT_PATHS:\n if user_provided_realpath.startswith(allowed_path):\n return True, None\n return False, _(\n \"{} is not an allowed import path\".format(os.path.dirname(os.path.realpath(a_path)))\n )\n\n\ndef _validate_file(in_param, data):\n \"\"\"\n Returns a (is-valid, msgs[]) tuple describing all problems found with data[in_param]\n\n We check for a number of things, attempting to return all the errors we can find. We don't want\n to give out information for files in arbitrary locations on the filesystem; if the check\n for ALLOWED_IMPORT_PATHS fails, we report that and ignore any other problems.\n\n If the directory containing the base-file doesn't exist, or isn't readable, or the specified\n file doesn't exist, report and return.\n\n Error-messages for all other checks are additive.\n \"\"\"\n # check allowed, leave if failed\n file = data[in_param]\n real_file = os.path.realpath(file)\n rc, msg = _check_allowed_import_path(real_file)\n if not rc:\n return rc, [msg]\n\n # check directory-sanity, leave if failed\n # use os.stat to ensure directory exists and pulp has read-access\n # return any errors received from os.stat to the user\n\n owning_dir = os.path.dirname(real_file)\n try:\n os.stat(owning_dir)\n except OSError as e:\n return False, [_(\"{}\").format(e)]\n\n # check file-exists, leave if failed\n if not os.path.exists(real_file):\n return False, [_(\"file {} does not exist\").format(real_file)]\n\n # check file-sanity\n msgs = []\n isfile = os.path.isfile(real_file)\n readable = os.access(real_file, os.R_OK)\n\n rc = isfile and readable\n if not isfile:\n msgs.append(_(\"{} is not a file\".format(real_file)))\n if not readable:\n msgs.append(_(\"{} exists but cannot be read\".format(real_file)))\n\n # extra check for toc-dir-write\n if in_param == \"toc\":\n if not os.access(owning_dir, os.W_OK):\n rc = False\n msgs.append(_(\"directory {} must allow pulp write-access\".format(owning_dir)))\n\n return rc, msgs\n\n\nclass PulpImporterImportCheckView(APIView):\n \"\"\"\n Returns validity of proposed parameters for a PulpImport call.\n \"\"\"\n\n @extend_schema(\n summary=\"Validate the parameters to be used for a PulpImport call\",\n operation_id=\"pulp_import_check_post\",\n request=PulpImportCheckSerializer,\n responses={200: PulpImportCheckResponseSerializer},\n )\n def post(self, request, format=None):\n \"\"\"\n Evaluates validity of proposed PulpImport parameters 'toc', 'path', and 'repo_mapping'.\n\n * Checks that toc, path are in ALLOWED_IMPORT_PATHS\n * if ALLOWED:\n * Checks that toc, path exist and are readable\n * If toc specified, checks that containing dir is writeable\n * Checks that repo_mapping is valid JSON\n \"\"\"\n serializer = PulpImportCheckSerializer(data=request.data)\n if serializer.is_valid():\n data = {}\n if \"toc\" in serializer.data:\n data[\"toc\"] = {}\n data[\"toc\"][\"context\"] = serializer.data[\"toc\"]\n data[\"toc\"][\"is_valid\"], data[\"toc\"][\"messages\"] = _validate_file(\n \"toc\", serializer.data\n )\n\n if \"path\" in serializer.data:\n data[\"path\"] = {}\n data[\"path\"][\"context\"] = 
serializer.data[\"path\"]\n data[\"path\"][\"is_valid\"], data[\"path\"][\"messages\"] = _validate_file(\n \"path\", serializer.data\n )\n\n if \"repo_mapping\" in serializer.data:\n data[\"repo_mapping\"] = {}\n data[\"repo_mapping\"][\"context\"] = serializer.data[\"repo_mapping\"]\n try:\n json.loads(serializer.data[\"repo_mapping\"])\n data[\"repo_mapping\"][\"is_valid\"] = True\n data[\"repo_mapping\"][\"messages\"] = []\n except json.JSONDecodeError:\n data[\"repo_mapping\"][\"is_valid\"] = False\n data[\"repo_mapping\"][\"messages\"] = [_(\"invalid JSON\")]\n\n crs = PulpImportCheckResponseSerializer(data, context={\"request\": request})\n return Response(crs.data)\n return Response(serializer.errors, status=400)\n", "path": "pulpcore/app/views/importer.py"}]} | 1,824 | 237 |
gh_patches_debug_11032 | rasdani/github-patches | git_diff | spesmilo__electrum-1608
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
allow non-root installation on Linux
I want to install Electrum on my Ubuntu machine in my user's `$HOME` directory, i.e., not via `sudo`. If I run the command
```
pip install https://download.electrum.org/2.5.4/Electrum-2.5.4.tar.gz
```
I get an error when the installation tries to create a desktop shortcut:
```
copying electrum.desktop -> /usr/share/applications/
error: [Errno 13] Permission denied: '/usr/share/applications/electrum.desktop'
```
Maybe the installation script could `try` this copy operation and, if it fails, try again at `$HOME/.local/share/applications` (or, better, `$XDG_DATA_HOME/applications`).
If you agree this would make sense and point me to the file where I would need to do this, I'd be happy to provide a PR.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python2
2
3 # python setup.py sdist --format=zip,gztar
4
5 from setuptools import setup
6 import os
7 import sys
8 import platform
9 import imp
10
11 version = imp.load_source('version', 'lib/version.py')
12
13 if sys.version_info[:3] < (2, 7, 0):
14 sys.exit("Error: Electrum requires Python version >= 2.7.0...")
15
16 data_files = []
17
18 if platform.system() in ['Linux', 'FreeBSD', 'DragonFly']:
19 usr_share = os.path.join(sys.prefix, "share")
20 data_files += [
21 (os.path.join(usr_share, 'applications/'), ['electrum.desktop']),
22 (os.path.join(usr_share, 'pixmaps/'), ['icons/electrum.png'])
23 ]
24
25 setup(
26 name="Electrum",
27 version=version.ELECTRUM_VERSION,
28 install_requires=[
29 'slowaes>=0.1a1',
30 'ecdsa>=0.9',
31 'pbkdf2',
32 'requests',
33 'qrcode',
34 'protobuf',
35 'dnspython',
36 'jsonrpclib',
37 ],
38 packages=[
39 'electrum',
40 'electrum_gui',
41 'electrum_gui.qt',
42 'electrum_plugins',
43 'electrum_plugins.audio_modem',
44 'electrum_plugins.cosigner_pool',
45 'electrum_plugins.email_requests',
46 'electrum_plugins.exchange_rate',
47 'electrum_plugins.greenaddress_instant',
48 'electrum_plugins.keepkey',
49 'electrum_plugins.labels',
50 'electrum_plugins.ledger',
51 'electrum_plugins.plot',
52 'electrum_plugins.trezor',
53 'electrum_plugins.trustedcoin',
54 'electrum_plugins.virtualkeyboard',
55 ],
56 package_dir={
57 'electrum': 'lib',
58 'electrum_gui': 'gui',
59 'electrum_plugins': 'plugins',
60 },
61 package_data={
62 'electrum': [
63 'www/index.html',
64 'wordlist/*.txt',
65 'locale/*/LC_MESSAGES/electrum.mo',
66 ]
67 },
68 scripts=['electrum'],
69 data_files=data_files,
70 description="Lightweight Bitcoin Wallet",
71 author="Thomas Voegtlin",
72 author_email="[email protected]",
73 license="GNU GPLv3",
74 url="https://electrum.org",
75 long_description="""Lightweight Bitcoin Wallet"""
76 )
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
old mode 100644
new mode 100755
--- a/setup.py
+++ b/setup.py
@@ -17,6 +17,11 @@
if platform.system() in ['Linux', 'FreeBSD', 'DragonFly']:
usr_share = os.path.join(sys.prefix, "share")
+ if not os.access(usr_share, os.W_OK):
+ if 'XDG_DATA_HOME' in os.environ.keys():
+ usr_share = os.environ['$XDG_DATA_HOME']
+ else:
+ usr_share = os.path.expanduser('~/.local/share')
data_files += [
(os.path.join(usr_share, 'applications/'), ['electrum.desktop']),
(os.path.join(usr_share, 'pixmaps/'), ['icons/electrum.png'])
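For illustration, here is the fallback logic from the diff above as a self-contained function. One caveat worth flagging: the recorded diff reads `os.environ['$XDG_DATA_HOME']` with a literal `$` in the key, which would raise `KeyError` in practice (the environment key is `XDG_DATA_HOME`, not `$XDG_DATA_HOME`); the sketch below uses the plain variable name instead.

```python
import os
import sys


def share_prefix() -> str:
    """Return a writable share/ prefix for data_files.

    Prefers sys.prefix/share; falls back to $XDG_DATA_HOME or
    ~/.local/share for non-root installs, mirroring the intent of
    the patch above.
    """
    usr_share = os.path.join(sys.prefix, "share")
    if os.access(usr_share, os.W_OK):
        return usr_share
    return os.environ.get("XDG_DATA_HOME", os.path.expanduser("~/.local/share"))


print(share_prefix())
```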
| {"golden_diff": "diff --git a/setup.py b/setup.py\nold mode 100644\nnew mode 100755\n--- a/setup.py\n+++ b/setup.py\n@@ -17,6 +17,11 @@\n \n if platform.system() in ['Linux', 'FreeBSD', 'DragonFly']:\n usr_share = os.path.join(sys.prefix, \"share\")\n+ if not os.access(usr_share, os.W_OK):\n+ if 'XDG_DATA_HOME' in os.environ.keys():\n+ usr_share = os.environ['$XDG_DATA_HOME']\n+ else:\n+ usr_share = os.path.expanduser('~/.local/share')\n data_files += [\n (os.path.join(usr_share, 'applications/'), ['electrum.desktop']),\n (os.path.join(usr_share, 'pixmaps/'), ['icons/electrum.png'])\n", "issue": "allow non-root installation on Linux\nI want to install Electrum on my Ubuntu machine in my user's `$HOME` directory, i.e., not via `sudo`. If I run the command\n\n```\npip install https://download.electrum.org/2.5.4/Electrum-2.5.4.tar.gz\n```\n\nI get an error when the installation tries to create a desktop shortcut:\n\n```\ncopying electrum.desktop -> /usr/share/applications/\n\nerror: [Errno 13] Permission denied: '/usr/share/applications/electrum.desktop'\n```\n\nMaybe the installation script could `try` if this copy operation works, and try again at `$HOME/.local/share/applications` (or, better, `$XDG_DATA_HOME/applications`) in case it doesn't.\n\nIf you agree this would make sense and point me to the file where I would need to do this, I'd be happy to provide a PR.\n\n", "before_files": [{"content": "#!/usr/bin/env python2\n\n# python setup.py sdist --format=zip,gztar\n\nfrom setuptools import setup\nimport os\nimport sys\nimport platform\nimport imp\n\nversion = imp.load_source('version', 'lib/version.py')\n\nif sys.version_info[:3] < (2, 7, 0):\n sys.exit(\"Error: Electrum requires Python version >= 2.7.0...\")\n\ndata_files = []\n\nif platform.system() in ['Linux', 'FreeBSD', 'DragonFly']:\n usr_share = os.path.join(sys.prefix, \"share\")\n data_files += [\n (os.path.join(usr_share, 'applications/'), ['electrum.desktop']),\n (os.path.join(usr_share, 'pixmaps/'), ['icons/electrum.png'])\n ]\n\nsetup(\n name=\"Electrum\",\n version=version.ELECTRUM_VERSION,\n install_requires=[\n 'slowaes>=0.1a1',\n 'ecdsa>=0.9',\n 'pbkdf2',\n 'requests',\n 'qrcode',\n 'protobuf',\n 'dnspython',\n 'jsonrpclib',\n ],\n packages=[\n 'electrum',\n 'electrum_gui',\n 'electrum_gui.qt',\n 'electrum_plugins',\n 'electrum_plugins.audio_modem',\n 'electrum_plugins.cosigner_pool',\n 'electrum_plugins.email_requests',\n 'electrum_plugins.exchange_rate',\n 'electrum_plugins.greenaddress_instant',\n 'electrum_plugins.keepkey',\n 'electrum_plugins.labels',\n 'electrum_plugins.ledger',\n 'electrum_plugins.plot',\n 'electrum_plugins.trezor',\n 'electrum_plugins.trustedcoin',\n 'electrum_plugins.virtualkeyboard',\n ],\n package_dir={\n 'electrum': 'lib',\n 'electrum_gui': 'gui',\n 'electrum_plugins': 'plugins',\n },\n package_data={\n 'electrum': [\n 'www/index.html',\n 'wordlist/*.txt',\n 'locale/*/LC_MESSAGES/electrum.mo',\n ]\n },\n scripts=['electrum'],\n data_files=data_files,\n description=\"Lightweight Bitcoin Wallet\",\n author=\"Thomas Voegtlin\",\n author_email=\"[email protected]\",\n license=\"GNU GPLv3\",\n url=\"https://electrum.org\",\n long_description=\"\"\"Lightweight Bitcoin Wallet\"\"\"\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python2\n\n# python setup.py sdist --format=zip,gztar\n\nfrom setuptools import setup\nimport os\nimport sys\nimport platform\nimport imp\n\nversion = imp.load_source('version', 'lib/version.py')\n\nif sys.version_info[:3] < (2, 7, 0):\n 
sys.exit(\"Error: Electrum requires Python version >= 2.7.0...\")\n\ndata_files = []\n\nif platform.system() in ['Linux', 'FreeBSD', 'DragonFly']:\n usr_share = os.path.join(sys.prefix, \"share\")\n if not os.access(usr_share, os.W_OK):\n if 'XDG_DATA_HOME' in os.environ.keys():\n usr_share = os.environ['$XDG_DATA_HOME']\n else:\n usr_share = os.path.expanduser('~/.local/share')\n data_files += [\n (os.path.join(usr_share, 'applications/'), ['electrum.desktop']),\n (os.path.join(usr_share, 'pixmaps/'), ['icons/electrum.png'])\n ]\n\nsetup(\n name=\"Electrum\",\n version=version.ELECTRUM_VERSION,\n install_requires=[\n 'slowaes>=0.1a1',\n 'ecdsa>=0.9',\n 'pbkdf2',\n 'requests',\n 'qrcode',\n 'protobuf',\n 'dnspython',\n 'jsonrpclib',\n ],\n packages=[\n 'electrum',\n 'electrum_gui',\n 'electrum_gui.qt',\n 'electrum_plugins',\n 'electrum_plugins.audio_modem',\n 'electrum_plugins.cosigner_pool',\n 'electrum_plugins.email_requests',\n 'electrum_plugins.exchange_rate',\n 'electrum_plugins.greenaddress_instant',\n 'electrum_plugins.keepkey',\n 'electrum_plugins.labels',\n 'electrum_plugins.ledger',\n 'electrum_plugins.plot',\n 'electrum_plugins.trezor',\n 'electrum_plugins.trustedcoin',\n 'electrum_plugins.virtualkeyboard',\n ],\n package_dir={\n 'electrum': 'lib',\n 'electrum_gui': 'gui',\n 'electrum_plugins': 'plugins',\n },\n package_data={\n 'electrum': [\n 'www/index.html',\n 'wordlist/*.txt',\n 'locale/*/LC_MESSAGES/electrum.mo',\n ]\n },\n scripts=['electrum'],\n data_files=data_files,\n description=\"Lightweight Bitcoin Wallet\",\n author=\"Thomas Voegtlin\",\n author_email=\"[email protected]\",\n license=\"GNU GPLv3\",\n url=\"https://electrum.org\",\n long_description=\"\"\"Lightweight Bitcoin Wallet\"\"\"\n)\n", "path": "setup.py"}]} | 1,117 | 187 |
gh_patches_debug_21674 | rasdani/github-patches | git_diff | fedora-infra__bodhi-2007
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bodhi-dequeue-stable dies if any update in the queue is no longer eligible to go stable
QuLogic from Freenode reported today that batched updates didn't go stable at 03:00 UTC like they should have. I confirmed that the cron job ran, but I didn't see any notes about its output. I then ran the command by hand and received this output:
```
[bowlofeggs@bodhi-backend01 ~][PROD]$ sudo -u apache /usr/bin/bodhi-dequeue-stable
No handlers could be found for logger "bodhi.server"
This update has not yet met the minimum testing requirements defined in the <a href="https://fedoraproject.org/wiki/Package_update_acceptance_criteria">Package Update Acceptance Criteria</a>
```
The [```dequeue_stable()```](https://github.com/fedora-infra/bodhi/blob/3.0.0/bodhi/server/scripts/dequeue_stable.py#L28-L46) function runs a large transaction with only a single try/except. It seems that some update in the queue no longer meets testing requirements (probably due to receiving a -1 karma after going to batched) and is raising an Exception when the tool attempts to mark it for stable. Since there is only one try/except handler, this causes the whole transaction to be rolled back.
It should be easy to fix this - we just need a try/except around each update.
Thanks to QuLogic from Freenode for reporting this issue to me.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bodhi/server/scripts/dequeue_stable.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright © 2017 Caleigh Runge-Hottman
3 #
4 # This file is part of Bodhi.
5 #
6 # This program is free software; you can redistribute it and/or
7 # modify it under the terms of the GNU General Public License
8 # as published by the Free Software Foundation; either version 2
9 # of the License, or (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this program; if not, write to the Free Software
18 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
19 """This script is responsible for moving all updates with a batched request to a stable request."""
20
21 import sys
22
23 import click
24
25 from bodhi.server import buildsys, config, models, Session, initialize_db
26
27
28 @click.command()
29 @click.version_option(message='%(version)s')
30 def dequeue_stable():
31 """Convert all batched requests to stable requests."""
32 initialize_db(config.config)
33 buildsys.setup_buildsystem(config.config)
34 db = Session()
35
36 try:
37 batched = db.query(models.Update).filter_by(request=models.UpdateRequest.batched).all()
38 for update in batched:
39 update.set_request(db, models.UpdateRequest.stable, u'bodhi')
40 db.commit()
41
42 except Exception as e:
43 print(str(e))
44 db.rollback()
45 Session.remove()
46 sys.exit(1)
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bodhi/server/scripts/dequeue_stable.py b/bodhi/server/scripts/dequeue_stable.py
--- a/bodhi/server/scripts/dequeue_stable.py
+++ b/bodhi/server/scripts/dequeue_stable.py
@@ -1,5 +1,5 @@
# -*- coding: utf-8 -*-
-# Copyright © 2017 Caleigh Runge-Hottman
+# Copyright © 2017 Caleigh Runge-Hottman and Red Hat, Inc.
#
# This file is part of Bodhi.
#
@@ -36,11 +36,17 @@
try:
batched = db.query(models.Update).filter_by(request=models.UpdateRequest.batched).all()
for update in batched:
- update.set_request(db, models.UpdateRequest.stable, u'bodhi')
- db.commit()
-
+ try:
+ update.set_request(db, models.UpdateRequest.stable, u'bodhi')
+ db.commit()
+ except Exception as e:
+ print('Unable to stabilize {}: {}'.format(update.alias, str(e)))
+ db.rollback()
+ msg = u"Bodhi is unable to request this update for stabilization: {}"
+ update.comment(db, msg.format(str(e)), author=u'bodhi')
+ db.commit()
except Exception as e:
print(str(e))
- db.rollback()
- Session.remove()
sys.exit(1)
+ finally:
+ Session.remove()
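The essence of the fix is moving from one transaction around the whole loop to a per-update try/except, so a single ineligible update can no longer roll back the entire batch. A generic sketch of that pattern (the `session`, `items`, and `action` names are hypothetical, not Bodhi's API):

```python
def process_batch(session, items, action):
    """Apply action to each item, committing per item.

    One failing item is rolled back and recorded without aborting the
    remaining items -- the isolation the patch above introduces.
    """
    failures = []
    for item in items:
        try:
            action(item)
            session.commit()
        except Exception as exc:
            session.rollback()
            failures.append((item, exc))
    return failures
```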
| {"golden_diff": "diff --git a/bodhi/server/scripts/dequeue_stable.py b/bodhi/server/scripts/dequeue_stable.py\n--- a/bodhi/server/scripts/dequeue_stable.py\n+++ b/bodhi/server/scripts/dequeue_stable.py\n@@ -1,5 +1,5 @@\n # -*- coding: utf-8 -*-\n-# Copyright \u00a9 2017 Caleigh Runge-Hottman\n+# Copyright \u00a9 2017 Caleigh Runge-Hottman and Red Hat, Inc.\n #\n # This file is part of Bodhi.\n #\n@@ -36,11 +36,17 @@\n try:\n batched = db.query(models.Update).filter_by(request=models.UpdateRequest.batched).all()\n for update in batched:\n- update.set_request(db, models.UpdateRequest.stable, u'bodhi')\n- db.commit()\n-\n+ try:\n+ update.set_request(db, models.UpdateRequest.stable, u'bodhi')\n+ db.commit()\n+ except Exception as e:\n+ print('Unable to stabilize {}: {}'.format(update.alias, str(e)))\n+ db.rollback()\n+ msg = u\"Bodhi is unable to request this update for stabilization: {}\"\n+ update.comment(db, msg.format(str(e)), author=u'bodhi')\n+ db.commit()\n except Exception as e:\n print(str(e))\n- db.rollback()\n- Session.remove()\n sys.exit(1)\n+ finally:\n+ Session.remove()\n", "issue": "bodhi-dequqe-stable dies if any update in the queue is no longer eligible to go stable\nQuLogic from Freenode reported today that batched updates didn't go stable at 03:00 UTC like they should have. I confirmed that the cron job ran, but I didn't see any notes about its output. I then ran the command by hand and received this output:\r\n\r\n```\r\n[bowlofeggs@bodhi-backend01 ~][PROD]$ sudo -u apache /usr/bin/bodhi-dequeue-stable\r\nNo handlers could be found for logger \"bodhi.server\"\r\nThis update has not yet met the minimum testing requirements defined in the <a href=\"https://fedoraproject.org/wiki/Package_update_acceptance_criteria\">Package Update Acceptance Criteria</a>\r\n```\r\n\r\nThe [```dequeue_stable()```](https://github.com/fedora-infra/bodhi/blob/3.0.0/bodhi/server/scripts/dequeue_stable.py#L28-L46) function runs a large transaction with only a single try/except. It seems that some update in the queue no longer meets testing requirements (probably due to receiving a -1 karma after going to batched) and is raising an Exception when the tool attempts to mark it for stable. Since there is only one try/except handler, this causes the whole transaction to be rolled back.\r\n\r\nIt should be easy to fix this - we just need a try/except around each update.\r\n\r\nThanks to QuLogic from Freenode for reporting this issue to me.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright \u00a9 2017 Caleigh Runge-Hottman\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"This script is responsible for moving all updates with a batched request to a stable request.\"\"\"\n\nimport sys\n\nimport click\n\nfrom bodhi.server import buildsys, config, models, Session, initialize_db\n\n\[email protected]()\[email protected]_option(message='%(version)s')\ndef dequeue_stable():\n \"\"\"Convert all batched requests to stable requests.\"\"\"\n initialize_db(config.config)\n buildsys.setup_buildsystem(config.config)\n db = Session()\n\n try:\n batched = db.query(models.Update).filter_by(request=models.UpdateRequest.batched).all()\n for update in batched:\n update.set_request(db, models.UpdateRequest.stable, u'bodhi')\n db.commit()\n\n except Exception as e:\n print(str(e))\n db.rollback()\n Session.remove()\n sys.exit(1)\n", "path": "bodhi/server/scripts/dequeue_stable.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright \u00a9 2017 Caleigh Runge-Hottman and Red Hat, Inc.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"This script is responsible for moving all updates with a batched request to a stable request.\"\"\"\n\nimport sys\n\nimport click\n\nfrom bodhi.server import buildsys, config, models, Session, initialize_db\n\n\[email protected]()\[email protected]_option(message='%(version)s')\ndef dequeue_stable():\n \"\"\"Convert all batched requests to stable requests.\"\"\"\n initialize_db(config.config)\n buildsys.setup_buildsystem(config.config)\n db = Session()\n\n try:\n batched = db.query(models.Update).filter_by(request=models.UpdateRequest.batched).all()\n for update in batched:\n try:\n update.set_request(db, models.UpdateRequest.stable, u'bodhi')\n db.commit()\n except Exception as e:\n print('Unable to stabilize {}: {}'.format(update.alias, str(e)))\n db.rollback()\n msg = u\"Bodhi is unable to request this update for stabilization: {}\"\n update.comment(db, msg.format(str(e)), author=u'bodhi')\n db.commit()\n except Exception as e:\n print(str(e))\n sys.exit(1)\n finally:\n Session.remove()\n", "path": "bodhi/server/scripts/dequeue_stable.py"}]} | 1,057 | 319 |
gh_patches_debug_57022 | rasdani/github-patches | git_diff | astronomer__astro-sdk-1401
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Doc: Cross link to API reference page from Operators page
Currently there is no way to jump from https://astro-sdk-python.readthedocs.io/en/stable/astro/sql/operators/get_value_list.html (or from the other operators listed on https://astro-sdk-python.readthedocs.io/en/stable/guides/operators.html) to the corresponding function/operator definitions at https://astro-sdk-python.readthedocs.io/en/stable/autoapi/index.html
We should cross-link them using https://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html#cross-referencing-syntax
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python-sdk/src/astro/sql/__init__.py`
Content:
```
1 from airflow.configuration import conf
2 from airflow.decorators.base import get_unique_task_id
3 from airflow.models.xcom_arg import XComArg
4
5 from astro.sql.operators.append import AppendOperator, append
6 from astro.sql.operators.cleanup import CleanupOperator, cleanup
7 from astro.sql.operators.dataframe import DataframeOperator, dataframe
8 from astro.sql.operators.drop import DropTableOperator, drop_table
9 from astro.sql.operators.export_file import ExportFileOperator, export_file
10 from astro.sql.operators.load_file import LoadFileOperator, load_file
11 from astro.sql.operators.merge import MergeOperator, merge
12 from astro.sql.operators.raw_sql import RawSQLOperator, run_raw_sql
13 from astro.sql.operators.transform import TransformOperator, transform, transform_file
14 from astro.table import Metadata, Table
15
16 __all__ = [
17 "AppendOperator",
18 "append",
19 "CleanupOperator",
20 "cleanup",
21 "DataframeOperator",
22 "dataframe",
23 "DropTableOperator",
24 "drop_table",
25 "ExportFileOperator",
26 "export_file",
27 "LoadFileOperator",
28 "load_file",
29 "MergeOperator",
30 "merge",
31 "Metadata",
32 "run_raw_sql",
33 "Table",
34 "TransformOperator",
35 "transform_file",
36 "transform",
37 ]
38
39
40 def get_value_list(sql: str, conn_id: str, **kwargs) -> XComArg:
41 """
42 Execute a sql statement and return the result.
43 By default, the response size is less than equal to value of ``max_map_length`` conf.
44 You can call a callable handler to alter the response by default it call ``fetchall`` on database result set.
45
46
47 :param sql: sql query to execute.
48 If the sql query will return huge number of row then it can overload the XCOM.
49 also, If you are using output of this method to expand a task using dynamic task map then
50 it can create lots of parallel task. So it is advisable to limit your sql query statement.
51 :param conn_id: Airflow connection id. This connection id will be used to identify the database client
52 and connect with it at runtime
53 """
54 handler = kwargs.get("handler") or (lambda result_set: result_set.fetchall())
55 max_map_length = int(conf.get(section="core", key="max_map_length"))
56 op_kwargs = {
57 "handler": handler,
58 "response_limit": max_map_length,
59 }
60 task_id = kwargs.get("task_id") or get_unique_task_id(
61 "get_value_list", dag=kwargs.get("dag"), task_group=kwargs.get("task_group")
62 )
63 kwargs.update({"task_id": task_id})
64 return RawSQLOperator(
65 sql=sql, conn_id=conn_id, op_kwargs=op_kwargs, python_callable=(lambda *args: None), **kwargs
66 ).output
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python-sdk/src/astro/sql/__init__.py b/python-sdk/src/astro/sql/__init__.py
--- a/python-sdk/src/astro/sql/__init__.py
+++ b/python-sdk/src/astro/sql/__init__.py
@@ -24,6 +24,7 @@
"drop_table",
"ExportFileOperator",
"export_file",
+ "get_value_list",
"LoadFileOperator",
"load_file",
"MergeOperator",
| {"golden_diff": "diff --git a/python-sdk/src/astro/sql/__init__.py b/python-sdk/src/astro/sql/__init__.py\n--- a/python-sdk/src/astro/sql/__init__.py\n+++ b/python-sdk/src/astro/sql/__init__.py\n@@ -24,6 +24,7 @@\n \"drop_table\",\n \"ExportFileOperator\",\n \"export_file\",\n+ \"get_value_list\",\n \"LoadFileOperator\",\n \"load_file\",\n \"MergeOperator\",\n", "issue": "Doc: Cross link to API reference page from Operators page\nCurrently there is no way to jump to Func/Operator definition from https://astro-sdk-python.readthedocs.io/en/stable/astro/sql/operators/get_value_list.html (and other operators listed on https://astro-sdk-python.readthedocs.io/en/stable/guides/operators.html) to their definition https://astro-sdk-python.readthedocs.io/en/stable/autoapi/index.html\r\n\r\nWe should cross-link them using https://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html#cross-referencing-syntax\n", "before_files": [{"content": "from airflow.configuration import conf\nfrom airflow.decorators.base import get_unique_task_id\nfrom airflow.models.xcom_arg import XComArg\n\nfrom astro.sql.operators.append import AppendOperator, append\nfrom astro.sql.operators.cleanup import CleanupOperator, cleanup\nfrom astro.sql.operators.dataframe import DataframeOperator, dataframe\nfrom astro.sql.operators.drop import DropTableOperator, drop_table\nfrom astro.sql.operators.export_file import ExportFileOperator, export_file\nfrom astro.sql.operators.load_file import LoadFileOperator, load_file\nfrom astro.sql.operators.merge import MergeOperator, merge\nfrom astro.sql.operators.raw_sql import RawSQLOperator, run_raw_sql\nfrom astro.sql.operators.transform import TransformOperator, transform, transform_file\nfrom astro.table import Metadata, Table\n\n__all__ = [\n \"AppendOperator\",\n \"append\",\n \"CleanupOperator\",\n \"cleanup\",\n \"DataframeOperator\",\n \"dataframe\",\n \"DropTableOperator\",\n \"drop_table\",\n \"ExportFileOperator\",\n \"export_file\",\n \"LoadFileOperator\",\n \"load_file\",\n \"MergeOperator\",\n \"merge\",\n \"Metadata\",\n \"run_raw_sql\",\n \"Table\",\n \"TransformOperator\",\n \"transform_file\",\n \"transform\",\n]\n\n\ndef get_value_list(sql: str, conn_id: str, **kwargs) -> XComArg:\n \"\"\"\n Execute a sql statement and return the result.\n By default, the response size is less than equal to value of ``max_map_length`` conf.\n You can call a callable handler to alter the response by default it call ``fetchall`` on database result set.\n\n\n :param sql: sql query to execute.\n If the sql query will return huge number of row then it can overload the XCOM.\n also, If you are using output of this method to expand a task using dynamic task map then\n it can create lots of parallel task. So it is advisable to limit your sql query statement.\n :param conn_id: Airflow connection id. 
This connection id will be used to identify the database client\n and connect with it at runtime\n \"\"\"\n handler = kwargs.get(\"handler\") or (lambda result_set: result_set.fetchall())\n max_map_length = int(conf.get(section=\"core\", key=\"max_map_length\"))\n op_kwargs = {\n \"handler\": handler,\n \"response_limit\": max_map_length,\n }\n task_id = kwargs.get(\"task_id\") or get_unique_task_id(\n \"get_value_list\", dag=kwargs.get(\"dag\"), task_group=kwargs.get(\"task_group\")\n )\n kwargs.update({\"task_id\": task_id})\n return RawSQLOperator(\n sql=sql, conn_id=conn_id, op_kwargs=op_kwargs, python_callable=(lambda *args: None), **kwargs\n ).output\n", "path": "python-sdk/src/astro/sql/__init__.py"}], "after_files": [{"content": "from airflow.configuration import conf\nfrom airflow.decorators.base import get_unique_task_id\nfrom airflow.models.xcom_arg import XComArg\n\nfrom astro.sql.operators.append import AppendOperator, append\nfrom astro.sql.operators.cleanup import CleanupOperator, cleanup\nfrom astro.sql.operators.dataframe import DataframeOperator, dataframe\nfrom astro.sql.operators.drop import DropTableOperator, drop_table\nfrom astro.sql.operators.export_file import ExportFileOperator, export_file\nfrom astro.sql.operators.load_file import LoadFileOperator, load_file\nfrom astro.sql.operators.merge import MergeOperator, merge\nfrom astro.sql.operators.raw_sql import RawSQLOperator, run_raw_sql\nfrom astro.sql.operators.transform import TransformOperator, transform, transform_file\nfrom astro.table import Metadata, Table\n\n__all__ = [\n \"AppendOperator\",\n \"append\",\n \"CleanupOperator\",\n \"cleanup\",\n \"DataframeOperator\",\n \"dataframe\",\n \"DropTableOperator\",\n \"drop_table\",\n \"ExportFileOperator\",\n \"export_file\",\n \"get_value_list\",\n \"LoadFileOperator\",\n \"load_file\",\n \"MergeOperator\",\n \"merge\",\n \"Metadata\",\n \"run_raw_sql\",\n \"Table\",\n \"TransformOperator\",\n \"transform_file\",\n \"transform\",\n]\n\n\ndef get_value_list(sql: str, conn_id: str, **kwargs) -> XComArg:\n \"\"\"\n Execute a sql statement and return the result.\n By default, the response size is less than equal to value of ``max_map_length`` conf.\n You can call a callable handler to alter the response by default it call ``fetchall`` on database result set.\n\n\n :param sql: sql query to execute.\n If the sql query will return huge number of row then it can overload the XCOM.\n also, If you are using output of this method to expand a task using dynamic task map then\n it can create lots of parallel task. So it is advisable to limit your sql query statement.\n :param conn_id: Airflow connection id. This connection id will be used to identify the database client\n and connect with it at runtime\n \"\"\"\n handler = kwargs.get(\"handler\") or (lambda result_set: result_set.fetchall())\n max_map_length = int(conf.get(section=\"core\", key=\"max_map_length\"))\n op_kwargs = {\n \"handler\": handler,\n \"response_limit\": max_map_length,\n }\n task_id = kwargs.get(\"task_id\") or get_unique_task_id(\n \"get_value_list\", dag=kwargs.get(\"dag\"), task_group=kwargs.get(\"task_group\")\n )\n kwargs.update({\"task_id\": task_id})\n return RawSQLOperator(\n sql=sql, conn_id=conn_id, op_kwargs=op_kwargs, python_callable=(lambda *args: None), **kwargs\n ).output\n", "path": "python-sdk/src/astro/sql/__init__.py"}]} | 1,109 | 103 |
gh_patches_debug_5063 | rasdani/github-patches | git_diff | cltk__cltk-371
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inconsistent case of data directory
[The latin module](https://github.com/cltk/cltk/blob/36b6fe7aa23a8c4f41ef694ddb64b4b02534146c/cltk/corpus/latin/__init__.py#L20) tries to load the latin library from "~/CLTK_DATA", which fails on a case-sensitive filesystem.
All other modules seem to use "~/cltk_data", so that should be standardized.
(It would probably be best if the path was factored out and also configurable, but fixing at least that one inconsistency lets me actually import stuff without errors.)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cltk/corpus/latin/__init__.py`
Content:
```
1 # CLTK: Latin Corpus Readers
2
3 __author__ = 'Patrick J. Burns <[email protected]>'
4 __license__ = 'MIT License. See LICENSE.'
5
6 """
7 CLTK Latin corpus readers
8 """
9
10 import os.path
11 from nltk.corpus.reader.plaintext import PlaintextCorpusReader
12 from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters
13
14 from cltk.tokenize.sentence import TokenizeSentence
15 from cltk.tokenize.word import WordTokenizer
16
17 # Would like to have this search through a CLTK_DATA environment variable
18 # Better to use something like make_cltk_path in cltk.utils.file_operations?
19 home = os.path.expanduser('~')
20 cltk_path = os.path.join(home, 'CLTK_DATA')
21
22 word_tokenizer = WordTokenizer('latin')
23
24 if os.path.exists(cltk_path + 'latin/model/latin_models_cltk/tokenizers/sentence'):
25 sent_tokenizer = TokenizeSentence('latin')
26 else:
27 punkt_param = PunktParameters()
28 abbreviations = ['c', 'l', 'm', 'p', 'q', 't', 'ti', 'sex', 'a', 'd', 'cn', 'sp', "m'", 'ser', 'ap', 'n', 'v', 'k', 'mam', 'post', 'f', 'oct', 'opet', 'paul', 'pro', 'sert', 'st', 'sta', 'v', 'vol', 'vop']
29 punkt_param.abbrev_types = set(abbreviations)
30 sent_tokenizer = PunktSentenceTokenizer(punkt_param)
31
32 # Latin Library
33 try:
34 latinlibrary = PlaintextCorpusReader(cltk_path + '/latin/text/latin_text_latin_library',
35 '.*\.txt',
36 word_tokenizer=word_tokenizer,
37 sent_tokenizer=sent_tokenizer,
38 encoding='utf-8')
39 pass
40 except IOError as e:
41 print("Corpus not found. Please check that the Latin Library is installed in CLTK_DATA.")
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cltk/corpus/latin/__init__.py b/cltk/corpus/latin/__init__.py
--- a/cltk/corpus/latin/__init__.py
+++ b/cltk/corpus/latin/__init__.py
@@ -17,7 +17,7 @@
# Would like to have this search through a CLTK_DATA environment variable
# Better to use something like make_cltk_path in cltk.utils.file_operations?
home = os.path.expanduser('~')
-cltk_path = os.path.join(home, 'CLTK_DATA')
+cltk_path = os.path.join(home, 'cltk_data')
word_tokenizer = WordTokenizer('latin')
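Beyond the case fix, the reporter also suggested factoring the path out and making it configurable. A minimal sketch of that idea — an assumption of mine, not CLTK's actual API — would honor a `CLTK_DATA` environment variable and fall back to the lower-case default:

```python
import os


def get_cltk_data_dir() -> str:
    """Resolve the CLTK data directory.

    Uses the CLTK_DATA environment variable when set, otherwise the
    conventional ~/cltk_data (lower-case, matching the rest of the codebase).
    """
    return os.environ.get(
        "CLTK_DATA", os.path.join(os.path.expanduser("~"), "cltk_data")
    )


print(get_cltk_data_dir())
```

Using `os.path.join` throughout would also avoid the bare string concatenation (`cltk_path + 'latin/model/...'`) seen in the file above, which silently drops the path separator between the two pieces.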
| {"golden_diff": "diff --git a/cltk/corpus/latin/__init__.py b/cltk/corpus/latin/__init__.py\n--- a/cltk/corpus/latin/__init__.py\n+++ b/cltk/corpus/latin/__init__.py\n@@ -17,7 +17,7 @@\n # Would like to have this search through a CLTK_DATA environment variable\n # Better to use something like make_cltk_path in cltk.utils.file_operations?\n home = os.path.expanduser('~')\n-cltk_path = os.path.join(home, 'CLTK_DATA')\n+cltk_path = os.path.join(home, 'cltk_data')\n \n word_tokenizer = WordTokenizer('latin')\n", "issue": "Inconsistent case of data directory\n[The latin module](https://github.com/cltk/cltk/blob/36b6fe7aa23a8c4f41ef694ddb64b4b02534146c/cltk/corpus/latin/__init__.py#L20) tries to load the latin library from \"~/CLTK_DATA\", which fails on a case-sensitive filesystem.\n\nAll other modules seem to use \"~/cltk_data\", so that should be standardized. \n\n(It would probably be best if the path was factored out and also configurable, but fixing at least that one inconsistency lets me actually import stuff without errors.)\n\nInconsistent case of data directory\n[The latin module](https://github.com/cltk/cltk/blob/36b6fe7aa23a8c4f41ef694ddb64b4b02534146c/cltk/corpus/latin/__init__.py#L20) tries to load the latin library from \"~/CLTK_DATA\", which fails on a case-sensitive filesystem.\n\nAll other modules seem to use \"~/cltk_data\", so that should be standardized. \n\n(It would probably be best if the path was factored out and also configurable, but fixing at least that one inconsistency lets me actually import stuff without errors.)\n\n", "before_files": [{"content": "# CLTK: Latin Corpus Readers\n\n__author__ = 'Patrick J. Burns <[email protected]>'\n__license__ = 'MIT License. See LICENSE.'\n\n\"\"\"\nCLTK Latin corpus readers\n\"\"\"\n\nimport os.path\nfrom nltk.corpus.reader.plaintext import PlaintextCorpusReader\nfrom nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters\n\nfrom cltk.tokenize.sentence import TokenizeSentence\nfrom cltk.tokenize.word import WordTokenizer\n\n# Would like to have this search through a CLTK_DATA environment variable\n# Better to use something like make_cltk_path in cltk.utils.file_operations?\nhome = os.path.expanduser('~')\ncltk_path = os.path.join(home, 'CLTK_DATA')\n\nword_tokenizer = WordTokenizer('latin')\n\nif os.path.exists(cltk_path + 'latin/model/latin_models_cltk/tokenizers/sentence'):\n sent_tokenizer = TokenizeSentence('latin')\nelse:\n punkt_param = PunktParameters()\n abbreviations = ['c', 'l', 'm', 'p', 'q', 't', 'ti', 'sex', 'a', 'd', 'cn', 'sp', \"m'\", 'ser', 'ap', 'n', 'v', 'k', 'mam', 'post', 'f', 'oct', 'opet', 'paul', 'pro', 'sert', 'st', 'sta', 'v', 'vol', 'vop']\n punkt_param.abbrev_types = set(abbreviations)\n sent_tokenizer = PunktSentenceTokenizer(punkt_param)\n\n# Latin Library\ntry:\n latinlibrary = PlaintextCorpusReader(cltk_path + '/latin/text/latin_text_latin_library', \n '.*\\.txt',\n word_tokenizer=word_tokenizer, \n sent_tokenizer=sent_tokenizer, \n encoding='utf-8') \n pass\nexcept IOError as e:\n print(\"Corpus not found. Please check that the Latin Library is installed in CLTK_DATA.\")\n", "path": "cltk/corpus/latin/__init__.py"}], "after_files": [{"content": "# CLTK: Latin Corpus Readers\n\n__author__ = 'Patrick J. Burns <[email protected]>'\n__license__ = 'MIT License. 
See LICENSE.'\n\n\"\"\"\nCLTK Latin corpus readers\n\"\"\"\n\nimport os.path\nfrom nltk.corpus.reader.plaintext import PlaintextCorpusReader\nfrom nltk.tokenize.punkt import PunktSentenceTokenizer, PunktParameters\n\nfrom cltk.tokenize.sentence import TokenizeSentence\nfrom cltk.tokenize.word import WordTokenizer\n\n# Would like to have this search through a CLTK_DATA environment variable\n# Better to use something like make_cltk_path in cltk.utils.file_operations?\nhome = os.path.expanduser('~')\ncltk_path = os.path.join(home, 'cltk_data')\n\nword_tokenizer = WordTokenizer('latin')\n\nif os.path.exists(cltk_path + 'latin/model/latin_models_cltk/tokenizers/sentence'):\n sent_tokenizer = TokenizeSentence('latin')\nelse:\n punkt_param = PunktParameters()\n abbreviations = ['c', 'l', 'm', 'p', 'q', 't', 'ti', 'sex', 'a', 'd', 'cn', 'sp', \"m'\", 'ser', 'ap', 'n', 'v', 'k', 'mam', 'post', 'f', 'oct', 'opet', 'paul', 'pro', 'sert', 'st', 'sta', 'v', 'vol', 'vop']\n punkt_param.abbrev_types = set(abbreviations)\n sent_tokenizer = PunktSentenceTokenizer(punkt_param)\n\n# Latin Library\ntry:\n latinlibrary = PlaintextCorpusReader(cltk_path + '/latin/text/latin_text_latin_library', \n '.*\\.txt',\n word_tokenizer=word_tokenizer, \n sent_tokenizer=sent_tokenizer, \n encoding='utf-8') \n pass\nexcept IOError as e:\n print(\"Corpus not found. Please check that the Latin Library is installed in CLTK_DATA.\")\n", "path": "cltk/corpus/latin/__init__.py"}]} | 1,058 | 147 |
gh_patches_debug_35230 | rasdani/github-patches | git_diff | ocadotechnology__aimmo-191
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove "staff required" for aimmo pages when URL viewing restrictions are merged
Will require https://github.com/ocadotechnology/codeforlife-deploy-appengine/pull/35
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `players/urls.py`
Content:
```
1 from django.conf.urls import url
2 from django.views.generic import TemplateView
3 from django.contrib.auth.decorators import login_required
4 from django.contrib.admin.views.decorators import staff_member_required
5
6 from players import views
7
8 urlpatterns = [
9 url(r'^$', staff_member_required(TemplateView.as_view(template_name='players/home.html')), name='aimmo/home'),
10
11 url(r'^program/$', staff_member_required(login_required(TemplateView.as_view(template_name='players/program.html'))), name='aimmo/program'),
12 url(r'^watch/$', staff_member_required(views.WatchView.as_view()), name='aimmo/watch'),
13 url(r'^statistics/$', staff_member_required(TemplateView.as_view(template_name='players/statistics.html')), name='aimmo/statistics'),
14
15 url(r'^api/code/$', staff_member_required(views.code), name='aimmo/code'),
16 url(r'^api/games/$', views.games, name='aimmo/games'),
17
18 url(r'^jsreverse/$', 'django_js_reverse.views.urls_js', name='aimmo/js_reverse'), # TODO: Pull request to make django_js_reverse.urls
19 ]
20
```
Path: `example_project/example_project/settings.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Code for Life
3 #
4 # Copyright (C) 2015, Ocado Innovation Limited
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU Affero General Public License as
8 # published by the Free Software Foundation, either version 3 of the
9 # License, or (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU Affero General Public License for more details.
15 #
16 # You should have received a copy of the GNU Affero General Public License
17 # along with this program. If not, see <http://www.gnu.org/licenses/>.
18 #
19 # ADDITIONAL TERMS – Section 7 GNU General Public Licence
20 #
21 # This licence does not grant any right, title or interest in any “Ocado” logos,
22 # trade names or the trademark “Ocado” or any other trademarks or domain names
23 # owned by Ocado Innovation Limited or the Ocado group of companies or any other
24 # distinctive brand features of “Ocado” as may be secured from time to time. You
25 # must not distribute any modification of this program using the trademark
26 # “Ocado” or claim any affiliation or association with Ocado or its employees.
27 #
28 # You are not authorised to use the name Ocado (or any of its trade names) or
29 # the names of any author or contributor in advertising or for publicity purposes
30 # pertaining to the distribution of this program, without the prior written
31 # authorisation of Ocado.
32 #
33 # Any propagation, distribution or conveyance of this program must include this
34 # copyright notice and these terms. You must not misrepresent the origins of this
35 # program; modified versions of the program must be marked as such and not
36 # identified as the original program.
37 """Django settings for example_project project."""
38 import os
39 import subprocess
40
41 DEBUG = True
42
43 DATABASES = {
44 'default': {
45 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
46 'NAME': os.path.join(os.path.abspath(os.path.dirname(__file__)), 'db.sqlite3'), # Or path to database file if using sqlite3.
47 }
48 }
49
50 USE_I18N = True
51 USE_L10N = True
52
53 TIME_ZONE = 'Europe/London'
54 LANGUAGE_CODE = 'en-gb'
55 STATIC_ROOT = os.path.join(os.path.dirname(__file__), 'static')
56 STATIC_URL = '/static/'
57 SECRET_KEY = 'not-a-secret'
58
59 ROOT_URLCONF = 'django_autoconfig.autourlconf'
60
61 WSGI_APPLICATION = 'example_project.wsgi.application'
62
63 INSTALLED_APPS = (
64 'django.contrib.admin',
65 'players',
66 )
67
68 LOGGING = {
69 'version': 1,
70 'disable_existing_loggers': False,
71 'filters': {
72 'require_debug_false': {
73 '()': 'django.utils.log.RequireDebugFalse'
74 }
75 },
76 'handlers': {
77 'console': {
78 'level': 'DEBUG',
79 'class': 'logging.StreamHandler'
80 },
81 },
82 'loggers': {
83 'views': {
84 'handlers': ['console'],
85 'level': 'DEBUG'
86 },
87 }
88 }
89
90
91 def get_url(game):
92 if os.environ.get('AIMMO_MODE', '') == 'minikube':
93 output = subprocess.check_output(['./test-bin/minikube', 'service', 'game-%s' % game, '--url'])
94 return (output.strip(), '/game/%s/socket.io' % game)
95 else:
96 return ('http://localhost:5000', '/socket.io')
97
98 AIMMO_GAME_SERVER_LOCATION_FUNCTION = get_url
99
100 try:
101 from example_project.local_settings import * # pylint: disable=E0611
102 except ImportError:
103 pass
104
105 from django_autoconfig import autoconfig
106 autoconfig.configure_settings(globals())
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/example_project/example_project/settings.py b/example_project/example_project/settings.py
--- a/example_project/example_project/settings.py
+++ b/example_project/example_project/settings.py
@@ -62,6 +62,8 @@
INSTALLED_APPS = (
'django.contrib.admin',
+ 'django.contrib.auth',
+ 'django.contrib.contenttypes',
'players',
)
@@ -87,6 +89,16 @@
}
}
+LOGIN_URL = '/players/accounts/login/'
+
+MIDDLEWARE_CLASSES = [
+ 'django.contrib.sessions.middleware.SessionMiddleware',
+ 'django.middleware.locale.LocaleMiddleware',
+ 'django.middleware.common.CommonMiddleware',
+ 'django.middleware.csrf.CsrfViewMiddleware',
+ 'django.contrib.auth.middleware.AuthenticationMiddleware',
+ 'django.contrib.messages.middleware.MessageMiddleware',
+]
def get_url(game):
if os.environ.get('AIMMO_MODE', '') == 'minikube':
diff --git a/players/urls.py b/players/urls.py
--- a/players/urls.py
+++ b/players/urls.py
@@ -1,18 +1,19 @@
from django.conf.urls import url
from django.views.generic import TemplateView
+from django.contrib.auth import views as auth_views
from django.contrib.auth.decorators import login_required
-from django.contrib.admin.views.decorators import staff_member_required
from players import views
urlpatterns = [
- url(r'^$', staff_member_required(TemplateView.as_view(template_name='players/home.html')), name='aimmo/home'),
+ url(r'^$', TemplateView.as_view(template_name='players/home.html'), name='aimmo/home'),
- url(r'^program/$', staff_member_required(login_required(TemplateView.as_view(template_name='players/program.html'))), name='aimmo/program'),
- url(r'^watch/$', staff_member_required(views.WatchView.as_view()), name='aimmo/watch'),
- url(r'^statistics/$', staff_member_required(TemplateView.as_view(template_name='players/statistics.html')), name='aimmo/statistics'),
+ url(r'^program/$', login_required(TemplateView.as_view(template_name='players/program.html')), name='aimmo/program'),
+ url(r'^accounts/login/$', auth_views.login),
+ url(r'^watch/$', views.WatchView.as_view(), name='aimmo/watch'),
+ url(r'^statistics/$', TemplateView.as_view(template_name='players/statistics.html'), name='aimmo/statistics'),
- url(r'^api/code/$', staff_member_required(views.code), name='aimmo/code'),
+ url(r'^api/code/$', views.code, name='aimmo/code'),
url(r'^api/games/$', views.games, name='aimmo/games'),
url(r'^jsreverse/$', 'django_js_reverse.views.urls_js', name='aimmo/js_reverse'), # TODO: Pull request to make django_js_reverse.urls
| {"golden_diff": "diff --git a/example_project/example_project/settings.py b/example_project/example_project/settings.py\n--- a/example_project/example_project/settings.py\n+++ b/example_project/example_project/settings.py\n@@ -62,6 +62,8 @@\n \n INSTALLED_APPS = (\n 'django.contrib.admin',\n+ 'django.contrib.auth',\n+ 'django.contrib.contenttypes',\n 'players',\n )\n \n@@ -87,6 +89,16 @@\n }\n }\n \n+LOGIN_URL = '/players/accounts/login/'\n+\n+MIDDLEWARE_CLASSES = [\n+ 'django.contrib.sessions.middleware.SessionMiddleware',\n+ 'django.middleware.locale.LocaleMiddleware',\n+ 'django.middleware.common.CommonMiddleware',\n+ 'django.middleware.csrf.CsrfViewMiddleware',\n+ 'django.contrib.auth.middleware.AuthenticationMiddleware',\n+ 'django.contrib.messages.middleware.MessageMiddleware',\n+]\n \n def get_url(game):\n if os.environ.get('AIMMO_MODE', '') == 'minikube':\ndiff --git a/players/urls.py b/players/urls.py\n--- a/players/urls.py\n+++ b/players/urls.py\n@@ -1,18 +1,19 @@\n from django.conf.urls import url\n from django.views.generic import TemplateView\n+from django.contrib.auth import views as auth_views\n from django.contrib.auth.decorators import login_required\n-from django.contrib.admin.views.decorators import staff_member_required\n \n from players import views\n \n urlpatterns = [\n- url(r'^$', staff_member_required(TemplateView.as_view(template_name='players/home.html')), name='aimmo/home'),\n+ url(r'^$', TemplateView.as_view(template_name='players/home.html'), name='aimmo/home'),\n \n- url(r'^program/$', staff_member_required(login_required(TemplateView.as_view(template_name='players/program.html'))), name='aimmo/program'),\n- url(r'^watch/$', staff_member_required(views.WatchView.as_view()), name='aimmo/watch'),\n- url(r'^statistics/$', staff_member_required(TemplateView.as_view(template_name='players/statistics.html')), name='aimmo/statistics'),\n+ url(r'^program/$', login_required(TemplateView.as_view(template_name='players/program.html')), name='aimmo/program'),\n+ url(r'^accounts/login/$', auth_views.login),\n+ url(r'^watch/$', views.WatchView.as_view(), name='aimmo/watch'),\n+ url(r'^statistics/$', TemplateView.as_view(template_name='players/statistics.html'), name='aimmo/statistics'),\n \n- url(r'^api/code/$', staff_member_required(views.code), name='aimmo/code'),\n+ url(r'^api/code/$', views.code, name='aimmo/code'),\n url(r'^api/games/$', views.games, name='aimmo/games'),\n \n url(r'^jsreverse/$', 'django_js_reverse.views.urls_js', name='aimmo/js_reverse'), # TODO: Pull request to make django_js_reverse.urls\n", "issue": "Remove \"staff required\" for aimmo pages when URL viewing restrictions are merged\nWill require https://github.com/ocadotechnology/codeforlife-deploy-appengine/pull/35\n", "before_files": [{"content": "from django.conf.urls import url\nfrom django.views.generic import TemplateView\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.admin.views.decorators import staff_member_required\n\nfrom players import views\n\nurlpatterns = [\n url(r'^$', staff_member_required(TemplateView.as_view(template_name='players/home.html')), name='aimmo/home'),\n\n url(r'^program/$', staff_member_required(login_required(TemplateView.as_view(template_name='players/program.html'))), name='aimmo/program'),\n url(r'^watch/$', staff_member_required(views.WatchView.as_view()), name='aimmo/watch'),\n url(r'^statistics/$', staff_member_required(TemplateView.as_view(template_name='players/statistics.html')), name='aimmo/statistics'),\n\n 
url(r'^api/code/$', staff_member_required(views.code), name='aimmo/code'),\n url(r'^api/games/$', views.games, name='aimmo/games'),\n\n url(r'^jsreverse/$', 'django_js_reverse.views.urls_js', name='aimmo/js_reverse'), # TODO: Pull request to make django_js_reverse.urls\n]\n", "path": "players/urls.py"}, {"content": "# -*- coding: utf-8 -*-\n# Code for Life\n#\n# Copyright (C) 2015, Ocado Innovation Limited\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as\n# published by the Free Software Foundation, either version 3 of the\n# License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n# ADDITIONAL TERMS \u2013 Section 7 GNU General Public Licence\n#\n# This licence does not grant any right, title or interest in any \u201cOcado\u201d logos,\n# trade names or the trademark \u201cOcado\u201d or any other trademarks or domain names\n# owned by Ocado Innovation Limited or the Ocado group of companies or any other\n# distinctive brand features of \u201cOcado\u201d as may be secured from time to time. You\n# must not distribute any modification of this program using the trademark\n# \u201cOcado\u201d or claim any affiliation or association with Ocado or its employees.\n#\n# You are not authorised to use the name Ocado (or any of its trade names) or\n# the names of any author or contributor in advertising or for publicity purposes\n# pertaining to the distribution of this program, without the prior written\n# authorisation of Ocado.\n#\n# Any propagation, distribution or conveyance of this program must include this\n# copyright notice and these terms. 
You must not misrepresent the origins of this\n# program; modified versions of the program must be marked as such and not\n# identified as the original program.\n\"\"\"Django settings for example_project project.\"\"\"\nimport os\nimport subprocess\n\nDEBUG = True\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.\n 'NAME': os.path.join(os.path.abspath(os.path.dirname(__file__)), 'db.sqlite3'), # Or path to database file if using sqlite3.\n }\n}\n\nUSE_I18N = True\nUSE_L10N = True\n\nTIME_ZONE = 'Europe/London'\nLANGUAGE_CODE = 'en-gb'\nSTATIC_ROOT = os.path.join(os.path.dirname(__file__), 'static')\nSTATIC_URL = '/static/'\nSECRET_KEY = 'not-a-secret'\n\nROOT_URLCONF = 'django_autoconfig.autourlconf'\n\nWSGI_APPLICATION = 'example_project.wsgi.application'\n\nINSTALLED_APPS = (\n 'django.contrib.admin',\n 'players',\n)\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse'\n }\n },\n 'handlers': {\n 'console': {\n 'level': 'DEBUG',\n 'class': 'logging.StreamHandler'\n },\n },\n 'loggers': {\n 'views': {\n 'handlers': ['console'],\n 'level': 'DEBUG'\n },\n }\n}\n\n\ndef get_url(game):\n if os.environ.get('AIMMO_MODE', '') == 'minikube':\n output = subprocess.check_output(['./test-bin/minikube', 'service', 'game-%s' % game, '--url'])\n return (output.strip(), '/game/%s/socket.io' % game)\n else:\n return ('http://localhost:5000', '/socket.io')\n\nAIMMO_GAME_SERVER_LOCATION_FUNCTION = get_url\n\ntry:\n from example_project.local_settings import * # pylint: disable=E0611\nexcept ImportError:\n pass\n\nfrom django_autoconfig import autoconfig\nautoconfig.configure_settings(globals())\n", "path": "example_project/example_project/settings.py"}], "after_files": [{"content": "from django.conf.urls import url\nfrom django.views.generic import TemplateView\nfrom django.contrib.auth import views as auth_views\nfrom django.contrib.auth.decorators import login_required\n\nfrom players import views\n\nurlpatterns = [\n url(r'^$', TemplateView.as_view(template_name='players/home.html'), name='aimmo/home'),\n\n url(r'^program/$', login_required(TemplateView.as_view(template_name='players/program.html')), name='aimmo/program'),\n url(r'^accounts/login/$', auth_views.login),\n url(r'^watch/$', views.WatchView.as_view(), name='aimmo/watch'),\n url(r'^statistics/$', TemplateView.as_view(template_name='players/statistics.html'), name='aimmo/statistics'),\n\n url(r'^api/code/$', views.code, name='aimmo/code'),\n url(r'^api/games/$', views.games, name='aimmo/games'),\n\n url(r'^jsreverse/$', 'django_js_reverse.views.urls_js', name='aimmo/js_reverse'), # TODO: Pull request to make django_js_reverse.urls\n]\n", "path": "players/urls.py"}, {"content": "# -*- coding: utf-8 -*-\n# Code for Life\n#\n# Copyright (C) 2015, Ocado Innovation Limited\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as\n# published by the Free Software Foundation, either version 3 of the\n# License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n# ADDITIONAL TERMS \u2013 Section 7 GNU General Public Licence\n#\n# This licence does not grant any right, title or interest in any \u201cOcado\u201d logos,\n# trade names or the trademark \u201cOcado\u201d or any other trademarks or domain names\n# owned by Ocado Innovation Limited or the Ocado group of companies or any other\n# distinctive brand features of \u201cOcado\u201d as may be secured from time to time. You\n# must not distribute any modification of this program using the trademark\n# \u201cOcado\u201d or claim any affiliation or association with Ocado or its employees.\n#\n# You are not authorised to use the name Ocado (or any of its trade names) or\n# the names of any author or contributor in advertising or for publicity purposes\n# pertaining to the distribution of this program, without the prior written\n# authorisation of Ocado.\n#\n# Any propagation, distribution or conveyance of this program must include this\n# copyright notice and these terms. You must not misrepresent the origins of this\n# program; modified versions of the program must be marked as such and not\n# identified as the original program.\n\"\"\"Django settings for example_project project.\"\"\"\nimport os\nimport subprocess\n\nDEBUG = True\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.\n 'NAME': os.path.join(os.path.abspath(os.path.dirname(__file__)), 'db.sqlite3'), # Or path to database file if using sqlite3.\n }\n}\n\nUSE_I18N = True\nUSE_L10N = True\n\nTIME_ZONE = 'Europe/London'\nLANGUAGE_CODE = 'en-gb'\nSTATIC_ROOT = os.path.join(os.path.dirname(__file__), 'static')\nSTATIC_URL = '/static/'\nSECRET_KEY = 'not-a-secret'\n\nROOT_URLCONF = 'django_autoconfig.autourlconf'\n\nWSGI_APPLICATION = 'example_project.wsgi.application'\n\nINSTALLED_APPS = (\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'players',\n)\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse'\n }\n },\n 'handlers': {\n 'console': {\n 'level': 'DEBUG',\n 'class': 'logging.StreamHandler'\n },\n },\n 'loggers': {\n 'views': {\n 'handlers': ['console'],\n 'level': 'DEBUG'\n },\n }\n}\n\nLOGIN_URL = '/players/accounts/login/'\n\nMIDDLEWARE_CLASSES = [\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n]\n\ndef get_url(game):\n if os.environ.get('AIMMO_MODE', '') == 'minikube':\n output = subprocess.check_output(['./test-bin/minikube', 'service', 'game-%s' % game, '--url'])\n return (output.strip(), '/game/%s/socket.io' % game)\n else:\n return ('http://localhost:5000', '/socket.io')\n\nAIMMO_GAME_SERVER_LOCATION_FUNCTION = get_url\n\ntry:\n from example_project.local_settings import * # pylint: disable=E0611\nexcept ImportError:\n pass\n\nfrom django_autoconfig import autoconfig\nautoconfig.configure_settings(globals())\n", "path": "example_project/example_project/settings.py"}]} | 1,657 | 612 |
gh_patches_debug_18386 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-1168 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
When creating a plan, "Berlin" may be selected as the district
"Berlin" is part of the dropdown in the vorhaben create and update form
@MagdaN should i add this issue to the whiteboard or taiga, too?
--- END ISSUE ---
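Editor's note: a minimal reproduction sketch from a Django shell, assuming the models shown below and a fixture in which one `MapPreset` in the "Berlin" category is itself named "Berlin"; the form field simply exposes whatever `limit_choices_to` allows:

```python
# Hypothetical reproduction; the model import path is taken from the files below.
from meinberlin.apps.maps.models import MapPreset

# limit_choices_to={'category__name': 'Berlin'} keeps *every* preset in the
# "Berlin" category, including the city-wide preset named "Berlin" itself,
# so the district dropdown of the create/update form offers it as a choice.
district_choices = MapPreset.objects.filter(category__name='Berlin')
print('Berlin' in district_choices.values_list('name', flat=True))  # True on affected installs
```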
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/plans/models.py`
Content:
```
1 from ckeditor.fields import RichTextField
2 from django.conf import settings
3 from django.core.urlresolvers import reverse
4 from django.db import models
5 from django.utils.translation import ugettext_lazy as _
6
7 from adhocracy4 import transforms
8 from adhocracy4.maps import fields as map_fields
9 from adhocracy4.models.base import UserGeneratedContentModel
10 from adhocracy4.projects import models as project_models
11 from meinberlin.apps.maps.models import MapPreset
12
13 STATUS_TODO = 0
14 STATUS_PLANNING = 1
15 STATUS_IMPLEMENTATION = 2
16 STATUS_DONE = 3
17 STATUS_STOPPED = 4
18
19 PARTICIPATION_NO = 0
20 PARTICIPATION_YES = 1
21 PARTICIPATION_UNDECIDED = 2
22
23
24 class Plan(UserGeneratedContentModel):
25 title = models.CharField(max_length=120, verbose_name=_('Title'))
26 organisation = models.ForeignKey(
27 settings.A4_ORGANISATIONS_MODEL,
28 on_delete=models.CASCADE)
29 project = models.ForeignKey(project_models.Project, blank=True, null=True)
30 point = map_fields.PointField(
31 verbose_name=_('Where can the plan be located on a map?'),
32 help_text=_('Click inside marked area on the map to set a marker. '
33 'Drag and drop the marker to change its place. '
34 'Alternatively you can use the search field to search '
35 'for an address.'))
36 point_label = models.CharField(
37 blank=True,
38 default='',
39 max_length=255,
40 verbose_name=_('Label of the location'),
41 help_text=_('This could be an address or the name of a landmark.'),
42 )
43 district = models.ForeignKey(
44 MapPreset,
45 limit_choices_to={'category__name': 'Berlin'})
46 contact = models.TextField(max_length=255, verbose_name=_('Contact'))
47 cost = models.PositiveIntegerField(blank=True, null=True,
48 verbose_name=_('Cost'))
49 description = RichTextField(verbose_name=_('Description'), blank=True)
50 category = models.CharField(max_length=255, verbose_name=_('Type of plan'))
51 status = models.SmallIntegerField(choices=(
52 (STATUS_TODO, _('Idea')),
53 (STATUS_PLANNING, _('Planning')),
54 (STATUS_IMPLEMENTATION, _('Implementation')),
55 (STATUS_DONE, _('Done')),
56 (STATUS_STOPPED, _('Stopped')),
57 ))
58 participation = models.SmallIntegerField(choices=(
59 (PARTICIPATION_YES, _('Yes')),
60 (PARTICIPATION_NO, _('No')),
61 (PARTICIPATION_UNDECIDED, _('Still undecided')),
62 ))
63
64 class Meta:
65 ordering = ['-created']
66
67 @property
68 def reference_number(self):
69 return '{:d}-{:05d}'.format(self.created.year, self.pk)
70
71 def __str__(self):
72 return self.title
73
74 def get_absolute_url(self):
75 return reverse('meinberlin_plans:plan-detail',
76 kwargs=dict(pk=self.pk, year=self.created.year))
77
78 def save(self, *args, **kwargs):
79 self.description = transforms.clean_html_field(self.description)
80 super().save(*args, **kwargs)
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/plans/models.py b/meinberlin/apps/plans/models.py
--- a/meinberlin/apps/plans/models.py
+++ b/meinberlin/apps/plans/models.py
@@ -2,6 +2,7 @@
from django.conf import settings
from django.core.urlresolvers import reverse
from django.db import models
+from django.db.models import Q
from django.utils.translation import ugettext_lazy as _
from adhocracy4 import transforms
@@ -42,7 +43,7 @@
)
district = models.ForeignKey(
MapPreset,
- limit_choices_to={'category__name': 'Berlin'})
+ limit_choices_to=Q(category__name='Berlin') & ~Q(name='Berlin'))
contact = models.TextField(max_length=255, verbose_name=_('Contact'))
cost = models.PositiveIntegerField(blank=True, null=True,
verbose_name=_('Cost'))
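Editor's note: the patch swaps the dict form of `limit_choices_to` for a `Q` expression, because only `Q` objects can express negation. A short sketch of the queryset the patched field now produces (assuming the same models as above):

```python
from django.db.models import Q
from meinberlin.apps.maps.models import MapPreset

# The same filter the patched ForeignKey applies: presets in the "Berlin"
# category, minus the preset that is itself named "Berlin".
districts_only = MapPreset.objects.filter(Q(category__name='Berlin') & ~Q(name='Berlin'))
assert not districts_only.filter(name='Berlin').exists()
```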
| {"golden_diff": "diff --git a/meinberlin/apps/plans/models.py b/meinberlin/apps/plans/models.py\n--- a/meinberlin/apps/plans/models.py\n+++ b/meinberlin/apps/plans/models.py\n@@ -2,6 +2,7 @@\n from django.conf import settings\n from django.core.urlresolvers import reverse\n from django.db import models\n+from django.db.models import Q\n from django.utils.translation import ugettext_lazy as _\n \n from adhocracy4 import transforms\n@@ -42,7 +43,7 @@\n )\n district = models.ForeignKey(\n MapPreset,\n- limit_choices_to={'category__name': 'Berlin'})\n+ limit_choices_to=Q(category__name='Berlin') & ~Q(name='Berlin'))\n contact = models.TextField(max_length=255, verbose_name=_('Contact'))\n cost = models.PositiveIntegerField(blank=True, null=True,\n verbose_name=_('Cost'))\n", "issue": "On creating a plan, \"Berlin\" may be selected as district\n\"Berlin\" is part of the dropdown in the vorhaben create and update form\r\n\r\n@MagdaN should i add this issue to the whiteboard or taiga, too?\n", "before_files": [{"content": "from ckeditor.fields import RichTextField\nfrom django.conf import settings\nfrom django.core.urlresolvers import reverse\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom adhocracy4 import transforms\nfrom adhocracy4.maps import fields as map_fields\nfrom adhocracy4.models.base import UserGeneratedContentModel\nfrom adhocracy4.projects import models as project_models\nfrom meinberlin.apps.maps.models import MapPreset\n\nSTATUS_TODO = 0\nSTATUS_PLANNING = 1\nSTATUS_IMPLEMENTATION = 2\nSTATUS_DONE = 3\nSTATUS_STOPPED = 4\n\nPARTICIPATION_NO = 0\nPARTICIPATION_YES = 1\nPARTICIPATION_UNDECIDED = 2\n\n\nclass Plan(UserGeneratedContentModel):\n title = models.CharField(max_length=120, verbose_name=_('Title'))\n organisation = models.ForeignKey(\n settings.A4_ORGANISATIONS_MODEL,\n on_delete=models.CASCADE)\n project = models.ForeignKey(project_models.Project, blank=True, null=True)\n point = map_fields.PointField(\n verbose_name=_('Where can the plan be located on a map?'),\n help_text=_('Click inside marked area on the map to set a marker. '\n 'Drag and drop the marker to change its place. 
'\n 'Alternatively you can use the search field to search '\n 'for an address.'))\n point_label = models.CharField(\n blank=True,\n default='',\n max_length=255,\n verbose_name=_('Label of the location'),\n help_text=_('This could be an address or the name of a landmark.'),\n )\n district = models.ForeignKey(\n MapPreset,\n limit_choices_to={'category__name': 'Berlin'})\n contact = models.TextField(max_length=255, verbose_name=_('Contact'))\n cost = models.PositiveIntegerField(blank=True, null=True,\n verbose_name=_('Cost'))\n description = RichTextField(verbose_name=_('Description'), blank=True)\n category = models.CharField(max_length=255, verbose_name=_('Type of plan'))\n status = models.SmallIntegerField(choices=(\n (STATUS_TODO, _('Idea')),\n (STATUS_PLANNING, _('Planning')),\n (STATUS_IMPLEMENTATION, _('Implementation')),\n (STATUS_DONE, _('Done')),\n (STATUS_STOPPED, _('Stopped')),\n ))\n participation = models.SmallIntegerField(choices=(\n (PARTICIPATION_YES, _('Yes')),\n (PARTICIPATION_NO, _('No')),\n (PARTICIPATION_UNDECIDED, _('Still undecided')),\n ))\n\n class Meta:\n ordering = ['-created']\n\n @property\n def reference_number(self):\n return '{:d}-{:05d}'.format(self.created.year, self.pk)\n\n def __str__(self):\n return self.title\n\n def get_absolute_url(self):\n return reverse('meinberlin_plans:plan-detail',\n kwargs=dict(pk=self.pk, year=self.created.year))\n\n def save(self, *args, **kwargs):\n self.description = transforms.clean_html_field(self.description)\n super().save(*args, **kwargs)\n", "path": "meinberlin/apps/plans/models.py"}], "after_files": [{"content": "from ckeditor.fields import RichTextField\nfrom django.conf import settings\nfrom django.core.urlresolvers import reverse\nfrom django.db import models\nfrom django.db.models import Q\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom adhocracy4 import transforms\nfrom adhocracy4.maps import fields as map_fields\nfrom adhocracy4.models.base import UserGeneratedContentModel\nfrom adhocracy4.projects import models as project_models\nfrom meinberlin.apps.maps.models import MapPreset\n\nSTATUS_TODO = 0\nSTATUS_PLANNING = 1\nSTATUS_IMPLEMENTATION = 2\nSTATUS_DONE = 3\nSTATUS_STOPPED = 4\n\nPARTICIPATION_NO = 0\nPARTICIPATION_YES = 1\nPARTICIPATION_UNDECIDED = 2\n\n\nclass Plan(UserGeneratedContentModel):\n title = models.CharField(max_length=120, verbose_name=_('Title'))\n organisation = models.ForeignKey(\n settings.A4_ORGANISATIONS_MODEL,\n on_delete=models.CASCADE)\n project = models.ForeignKey(project_models.Project, blank=True, null=True)\n point = map_fields.PointField(\n verbose_name=_('Where can the plan be located on a map?'),\n help_text=_('Click inside marked area on the map to set a marker. '\n 'Drag and drop the marker to change its place. 
'\n 'Alternatively you can use the search field to search '\n 'for an address.'))\n point_label = models.CharField(\n blank=True,\n default='',\n max_length=255,\n verbose_name=_('Label of the location'),\n help_text=_('This could be an address or the name of a landmark.'),\n )\n district = models.ForeignKey(\n MapPreset,\n limit_choices_to=Q(category__name='Berlin') & ~Q(name='Berlin'))\n contact = models.TextField(max_length=255, verbose_name=_('Contact'))\n cost = models.PositiveIntegerField(blank=True, null=True,\n verbose_name=_('Cost'))\n description = RichTextField(verbose_name=_('Description'), blank=True)\n category = models.CharField(max_length=255, verbose_name=_('Type of plan'))\n status = models.SmallIntegerField(choices=(\n (STATUS_TODO, _('Idea')),\n (STATUS_PLANNING, _('Planning')),\n (STATUS_IMPLEMENTATION, _('Implementation')),\n (STATUS_DONE, _('Done')),\n (STATUS_STOPPED, _('Stopped')),\n ))\n participation = models.SmallIntegerField(choices=(\n (PARTICIPATION_YES, _('Yes')),\n (PARTICIPATION_NO, _('No')),\n (PARTICIPATION_UNDECIDED, _('Still undecided')),\n ))\n\n class Meta:\n ordering = ['-created']\n\n @property\n def reference_number(self):\n return '{:d}-{:05d}'.format(self.created.year, self.pk)\n\n def __str__(self):\n return self.title\n\n def get_absolute_url(self):\n return reverse('meinberlin_plans:plan-detail',\n kwargs=dict(pk=self.pk, year=self.created.year))\n\n def save(self, *args, **kwargs):\n self.description = transforms.clean_html_field(self.description)\n super().save(*args, **kwargs)\n", "path": "meinberlin/apps/plans/models.py"}]} | 1,124 | 199 |
gh_patches_debug_29253 | rasdani/github-patches | git_diff | mars-project__mars-2957 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Submit query condition to remote node instead of fetch to local then query
**Is your feature request related to a problem? Please describe.**
Currently, if the Ray fetcher gets objects with conditions, it fetches the objects to the local node and then filters them there. This incurs a large object-transfer cost and a high memory footprint on the local node.
**Describe the solution you'd like**
We should submit the query to the node holding the objects and then fetch only the queried result instead.
--- END ISSUE ---
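Editor's note: the requested behaviour amounts to running the filter as a Ray task next to the data instead of after a full fetch. A minimal sketch of that idea, mirroring the shape the accepted patch takes further below (names are illustrative):

```python
import ray

@ray.remote
def query_object_with_condition(obj, conditions):
    # Executes wherever Ray schedules it, so only the filtered
    # subset -- not the whole object -- travels back to the caller.
    try:
        return obj.iloc[conditions]  # pandas-like objects
    except AttributeError:
        return obj[conditions]       # plain arrays

# Given an existing ObjectRef `ref`, fetch only the queried result:
# result = ray.get(query_object_with_condition.remote(ref, conditions))
```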
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mars/services/task/execution/ray/fetcher.py`
Content:
```
1 # Copyright 1999-2021 Alibaba Group Holding Ltd.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import asyncio
16 from collections import namedtuple
17 from typing import Dict, List
18 from ..api import Fetcher, register_fetcher_cls
19
20
21 _FetchInfo = namedtuple("FetchInfo", ["key", "object_ref", "conditions"])
22
23
24 @register_fetcher_cls
25 class RayFetcher(Fetcher):
26 name = "ray"
27 required_meta_keys = ("object_refs",)
28
29 def __init__(self, **kwargs):
30 self._fetch_info_list = []
31 self._no_conditions = True
32
33 async def append(self, chunk_key: str, chunk_meta: Dict, conditions: List = None):
34 if conditions is not None:
35 self._no_conditions = False
36 self._fetch_info_list.append(
37 _FetchInfo(chunk_key, chunk_meta["object_refs"][0], conditions)
38 )
39
40 async def get(self):
41 objects = await asyncio.gather(
42 *(info.object_ref for info in self._fetch_info_list)
43 )
44 if self._no_conditions:
45 return objects
46 results = []
47 for o, fetch_info in zip(objects, self._fetch_info_list):
48 if fetch_info.conditions is None:
49 results.append(o)
50 else:
51 try:
52 results.append(o.iloc[fetch_info.conditions])
53 except AttributeError:
54 results.append(o[fetch_info.conditions])
55 return results
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mars/services/task/execution/ray/fetcher.py b/mars/services/task/execution/ray/fetcher.py
--- a/mars/services/task/execution/ray/fetcher.py
+++ b/mars/services/task/execution/ray/fetcher.py
@@ -15,9 +15,10 @@
import asyncio
from collections import namedtuple
from typing import Dict, List
+from .....utils import lazy_import
from ..api import Fetcher, register_fetcher_cls
-
+ray = lazy_import("ray")
_FetchInfo = namedtuple("FetchInfo", ["key", "object_ref", "conditions"])
@@ -38,18 +39,27 @@
)
async def get(self):
- objects = await asyncio.gather(
- *(info.object_ref for info in self._fetch_info_list)
- )
if self._no_conditions:
- return objects
- results = []
- for o, fetch_info in zip(objects, self._fetch_info_list):
+ return await asyncio.gather(
+ *(info.object_ref for info in self._fetch_info_list)
+ )
+ refs = [None] * len(self._fetch_info_list)
+ for index, fetch_info in enumerate(self._fetch_info_list):
if fetch_info.conditions is None:
- results.append(o)
+ refs[index] = fetch_info.object_ref
else:
- try:
- results.append(o.iloc[fetch_info.conditions])
- except AttributeError:
- results.append(o[fetch_info.conditions])
- return results
+ refs[index] = query_object_with_condition.remote(
+ fetch_info.object_ref, fetch_info.conditions
+ )
+ return await asyncio.gather(*refs)
+
+
+def query_object_with_condition(o, conditions):
+ try:
+ return o.iloc[conditions]
+ except AttributeError:
+ return o[conditions]
+
+
+if ray:
+ query_object_with_condition = ray.remote(query_object_with_condition)
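Editor's note: two details of the patch are worth calling out. First, `lazy_import("ray")` keeps Mars importable without Ray installed, which is why `ray.remote` is applied conditionally at the bottom. Second, mixing plain object refs and task-returned refs in one `asyncio.gather` works because Ray `ObjectRef`s are awaitable; a tiny sketch of that property:

```python
import asyncio

async def gather_refs(refs):
    # ray.ObjectRef implements __await__, so refs stored directly and refs
    # returned by .remote() calls can be awaited together in a single gather().
    return await asyncio.gather(*refs)
```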
| {"golden_diff": "diff --git a/mars/services/task/execution/ray/fetcher.py b/mars/services/task/execution/ray/fetcher.py\n--- a/mars/services/task/execution/ray/fetcher.py\n+++ b/mars/services/task/execution/ray/fetcher.py\n@@ -15,9 +15,10 @@\n import asyncio\n from collections import namedtuple\n from typing import Dict, List\n+from .....utils import lazy_import\n from ..api import Fetcher, register_fetcher_cls\n \n-\n+ray = lazy_import(\"ray\")\n _FetchInfo = namedtuple(\"FetchInfo\", [\"key\", \"object_ref\", \"conditions\"])\n \n \n@@ -38,18 +39,27 @@\n )\n \n async def get(self):\n- objects = await asyncio.gather(\n- *(info.object_ref for info in self._fetch_info_list)\n- )\n if self._no_conditions:\n- return objects\n- results = []\n- for o, fetch_info in zip(objects, self._fetch_info_list):\n+ return await asyncio.gather(\n+ *(info.object_ref for info in self._fetch_info_list)\n+ )\n+ refs = [None] * len(self._fetch_info_list)\n+ for index, fetch_info in enumerate(self._fetch_info_list):\n if fetch_info.conditions is None:\n- results.append(o)\n+ refs[index] = fetch_info.object_ref\n else:\n- try:\n- results.append(o.iloc[fetch_info.conditions])\n- except AttributeError:\n- results.append(o[fetch_info.conditions])\n- return results\n+ refs[index] = query_object_with_condition.remote(\n+ fetch_info.object_ref, fetch_info.conditions\n+ )\n+ return await asyncio.gather(*refs)\n+\n+\n+def query_object_with_condition(o, conditions):\n+ try:\n+ return o.iloc[conditions]\n+ except AttributeError:\n+ return o[conditions]\n+\n+\n+if ray:\n+ query_object_with_condition = ray.remote(query_object_with_condition)\n", "issue": "Submit query condition to remote node instead of fetch to local then query\n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\n**Is your feature request related to a problem? Please describe.**\r\nCurently if ray fetcher gets objects with condition, it will fetch objects to local, then filter local objects with conditions. This will incur much objects transfer cost and high memory footsprint in local node. \r\n\r\n**Describe the solution you'd like**\r\nWe should submit query to objects' node then fetch queried result instead. 
\r\n\r\n\n", "before_files": [{"content": "# Copyright 1999-2021 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nfrom collections import namedtuple\nfrom typing import Dict, List\nfrom ..api import Fetcher, register_fetcher_cls\n\n\n_FetchInfo = namedtuple(\"FetchInfo\", [\"key\", \"object_ref\", \"conditions\"])\n\n\n@register_fetcher_cls\nclass RayFetcher(Fetcher):\n name = \"ray\"\n required_meta_keys = (\"object_refs\",)\n\n def __init__(self, **kwargs):\n self._fetch_info_list = []\n self._no_conditions = True\n\n async def append(self, chunk_key: str, chunk_meta: Dict, conditions: List = None):\n if conditions is not None:\n self._no_conditions = False\n self._fetch_info_list.append(\n _FetchInfo(chunk_key, chunk_meta[\"object_refs\"][0], conditions)\n )\n\n async def get(self):\n objects = await asyncio.gather(\n *(info.object_ref for info in self._fetch_info_list)\n )\n if self._no_conditions:\n return objects\n results = []\n for o, fetch_info in zip(objects, self._fetch_info_list):\n if fetch_info.conditions is None:\n results.append(o)\n else:\n try:\n results.append(o.iloc[fetch_info.conditions])\n except AttributeError:\n results.append(o[fetch_info.conditions])\n return results\n", "path": "mars/services/task/execution/ray/fetcher.py"}], "after_files": [{"content": "# Copyright 1999-2021 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nfrom collections import namedtuple\nfrom typing import Dict, List\nfrom .....utils import lazy_import\nfrom ..api import Fetcher, register_fetcher_cls\n\nray = lazy_import(\"ray\")\n_FetchInfo = namedtuple(\"FetchInfo\", [\"key\", \"object_ref\", \"conditions\"])\n\n\n@register_fetcher_cls\nclass RayFetcher(Fetcher):\n name = \"ray\"\n required_meta_keys = (\"object_refs\",)\n\n def __init__(self, **kwargs):\n self._fetch_info_list = []\n self._no_conditions = True\n\n async def append(self, chunk_key: str, chunk_meta: Dict, conditions: List = None):\n if conditions is not None:\n self._no_conditions = False\n self._fetch_info_list.append(\n _FetchInfo(chunk_key, chunk_meta[\"object_refs\"][0], conditions)\n )\n\n async def get(self):\n if self._no_conditions:\n return await asyncio.gather(\n *(info.object_ref for info in self._fetch_info_list)\n )\n refs = [None] * len(self._fetch_info_list)\n for index, fetch_info in enumerate(self._fetch_info_list):\n if fetch_info.conditions is None:\n refs[index] = fetch_info.object_ref\n else:\n refs[index] = 
query_object_with_condition.remote(\n fetch_info.object_ref, fetch_info.conditions\n )\n return await asyncio.gather(*refs)\n\n\ndef query_object_with_condition(o, conditions):\n try:\n return o.iloc[conditions]\n except AttributeError:\n return o[conditions]\n\n\nif ray:\n query_object_with_condition = ray.remote(query_object_with_condition)\n", "path": "mars/services/task/execution/ray/fetcher.py"}]} | 918 | 432 |
gh_patches_debug_1383 | rasdani/github-patches | git_diff | vega__altair-150 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
to_dict() not in Chart.__dir__
All in the title.
--- END ISSUE ---
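Editor's note: a minimal sketch of the symptom, assuming an Altair install from this era. The method exists; it just is not advertised, because `__dir__` is overridden to return only trait names:

```python
import altair as alt

chart = alt.Chart()
print(hasattr(chart, 'to_dict'))  # True -- the method is defined
print('to_dict' in dir(chart))    # False before the patch: __dir__ omits it
```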
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `altair/schema/baseobject.py`
Content:
```
1 import pandas as pd
2 import traitlets as T
3
4 from ..utils._py3k_compat import string_types
5
6 _attr_template = "Attribute not found: {0}. Valid keyword arguments for this class: {1}"
7
8
9 class BaseObject(T.HasTraits):
10
11 skip = []
12
13 def __init__(self, **kwargs):
14 all_traits = list(self.traits())
15 for k in kwargs:
16 if k not in all_traits:
17 raise KeyError(_attr_template.format(k, all_traits))
18 super(BaseObject, self).__init__(**kwargs)
19
20 @classmethod
21 def infer_keywords(cls, *args, **kwargs):
22 """Utility to initialize object from args and kwargs
23
24 Arguments are converted to keyword arguments by inferring the keyword
25 from their type.
26 Keyword arguments are converted to the correct Instance class
27 if required.
28 """
29 def get_class(trait):
30 # TODO: what do do with lists?
31 if isinstance(trait, T.Union):
32 for klass in map(get_class, trait.trait_types):
33 if klass:
34 return klass
35 elif isinstance(trait, T.Instance):
36 return trait.klass
37
38 traits = cls.class_traits()
39 classes = {n: get_class(t) for n, t in traits.items()}
40
41 # Turn all keyword arguments to the appropriate class
42 for name, arg in kwargs.items():
43 Trait = classes.get(name, None)
44 if Trait is not None and not isinstance(arg, Trait):
45 try:
46 kwargs[name] = Trait(arg)
47 except (TypeError, T.TraitError):
48 pass # errors will handled by traitlets below
49
50 # find forward/backward mapping among unique classes
51 name_to_trait = {}
52 while classes:
53 name, trait = classes.popitem()
54 if trait is None:
55 continue
56 if trait not in set.union(set(classes.values()),
57 set(name_to_trait.values())):
58 name_to_trait[name] = trait
59 trait_to_name = {t: n for n, t in name_to_trait.items()}
60
61 # Update all arguments
62 for arg in args:
63 name = trait_to_name.get(type(arg), None)
64 if name is None:
65 raise ValueError("{0}: Unable to infer argument name for {1}".format(cls, arg))
66 elif name in kwargs:
67 raise ValueError("{0}: {1} specified both by arg and kwarg".format(cls, name))
68 else:
69 kwargs[name] = arg
70 return kwargs
71
72 def update_traits(self, **kwargs):
73 for key, val in kwargs.items():
74 self.set_trait(key, val)
75 return self
76
77 def update_inferred_traits(self, *args, **kwargs):
78 kwargs = self.infer_keywords(*args, **kwargs)
79 return self.update_traits(**kwargs)
80
81 def update_subtraits(self, attrs, *args, **kwargs):
82 """Update sub-traits without overwriting other traits"""
83 if not (args or kwargs):
84 return self
85 if isinstance(attrs, string_types):
86 attrs = (attrs,)
87 if len(attrs) == 0:
88 self.update_inferred_traits(*args, **kwargs)
89 else:
90 attr = attrs[0]
91 if attr not in self.traits():
92 raise ValueError('{0} has no trait {1}'.format(self, attr))
93 trait = getattr(self, attr)
94 if trait is None:
95 trait = self.traits()[attr].klass()
96 setattr(self, attr, trait.update_subtraits(attrs[1:], *args, **kwargs))
97 return self
98
99 def __contains__(self, key):
100 try:
101 value = getattr(self, key)
102 except AttributeError:
103 return False
104
105 # comparison to None will break, so check DataFrame specifically
106 if isinstance(value, pd.DataFrame):
107 return True
108 elif value is not None:
109 if isinstance(value, (int, float, bool)):
110 return True
111 else:
112 return bool(value)
113 else:
114 return False
115
116 def __dir__(self):
117 """Customize tab completed attributes."""
118 return list(self.traits())
119
120 @classmethod
121 def from_dict(cls, dct):
122 """Instantiate the object from a valid JSON dictionary"""
123 from ..utils.visitors import FromDict
124 return FromDict().clsvisit(cls, dct)
125
126 def to_dict(self, data=True):
127 """Emit the JSON representation for this object as as dict."""
128 from ..utils.visitors import ToDict
129 self._finalize()
130 return ToDict().visit(self, data)
131
132 def _finalize(self, **kwargs):
133 """Finalize the object, and all contained objects, for export."""
134 def finalize_obj(obj):
135 if isinstance(obj, BaseObject):
136 obj._finalize(**kwargs)
137 elif isinstance(obj, list):
138 for item in obj:
139 finalize_obj(item)
140
141 for name in self.traits():
142 value = getattr(self, name)
143 finalize_obj(value)
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/altair/schema/baseobject.py b/altair/schema/baseobject.py
--- a/altair/schema/baseobject.py
+++ b/altair/schema/baseobject.py
@@ -115,7 +115,7 @@
def __dir__(self):
"""Customize tab completed attributes."""
- return list(self.traits())
+ return list(self.traits())+['to_dict', 'from_dict']
@classmethod
def from_dict(cls, dct):
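Editor's note: the fix keeps the custom `__dir__` (so tab completion stays trait-focused) and simply appends the serialization API. The same pattern in isolation, with hypothetical trait names:

```python
class Completing:
    def traits(self):
        return ['title', 'data']  # stand-in for traitlets' trait names

    def __dir__(self):
        # Python dedups and sorts whatever __dir__ returns, so dir() and
        # tab completion now surface the serialization helpers as well.
        return list(self.traits()) + ['to_dict', 'from_dict']

print(dir(Completing()))  # ['data', 'from_dict', 'title', 'to_dict']
```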
| {"golden_diff": "diff --git a/altair/schema/baseobject.py b/altair/schema/baseobject.py\n--- a/altair/schema/baseobject.py\n+++ b/altair/schema/baseobject.py\n@@ -115,7 +115,7 @@\n \n def __dir__(self):\n \"\"\"Customize tab completed attributes.\"\"\"\n- return list(self.traits())\n+ return list(self.traits())+['to_dict', 'from_dict']\n \n @classmethod\n def from_dict(cls, dct):\n", "issue": "to_dict() not in Chart.__dir__\nAll in title.\n\n", "before_files": [{"content": "import pandas as pd\nimport traitlets as T\n\nfrom ..utils._py3k_compat import string_types\n\n_attr_template = \"Attribute not found: {0}. Valid keyword arguments for this class: {1}\"\n\n\nclass BaseObject(T.HasTraits):\n\n skip = []\n\n def __init__(self, **kwargs):\n all_traits = list(self.traits())\n for k in kwargs:\n if k not in all_traits:\n raise KeyError(_attr_template.format(k, all_traits))\n super(BaseObject, self).__init__(**kwargs)\n\n @classmethod\n def infer_keywords(cls, *args, **kwargs):\n \"\"\"Utility to initialize object from args and kwargs\n\n Arguments are converted to keyword arguments by inferring the keyword\n from their type.\n Keyword arguments are converted to the correct Instance class\n if required.\n \"\"\"\n def get_class(trait):\n # TODO: what do do with lists?\n if isinstance(trait, T.Union):\n for klass in map(get_class, trait.trait_types):\n if klass:\n return klass\n elif isinstance(trait, T.Instance):\n return trait.klass\n\n traits = cls.class_traits()\n classes = {n: get_class(t) for n, t in traits.items()}\n\n # Turn all keyword arguments to the appropriate class\n for name, arg in kwargs.items():\n Trait = classes.get(name, None)\n if Trait is not None and not isinstance(arg, Trait):\n try:\n kwargs[name] = Trait(arg)\n except (TypeError, T.TraitError):\n pass # errors will handled by traitlets below\n\n # find forward/backward mapping among unique classes\n name_to_trait = {}\n while classes:\n name, trait = classes.popitem()\n if trait is None:\n continue\n if trait not in set.union(set(classes.values()),\n set(name_to_trait.values())):\n name_to_trait[name] = trait\n trait_to_name = {t: n for n, t in name_to_trait.items()}\n\n # Update all arguments\n for arg in args:\n name = trait_to_name.get(type(arg), None)\n if name is None:\n raise ValueError(\"{0}: Unable to infer argument name for {1}\".format(cls, arg))\n elif name in kwargs:\n raise ValueError(\"{0}: {1} specified both by arg and kwarg\".format(cls, name))\n else:\n kwargs[name] = arg\n return kwargs\n\n def update_traits(self, **kwargs):\n for key, val in kwargs.items():\n self.set_trait(key, val)\n return self\n\n def update_inferred_traits(self, *args, **kwargs):\n kwargs = self.infer_keywords(*args, **kwargs)\n return self.update_traits(**kwargs)\n\n def update_subtraits(self, attrs, *args, **kwargs):\n \"\"\"Update sub-traits without overwriting other traits\"\"\"\n if not (args or kwargs):\n return self\n if isinstance(attrs, string_types):\n attrs = (attrs,)\n if len(attrs) == 0:\n self.update_inferred_traits(*args, **kwargs)\n else:\n attr = attrs[0]\n if attr not in self.traits():\n raise ValueError('{0} has no trait {1}'.format(self, attr))\n trait = getattr(self, attr)\n if trait is None:\n trait = self.traits()[attr].klass()\n setattr(self, attr, trait.update_subtraits(attrs[1:], *args, **kwargs))\n return self\n\n def __contains__(self, key):\n try:\n value = getattr(self, key)\n except AttributeError:\n return False\n\n # comparison to None will break, so check DataFrame specifically\n if 
isinstance(value, pd.DataFrame):\n return True\n elif value is not None:\n if isinstance(value, (int, float, bool)):\n return True\n else:\n return bool(value)\n else:\n return False\n\n def __dir__(self):\n \"\"\"Customize tab completed attributes.\"\"\"\n return list(self.traits())\n\n @classmethod\n def from_dict(cls, dct):\n \"\"\"Instantiate the object from a valid JSON dictionary\"\"\"\n from ..utils.visitors import FromDict\n return FromDict().clsvisit(cls, dct)\n\n def to_dict(self, data=True):\n \"\"\"Emit the JSON representation for this object as as dict.\"\"\"\n from ..utils.visitors import ToDict\n self._finalize()\n return ToDict().visit(self, data)\n\n def _finalize(self, **kwargs):\n \"\"\"Finalize the object, and all contained objects, for export.\"\"\"\n def finalize_obj(obj):\n if isinstance(obj, BaseObject):\n obj._finalize(**kwargs)\n elif isinstance(obj, list):\n for item in obj:\n finalize_obj(item)\n\n for name in self.traits():\n value = getattr(self, name)\n finalize_obj(value)\n", "path": "altair/schema/baseobject.py"}], "after_files": [{"content": "import pandas as pd\nimport traitlets as T\n\nfrom ..utils._py3k_compat import string_types\n\n_attr_template = \"Attribute not found: {0}. Valid keyword arguments for this class: {1}\"\n\n\nclass BaseObject(T.HasTraits):\n\n skip = []\n\n def __init__(self, **kwargs):\n all_traits = list(self.traits())\n for k in kwargs:\n if k not in all_traits:\n raise KeyError(_attr_template.format(k, all_traits))\n super(BaseObject, self).__init__(**kwargs)\n\n @classmethod\n def infer_keywords(cls, *args, **kwargs):\n \"\"\"Utility to initialize object from args and kwargs\n\n Arguments are converted to keyword arguments by inferring the keyword\n from their type.\n Keyword arguments are converted to the correct Instance class\n if required.\n \"\"\"\n def get_class(trait):\n # TODO: what do do with lists?\n if isinstance(trait, T.Union):\n for klass in map(get_class, trait.trait_types):\n if klass:\n return klass\n elif isinstance(trait, T.Instance):\n return trait.klass\n\n traits = cls.class_traits()\n classes = {n: get_class(t) for n, t in traits.items()}\n\n # Turn all keyword arguments to the appropriate class\n for name, arg in kwargs.items():\n Trait = classes.get(name, None)\n if Trait is not None and not isinstance(arg, Trait):\n try:\n kwargs[name] = Trait(arg)\n except (TypeError, T.TraitError):\n pass # errors will handled by traitlets below\n\n # find forward/backward mapping among unique classes\n name_to_trait = {}\n while classes:\n name, trait = classes.popitem()\n if trait is None:\n continue\n if trait not in set.union(set(classes.values()),\n set(name_to_trait.values())):\n name_to_trait[name] = trait\n trait_to_name = {t: n for n, t in name_to_trait.items()}\n\n # Update all arguments\n for arg in args:\n name = trait_to_name.get(type(arg), None)\n if name is None:\n raise ValueError(\"{0}: Unable to infer argument name for {1}\".format(cls, arg))\n elif name in kwargs:\n raise ValueError(\"{0}: {1} specified both by arg and kwarg\".format(cls, name))\n else:\n kwargs[name] = arg\n return kwargs\n\n def update_traits(self, **kwargs):\n for key, val in kwargs.items():\n self.set_trait(key, val)\n return self\n\n def update_inferred_traits(self, *args, **kwargs):\n kwargs = self.infer_keywords(*args, **kwargs)\n return self.update_traits(**kwargs)\n\n def update_subtraits(self, attrs, *args, **kwargs):\n \"\"\"Update sub-traits without overwriting other traits\"\"\"\n if not (args or kwargs):\n return self\n 
if isinstance(attrs, string_types):\n attrs = (attrs,)\n if len(attrs) == 0:\n self.update_inferred_traits(*args, **kwargs)\n else:\n attr = attrs[0]\n if attr not in self.traits():\n raise ValueError('{0} has no trait {1}'.format(self, attr))\n trait = getattr(self, attr)\n if trait is None:\n trait = self.traits()[attr].klass()\n setattr(self, attr, trait.update_subtraits(attrs[1:], *args, **kwargs))\n return self\n\n def __contains__(self, key):\n try:\n value = getattr(self, key)\n except AttributeError:\n return False\n\n # comparison to None will break, so check DataFrame specifically\n if isinstance(value, pd.DataFrame):\n return True\n elif value is not None:\n if isinstance(value, (int, float, bool)):\n return True\n else:\n return bool(value)\n else:\n return False\n\n def __dir__(self):\n \"\"\"Customize tab completed attributes.\"\"\"\n return list(self.traits())+['to_dict', 'from_dict']\n\n @classmethod\n def from_dict(cls, dct):\n \"\"\"Instantiate the object from a valid JSON dictionary\"\"\"\n from ..utils.visitors import FromDict\n return FromDict().clsvisit(cls, dct)\n\n def to_dict(self, data=True):\n \"\"\"Emit the JSON representation for this object as as dict.\"\"\"\n from ..utils.visitors import ToDict\n self._finalize()\n return ToDict().visit(self, data)\n\n def _finalize(self, **kwargs):\n \"\"\"Finalize the object, and all contained objects, for export.\"\"\"\n def finalize_obj(obj):\n if isinstance(obj, BaseObject):\n obj._finalize(**kwargs)\n elif isinstance(obj, list):\n for item in obj:\n finalize_obj(item)\n\n for name in self.traits():\n value = getattr(self, name)\n finalize_obj(value)\n", "path": "altair/schema/baseobject.py"}]} | 1,650 | 108 |
gh_patches_debug_20770 | rasdani/github-patches | git_diff | hartwork__jawanndenn-45 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error 500 when pasting JSON with single surrogate characters
Example:
```json
{
"equal_width": false,
"title": "Which fruit do *you* like?",
"options": [
"Apple",
"Banana",
"Orange",
"Papaya", "\ud800"
]
}
```
--- END ISSUE ---
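Editor's note: the root cause is an asymmetry in the standard library: `json.loads` happily decodes a lone surrogate escape, producing a Python string that cannot be encoded to UTF-8 later. A sketch of the mechanism (the exact crash site inside jawanndenn is an assumption, but the decode/encode mismatch is what turns valid-looking input into a 500):

```python
import json

s = json.loads('"\\ud800"')  # stdlib json accepts the lone surrogate escape
try:
    s.encode('utf-8')        # ...but the resulting string cannot be encoded
except UnicodeEncodeError as exc:
    print(exc)               # the eventual server-side failure -> HTTP 500
```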
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jawanndenn/views.py`
Content:
```
1 # Copyright (C) 2019 Sebastian Pipping <[email protected]>
2 # Licensed under GNU Affero GPL v3 or later
3
4 import json # lgtm [py/import-and-import-from]
5 from functools import wraps
6 from json import JSONDecodeError
7
8 from django.conf import settings
9 from django.db import transaction
10 from django.http import (Http404, HttpResponseBadRequest, HttpResponseNotFound,
11 JsonResponse)
12 from django.shortcuts import redirect
13 from django.template.response import TemplateResponse
14 from django.utils.module_loading import import_string
15 from django.views.decorators.http import require_GET, require_POST
16 from django.views.defaults import bad_request
17 from django.views.static import serve
18 from jawanndenn.markup import safe_html
19 from jawanndenn.models import Ballot, Poll, Vote
20 from jawanndenn.serializers import PollConfigSerializer
21 from rest_framework.exceptions import ValidationError
22
23 _staticfile_finders = [import_string(cls_string)()
24 for cls_string in settings.STATICFILES_FINDERS]
25
26
27 def _except_poll_does_not_exist(wrappee):
28 """Decorator that turns Poll.DoesNotExist into 404 Not Found"""
29 @wraps(wrappee)
30 def wrapper(*args, **kwargs):
31 try:
32 return wrappee(*args, **kwargs)
33 except Poll.DoesNotExist:
34 return HttpResponseNotFound('No such poll')
35
36 return wrapper
37
38
39 def _except_validation_error(wrappee):
40 """Decorator that turns ValidationError into 400 Bad Request"""
41 @wraps(wrappee)
42 def wrapper(request, *args, **kwargs):
43 try:
44 return wrappee(request, *args, **kwargs)
45 except ValidationError as exception:
46 return bad_request(request, exception)
47
48 return wrapper
49
50
51 @require_GET
52 def index_get_view(request):
53 return TemplateResponse(request,
54 template='html/setup.xhtml',
55 content_type='application/xhtml+xml')
56
57
58 @require_POST
59 @_except_validation_error
60 def poll_post_view(request):
61 config_json = request.POST.get('config', '{}')
62 try:
63 config = json.loads(config_json)
64 except JSONDecodeError:
65 raise ValidationError('Poll configuration is not well-formed JSON.')
66
67 serializer = PollConfigSerializer(data=config)
68 serializer.is_valid(raise_exception=True)
69
70 with transaction.atomic():
71 if Poll.objects.count() >= settings.JAWANNDENN_MAX_POLLS:
72 return HttpResponseBadRequest(
73 f'Maximum number of {settings.JAWANNDENN_MAX_POLLS} polls '
74 'reached, please contact the administrator.')
75
76 poll = serializer.save()
77
78 return redirect(poll)
79
80
81 @require_GET
82 @_except_poll_does_not_exist
83 def poll_data_get_view(request, poll_id):
84 with transaction.atomic():
85 poll = Poll.objects.get(slug=poll_id)
86 poll_config = {
87 'equal_width': poll.equal_width,
88 'title': poll.title,
89 'options': list(poll.options.order_by('position')
90 .values_list('name', flat=True)),
91 }
92 votes = [
93 [ballot.voter_name, [vote.yes for vote
94 in ballot.votes.order_by('option__position')]]
95 for ballot
96 in poll.ballots.order_by('created', 'id')
97 ]
98
99 data = {
100 'config': poll_config,
101 'votes': votes,
102 }
103
104 return JsonResponse(data)
105
106
107 @require_GET
108 @_except_poll_does_not_exist
109 def poll_get_view(request, poll_id):
110 Poll.objects.get(slug=poll_id)
111
112 return TemplateResponse(request,
113 template='html/poll.xhtml',
114 content_type='application/xhtml+xml')
115
116
117 @require_POST
118 @_except_poll_does_not_exist
119 def vote_post_view(request, poll_id):
120 with transaction.atomic():
121 poll = Poll.objects.get(slug=poll_id)
122
123 if poll.ballots.count() >= settings.JAWANNDENN_MAX_VOTES_PER_POLL:
124 return HttpResponseBadRequest(
125 f'Maximum number of {settings.JAWANNDENN_MAX_VOTES_PER_POLL} '
126 'votes reached for this poll'
127 ', please contact the administrator.')
128
129 voter_name = safe_html(request.POST.get('voterName'))
130 votes = [
131 request.POST.get(f'option{i}', 'off') == 'on'
132 for i
133 in range(poll.options.count())
134 ]
135
136 ballot = Ballot.objects.create(poll=poll, voter_name=voter_name)
137 for option, vote in zip(poll.options.order_by('position'), votes):
138 Vote.objects.create(ballot=ballot, option=option, yes=vote)
139
140 return redirect(poll)
141
142
143 @require_GET
144 def serve_using_finders(request, path, show_indexes=False):
145 """
146 Wrapper around django.views.static.serve that uses
147 settings.STATICFILES_FINDERS rather than a single document_root
148 """
149 for finder in _staticfile_finders:
150 fullpath = finder.find(path)
151 if fullpath:
152 document_root = fullpath[:-len(path)] if path else fullpath
153 return serve(request, path, document_root=document_root,
154 show_indexes=show_indexes)
155 else:
156 raise Http404
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/jawanndenn/views.py b/jawanndenn/views.py
--- a/jawanndenn/views.py
+++ b/jawanndenn/views.py
@@ -1,10 +1,9 @@
# Copyright (C) 2019 Sebastian Pipping <[email protected]>
# Licensed under GNU Affero GPL v3 or later
-import json # lgtm [py/import-and-import-from]
from functools import wraps
-from json import JSONDecodeError
+import rapidjson as json # lgtm [py/import-and-import-from]
from django.conf import settings
from django.db import transaction
from django.http import (Http404, HttpResponseBadRequest, HttpResponseNotFound,
@@ -18,6 +17,7 @@
from jawanndenn.markup import safe_html
from jawanndenn.models import Ballot, Poll, Vote
from jawanndenn.serializers import PollConfigSerializer
+from rapidjson import JSONDecodeError
from rest_framework.exceptions import ValidationError
_staticfile_finders = [import_string(cls_string)()
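Editor's note: with `rapidjson` standing in for the standard library, malformed surrogates are rejected at parse time, so the view's existing `except JSONDecodeError` branch converts them into a 400 instead of a 500. A sketch, assuming the `python-rapidjson` package:

```python
import rapidjson

try:
    rapidjson.loads('"\\ud800"')  # rejected up front, unlike stdlib json
except rapidjson.JSONDecodeError as exc:
    print(exc)  # caught by poll_post_view -> ValidationError -> 400 Bad Request
```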
| {"golden_diff": "diff --git a/jawanndenn/views.py b/jawanndenn/views.py\n--- a/jawanndenn/views.py\n+++ b/jawanndenn/views.py\n@@ -1,10 +1,9 @@\n # Copyright (C) 2019 Sebastian Pipping <[email protected]>\n # Licensed under GNU Affero GPL v3 or later\n \n-import json # lgtm [py/import-and-import-from]\n from functools import wraps\n-from json import JSONDecodeError\n \n+import rapidjson as json # lgtm [py/import-and-import-from]\n from django.conf import settings\n from django.db import transaction\n from django.http import (Http404, HttpResponseBadRequest, HttpResponseNotFound,\n@@ -18,6 +17,7 @@\n from jawanndenn.markup import safe_html\n from jawanndenn.models import Ballot, Poll, Vote\n from jawanndenn.serializers import PollConfigSerializer\n+from rapidjson import JSONDecodeError\n from rest_framework.exceptions import ValidationError\n \n _staticfile_finders = [import_string(cls_string)()\n", "issue": "Error 500 when pasting JSON with single surrogate characters\nExample:\r\n```json\r\n{\r\n \"equal_width\": false,\r\n \"title\": \"Which fruit do *you* like?\",\r\n \"options\": [\r\n \"Apple\",\r\n \"Banana\",\r\n \"Orange\",\r\n \"Papaya\", \"\\ud800\"\r\n ]\r\n}\r\n```\nError 500 when pasting JSON with single surrogate characters\nExample:\r\n```json\r\n{\r\n \"equal_width\": false,\r\n \"title\": \"Which fruit do *you* like?\",\r\n \"options\": [\r\n \"Apple\",\r\n \"Banana\",\r\n \"Orange\",\r\n \"Papaya\", \"\\ud800\"\r\n ]\r\n}\r\n```\n", "before_files": [{"content": "# Copyright (C) 2019 Sebastian Pipping <[email protected]>\n# Licensed under GNU Affero GPL v3 or later\n\nimport json # lgtm [py/import-and-import-from]\nfrom functools import wraps\nfrom json import JSONDecodeError\n\nfrom django.conf import settings\nfrom django.db import transaction\nfrom django.http import (Http404, HttpResponseBadRequest, HttpResponseNotFound,\n JsonResponse)\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.module_loading import import_string\nfrom django.views.decorators.http import require_GET, require_POST\nfrom django.views.defaults import bad_request\nfrom django.views.static import serve\nfrom jawanndenn.markup import safe_html\nfrom jawanndenn.models import Ballot, Poll, Vote\nfrom jawanndenn.serializers import PollConfigSerializer\nfrom rest_framework.exceptions import ValidationError\n\n_staticfile_finders = [import_string(cls_string)()\n for cls_string in settings.STATICFILES_FINDERS]\n\n\ndef _except_poll_does_not_exist(wrappee):\n \"\"\"Decorator that turns Poll.DoesNotExist into 404 Not Found\"\"\"\n @wraps(wrappee)\n def wrapper(*args, **kwargs):\n try:\n return wrappee(*args, **kwargs)\n except Poll.DoesNotExist:\n return HttpResponseNotFound('No such poll')\n\n return wrapper\n\n\ndef _except_validation_error(wrappee):\n \"\"\"Decorator that turns ValidationError into 400 Bad Request\"\"\"\n @wraps(wrappee)\n def wrapper(request, *args, **kwargs):\n try:\n return wrappee(request, *args, **kwargs)\n except ValidationError as exception:\n return bad_request(request, exception)\n\n return wrapper\n\n\n@require_GET\ndef index_get_view(request):\n return TemplateResponse(request,\n template='html/setup.xhtml',\n content_type='application/xhtml+xml')\n\n\n@require_POST\n@_except_validation_error\ndef poll_post_view(request):\n config_json = request.POST.get('config', '{}')\n try:\n config = json.loads(config_json)\n except JSONDecodeError:\n raise ValidationError('Poll configuration is not well-formed JSON.')\n\n 
serializer = PollConfigSerializer(data=config)\n serializer.is_valid(raise_exception=True)\n\n with transaction.atomic():\n if Poll.objects.count() >= settings.JAWANNDENN_MAX_POLLS:\n return HttpResponseBadRequest(\n f'Maximum number of {settings.JAWANNDENN_MAX_POLLS} polls '\n 'reached, please contact the administrator.')\n\n poll = serializer.save()\n\n return redirect(poll)\n\n\n@require_GET\n@_except_poll_does_not_exist\ndef poll_data_get_view(request, poll_id):\n with transaction.atomic():\n poll = Poll.objects.get(slug=poll_id)\n poll_config = {\n 'equal_width': poll.equal_width,\n 'title': poll.title,\n 'options': list(poll.options.order_by('position')\n .values_list('name', flat=True)),\n }\n votes = [\n [ballot.voter_name, [vote.yes for vote\n in ballot.votes.order_by('option__position')]]\n for ballot\n in poll.ballots.order_by('created', 'id')\n ]\n\n data = {\n 'config': poll_config,\n 'votes': votes,\n }\n\n return JsonResponse(data)\n\n\n@require_GET\n@_except_poll_does_not_exist\ndef poll_get_view(request, poll_id):\n Poll.objects.get(slug=poll_id)\n\n return TemplateResponse(request,\n template='html/poll.xhtml',\n content_type='application/xhtml+xml')\n\n\n@require_POST\n@_except_poll_does_not_exist\ndef vote_post_view(request, poll_id):\n with transaction.atomic():\n poll = Poll.objects.get(slug=poll_id)\n\n if poll.ballots.count() >= settings.JAWANNDENN_MAX_VOTES_PER_POLL:\n return HttpResponseBadRequest(\n f'Maximum number of {settings.JAWANNDENN_MAX_VOTES_PER_POLL} '\n 'votes reached for this poll'\n ', please contact the administrator.')\n\n voter_name = safe_html(request.POST.get('voterName'))\n votes = [\n request.POST.get(f'option{i}', 'off') == 'on'\n for i\n in range(poll.options.count())\n ]\n\n ballot = Ballot.objects.create(poll=poll, voter_name=voter_name)\n for option, vote in zip(poll.options.order_by('position'), votes):\n Vote.objects.create(ballot=ballot, option=option, yes=vote)\n\n return redirect(poll)\n\n\n@require_GET\ndef serve_using_finders(request, path, show_indexes=False):\n \"\"\"\n Wrapper around django.views.static.serve that uses\n settings.STATICFILES_FINDERS rather than a single document_root\n \"\"\"\n for finder in _staticfile_finders:\n fullpath = finder.find(path)\n if fullpath:\n document_root = fullpath[:-len(path)] if path else fullpath\n return serve(request, path, document_root=document_root,\n show_indexes=show_indexes)\n else:\n raise Http404\n", "path": "jawanndenn/views.py"}], "after_files": [{"content": "# Copyright (C) 2019 Sebastian Pipping <[email protected]>\n# Licensed under GNU Affero GPL v3 or later\n\nfrom functools import wraps\n\nimport rapidjson as json # lgtm [py/import-and-import-from]\nfrom django.conf import settings\nfrom django.db import transaction\nfrom django.http import (Http404, HttpResponseBadRequest, HttpResponseNotFound,\n JsonResponse)\nfrom django.shortcuts import redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.module_loading import import_string\nfrom django.views.decorators.http import require_GET, require_POST\nfrom django.views.defaults import bad_request\nfrom django.views.static import serve\nfrom jawanndenn.markup import safe_html\nfrom jawanndenn.models import Ballot, Poll, Vote\nfrom jawanndenn.serializers import PollConfigSerializer\nfrom rapidjson import JSONDecodeError\nfrom rest_framework.exceptions import ValidationError\n\n_staticfile_finders = [import_string(cls_string)()\n for cls_string in settings.STATICFILES_FINDERS]\n\n\ndef 
_except_poll_does_not_exist(wrappee):\n \"\"\"Decorator that turns Poll.DoesNotExist into 404 Not Found\"\"\"\n @wraps(wrappee)\n def wrapper(*args, **kwargs):\n try:\n return wrappee(*args, **kwargs)\n except Poll.DoesNotExist:\n return HttpResponseNotFound('No such poll')\n\n return wrapper\n\n\ndef _except_validation_error(wrappee):\n \"\"\"Decorator that turns ValidationError into 400 Bad Request\"\"\"\n @wraps(wrappee)\n def wrapper(request, *args, **kwargs):\n try:\n return wrappee(request, *args, **kwargs)\n except ValidationError as exception:\n return bad_request(request, exception)\n\n return wrapper\n\n\n@require_GET\ndef index_get_view(request):\n return TemplateResponse(request,\n template='html/setup.xhtml',\n content_type='application/xhtml+xml')\n\n\n@require_POST\n@_except_validation_error\ndef poll_post_view(request):\n config_json = request.POST.get('config', '{}')\n try:\n config = json.loads(config_json)\n except JSONDecodeError:\n raise ValidationError('Poll configuration is not well-formed JSON.')\n\n serializer = PollConfigSerializer(data=config)\n serializer.is_valid(raise_exception=True)\n\n with transaction.atomic():\n if Poll.objects.count() >= settings.JAWANNDENN_MAX_POLLS:\n return HttpResponseBadRequest(\n f'Maximum number of {settings.JAWANNDENN_MAX_POLLS} polls '\n 'reached, please contact the administrator.')\n\n poll = serializer.save()\n\n return redirect(poll)\n\n\n@require_GET\n@_except_poll_does_not_exist\ndef poll_data_get_view(request, poll_id):\n with transaction.atomic():\n poll = Poll.objects.get(slug=poll_id)\n poll_config = {\n 'equal_width': poll.equal_width,\n 'title': poll.title,\n 'options': list(poll.options.order_by('position')\n .values_list('name', flat=True)),\n }\n votes = [\n [ballot.voter_name, [vote.yes for vote\n in ballot.votes.order_by('option__position')]]\n for ballot\n in poll.ballots.order_by('created', 'id')\n ]\n\n data = {\n 'config': poll_config,\n 'votes': votes,\n }\n\n return JsonResponse(data)\n\n\n@require_GET\n@_except_poll_does_not_exist\ndef poll_get_view(request, poll_id):\n Poll.objects.get(slug=poll_id)\n\n return TemplateResponse(request,\n template='html/poll.xhtml',\n content_type='application/xhtml+xml')\n\n\n@require_POST\n@_except_poll_does_not_exist\ndef vote_post_view(request, poll_id):\n with transaction.atomic():\n poll = Poll.objects.get(slug=poll_id)\n\n if poll.ballots.count() >= settings.JAWANNDENN_MAX_VOTES_PER_POLL:\n return HttpResponseBadRequest(\n f'Maximum number of {settings.JAWANNDENN_MAX_VOTES_PER_POLL} '\n 'votes reached for this poll'\n ', please contact the administrator.')\n\n voter_name = safe_html(request.POST.get('voterName'))\n votes = [\n request.POST.get(f'option{i}', 'off') == 'on'\n for i\n in range(poll.options.count())\n ]\n\n ballot = Ballot.objects.create(poll=poll, voter_name=voter_name)\n for option, vote in zip(poll.options.order_by('position'), votes):\n Vote.objects.create(ballot=ballot, option=option, yes=vote)\n\n return redirect(poll)\n\n\n@require_GET\ndef serve_using_finders(request, path, show_indexes=False):\n \"\"\"\n Wrapper around django.views.static.serve that uses\n settings.STATICFILES_FINDERS rather than a single document_root\n \"\"\"\n for finder in _staticfile_finders:\n fullpath = finder.find(path)\n if fullpath:\n document_root = fullpath[:-len(path)] if path else fullpath\n return serve(request, path, document_root=document_root,\n show_indexes=show_indexes)\n else:\n raise Http404\n", "path": "jawanndenn/views.py"}]} | 1,865 | 229 |
gh_patches_debug_435 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-1864 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing documentation about unit testing inline scripts
At the company I work at, we are developing some inline scripts for internal use.

We are struggling to write tests for our scripts, because the only code we found that tests scripts is at [/mitmproxy/mitmproxy/blob/v0.17/test/mitmproxy/test_examples.py](/mitmproxy/mitmproxy/blob/v0.17/test/mitmproxy/test_examples.py).

The examples are quite good, but real documentation is missing for those who want to unit test.

Is there another place where I can find better (maybe more advanced) test examples?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/events.py`
Content:
```
1 from mitmproxy import controller
2 from mitmproxy import http
3 from mitmproxy import tcp
4 from mitmproxy import websocket
5
6 Events = frozenset([
7 "clientconnect",
8 "clientdisconnect",
9 "serverconnect",
10 "serverdisconnect",
11
12 "tcp_start",
13 "tcp_message",
14 "tcp_error",
15 "tcp_end",
16
17 "http_connect",
18 "request",
19 "requestheaders",
20 "response",
21 "responseheaders",
22 "error",
23
24 "intercept",
25 "resume",
26
27 "websocket_handshake",
28 "websocket_start",
29 "websocket_message",
30 "websocket_error",
31 "websocket_end",
32
33 "next_layer",
34
35 "configure",
36 "done",
37 "log",
38 "start",
39 "tick",
40 ])
41
42
43 def event_sequence(f):
44 if isinstance(f, http.HTTPFlow):
45 if f.request:
46 yield "requestheaders", f
47 yield "request", f
48 if f.response:
49 yield "responseheaders", f
50 yield "response", f
51 if f.error:
52 yield "error", f
53 elif isinstance(f, websocket.WebSocketFlow):
54 messages = f.messages
55 f.messages = []
56 f.reply = controller.DummyReply()
57 yield "websocket_start", f
58 while messages:
59 f.messages.append(messages.pop(0))
60 yield "websocket_message", f
61 if f.error:
62 yield "websocket_error", f
63 yield "websocket_end", f
64 elif isinstance(f, tcp.TCPFlow):
65 messages = f.messages
66 f.messages = []
67 f.reply = controller.DummyReply()
68 yield "tcp_start", f
69 while messages:
70 f.messages.append(messages.pop(0))
71 yield "tcp_message", f
72 if f.error:
73 yield "tcp_error", f
74 yield "tcp_end", f
75 else:
76 raise NotImplementedError
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
Golden diff:

```diff
diff --git a/mitmproxy/events.py b/mitmproxy/events.py
--- a/mitmproxy/events.py
+++ b/mitmproxy/events.py
@@ -21,9 +21,6 @@
     "responseheaders",
     "error",
 
-    "intercept",
-    "resume",
-
     "websocket_handshake",
     "websocket_start",
     "websocket_message",
```
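On the unit-testing question itself: `event_sequence` is the natural entry point for replaying a flow's hooks against a script or addon. A minimal sketch, assuming mitmproxy's bundled test helper `mitmproxy.test.tflow` is available in your version (the addon and test names here are hypothetical):

```python
from mitmproxy import events
from mitmproxy.test import tflow  # assumed helper; ships with recent mitmproxy


class CountingAddon:
    """Hypothetical addon under test: counts the responses it sees."""

    def __init__(self):
        self.responses = 0

    def response(self, flow):
        self.responses += 1


def test_counting_addon():
    addon = CountingAddon()
    flow = tflow.tflow(resp=True)  # canned HTTP flow with request and response
    for event, f in events.event_sequence(flow):
        handler = getattr(addon, event, None)  # dispatch only implemented hooks
        if handler:
            handler(f)
    assert addon.responses == 1
```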
gh_patches_debug_34341 | rasdani/github-patches | git_diff | PrefectHQ__prefect-1729 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
When a flow has no storage attached, the agent should mark the run as failed
It appears that if the Agent picks up a run with no storage, it will mark it as submitted. This run will then exist in limbo forever, as it can never be run. The agent should put it in a failed state if it can recognize it'll never run.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/agent/local/agent.py`
Content:
```
1 from sys import platform
2 from typing import Iterable
3
4 import docker
5
6 from prefect import config, context
7 from prefect.agent import Agent
8 from prefect.environments.storage import Docker
9 from prefect.serialization.storage import StorageSchema
10 from prefect.utilities.graphql import GraphQLResult
11
12
13 class LocalAgent(Agent):
14 """
15 Agent which deploys flow runs locally as Docker containers. Information on using the
16 Local Agent can be found at https://docs.prefect.io/cloud/agent/local.html
17
18 Args:
19 - name (str, optional): An optional name to give this agent. Can also be set through
20 the environment variable `PREFECT__CLOUD__AGENT__NAME`. Defaults to "agent"
21 - labels (List[str], optional): a list of labels, which are arbitrary string identifiers used by Prefect
22 Agents when polling for work
23 - base_url (str, optional): URL for a Docker daemon server. Defaults to
24 `unix:///var/run/docker.sock` however other hosts such as
25 `tcp://0.0.0.0:2375` can be provided
26 - no_pull (bool, optional): Flag on whether or not to pull flow images.
27 Defaults to `False` if not provided here or in context.
28 """
29
30 def __init__(
31 self,
32 name: str = None,
33 labels: Iterable[str] = None,
34 base_url: str = None,
35 no_pull: bool = None,
36 ) -> None:
37 super().__init__(name=name, labels=labels)
38
39 if platform == "win32":
40 default_url = "npipe:////./pipe/docker_engine"
41 else:
42 default_url = "unix://var/run/docker.sock"
43 self.logger.debug(
44 "Platform {} and default docker daemon {}".format(platform, default_url)
45 )
46
47 # Determine Daemon URL
48 self.base_url = base_url or context.get("base_url", default_url)
49 self.logger.debug("Base docker daemon url {}".format(self.base_url))
50
51 # Determine pull specification
52 self.no_pull = no_pull or context.get("no_pull", False)
53 self.logger.debug("no_pull set to {}".format(self.no_pull))
54
55 self.docker_client = docker.APIClient(base_url=self.base_url, version="auto")
56
57 # Ping Docker daemon for connection issues
58 try:
59 self.logger.debug("Pinging docker daemon")
60 self.docker_client.ping()
61 except Exception as exc:
62 self.logger.exception(
63 "Issue connecting to the Docker daemon. Make sure it is running."
64 )
65 raise exc
66
67 def deploy_flows(self, flow_runs: list) -> None:
68 """
69 Deploy flow runs on your local machine as Docker containers
70
71 Args:
72 - flow_runs (list): A list of GraphQLResult flow run objects
73 """
74 for flow_run in flow_runs:
75 self.logger.info(
76 "Deploying flow run {}".format(flow_run.id) # type: ignore
77 )
78
79 storage = StorageSchema().load(flow_run.flow.storage)
80 if not isinstance(StorageSchema().load(flow_run.flow.storage), Docker):
81 self.logger.error(
82 "Storage for flow run {} is not of type Docker.".format(flow_run.id)
83 )
84 continue
85
86 env_vars = self.populate_env_vars(flow_run=flow_run)
87
88 if not self.no_pull and storage.registry_url:
89 self.logger.info("Pulling image {}...".format(storage.name))
90 try:
91 pull_output = self.docker_client.pull(
92 storage.name, stream=True, decode=True
93 )
94 for line in pull_output:
95 self.logger.debug(line)
96 self.logger.info(
97 "Successfully pulled image {}...".format(storage.name)
98 )
99 except docker.errors.APIError as exc:
100 self.logger.error("Issue pulling image {}".format(storage.name))
101
102 # Create a container
103 self.logger.debug("Creating Docker container {}".format(storage.name))
104 container = self.docker_client.create_container(
105 storage.name, command="prefect execute cloud-flow", environment=env_vars
106 )
107
108 # Start the container
109 self.logger.debug(
110 "Starting Docker container with ID {}".format(container.get("Id"))
111 )
112 self.docker_client.start(container=container.get("Id"))
113
114 def populate_env_vars(self, flow_run: GraphQLResult) -> dict:
115 """
116 Populate metadata and variables in the environment variables for a flow run
117
118 Args:
119 - flow_run (GraphQLResult): A flow run object
120
121 Returns:
122 - dict: a dictionary representing the populated environment variables
123 """
124 return {
125 "PREFECT__CLOUD__API": config.cloud.api,
126 "PREFECT__CLOUD__AUTH_TOKEN": config.cloud.agent.auth_token,
127 "PREFECT__CLOUD__AGENT__LABELS": str(self.labels),
128 "PREFECT__CONTEXT__FLOW_RUN_ID": flow_run.id, # type: ignore
129 "PREFECT__CLOUD__USE_LOCAL_SECRETS": "false",
130 "PREFECT__LOGGING__LOG_TO_CLOUD": "true",
131 "PREFECT__LOGGING__LEVEL": "DEBUG",
132 "PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS": "prefect.engine.cloud.CloudFlowRunner",
133 "PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS": "prefect.engine.cloud.CloudTaskRunner",
134 }
135
136
137 if __name__ == "__main__":
138 LocalAgent().start()
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
Golden diff:

```diff
diff --git a/src/prefect/agent/local/agent.py b/src/prefect/agent/local/agent.py
--- a/src/prefect/agent/local/agent.py
+++ b/src/prefect/agent/local/agent.py
@@ -5,6 +5,7 @@
 
 from prefect import config, context
 from prefect.agent import Agent
+from prefect.engine.state import Failed
 from prefect.environments.storage import Docker
 from prefect.serialization.storage import StorageSchema
 from prefect.utilities.graphql import GraphQLResult
@@ -78,9 +79,14 @@
 
             storage = StorageSchema().load(flow_run.flow.storage)
             if not isinstance(StorageSchema().load(flow_run.flow.storage), Docker):
-                self.logger.error(
-                    "Storage for flow run {} is not of type Docker.".format(flow_run.id)
+                msg = "Storage for flow run {} is not of type Docker.".format(
+                    flow_run.id
                 )
+                state_msg = "Agent {} failed to run flow: ".format(self.name) + msg
+                self.client.set_flow_run_state(
+                    flow_run.id, version=flow_run.version, state=Failed(state_msg)
+                )
+                self.logger.error(msg)
                 continue
 
             env_vars = self.populate_env_vars(flow_run=flow_run)
@@ -97,7 +103,15 @@
                         "Successfully pulled image {}...".format(storage.name)
                     )
                 except docker.errors.APIError as exc:
-                    self.logger.error("Issue pulling image {}".format(storage.name))
+                    msg = "Issue pulling image {}".format(storage.name)
+                    state_msg = (
+                        "Agent {} failed to pull image for flow: ".format(self.name)
+                        + msg
+                    )
+                    self.client.set_flow_run_state(
+                        flow_run.id, version=flow_run.version, state=Failed(msg)
+                    )
+                    self.logger.error(msg)
 
             # Create a container
             self.logger.debug("Creating Docker container {}".format(storage.name))
```
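The core of the fix is that the agent now reports a terminal `Failed` state through the client instead of only logging. A minimal sketch of that pattern with a mocked client (an illustration of the idea, not Prefect's actual test suite):

```python
from unittest import mock

from prefect.engine.state import Failed

# Stand-ins: a mocked Cloud client and a flow-run record.
client = mock.MagicMock()
flow_run = mock.Mock(id="run-id", version=3)

# What the patched agent does when the run's storage is not Docker-based:
msg = "Storage for flow run {} is not of type Docker.".format(flow_run.id)
client.set_flow_run_state(flow_run.id, version=flow_run.version, state=Failed(msg))

state = client.set_flow_run_state.call_args[1]["state"]
assert isinstance(state, Failed) and state.message == msg
```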
gh_patches_debug_4680 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-5936 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Non-breaking spaces in the featured-content title

**Bug description**

The space in the featured-content preview is not rendered as a non-breaking space. This problem could not be fixed by #5292 because the preview is generated by JavaScript.

**How to reproduce?**

Steps to reproduce the bug:

1. Log in as an administrator;
2. Go to a content page;
3. Feature that content ("mise en Une");
4. Set the title to: "`Comment alerter les secours ?`".

**Expected behavior**

The space should automatically be replaced with a non-breaking space, before or after the characters concerned.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/utils/templatetags/french_typography.py`
Content:
```
1 from django import template
2 from django.template.defaultfilters import stringfilter
3 from django.utils.safestring import mark_safe
4
5 register = template.Library()
6
7
8 @register.filter()
9 @stringfilter
10 def french_typography(str):
11 """
12 Replaces spaces with non-breaking-spaces or narrow non-breaking-spaces
13 before or after some symbols, according to French typography.
14
15 This filter is naive and should not be used on Markdown content.
16 """
17 return mark_safe(
18 # Narrow non-breaking space:  
19 str.replace(' ;', ' ;')
20 .replace(' ?', ' ?')
21 .replace(' !', ' !')
22 .replace(' %', ' %')
23 # Non-breaking space:
24 .replace('« ', '« ')
25 .replace(' »', ' »')
26 .replace(' :', ' :')
27 )
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
Golden diff:

```diff
diff --git a/zds/utils/templatetags/french_typography.py b/zds/utils/templatetags/french_typography.py
--- a/zds/utils/templatetags/french_typography.py
+++ b/zds/utils/templatetags/french_typography.py
@@ -13,6 +13,9 @@
     before or after some symbols, according to French typography.
 
     This filter is naive and should not be used on Markdown content.
+
+
+    Any change here should also be made in assets/js/featured-resource-preview.js
     """
     return mark_safe(
         # Narrow non-breaking space:  
```
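As the added docstring note says, any replacement added here must be mirrored in the JavaScript preview (`assets/js/featured-resource-preview.js`). A quick sanity check of the Python filter itself, assuming the `zds` package and Django are importable:

```python
from zds.utils.templatetags.french_typography import french_typography

# The plain space before '?' should become U+202F (narrow no-break space).
assert french_typography("Comment alerter les secours ?") == "Comment alerter les secours\u202f?"
```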
gh_patches_debug_22737 | rasdani/github-patches | git_diff | aio-libs__aiohttp-6144 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Switch http_parser to llhttp
Node.js decided to move from their hard-coded C parser to this one.
https://github.com/nodejs/http-parser/pull/285#issuecomment-456025694
*Update:*
The proposed parser is https://llhttp.org
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import pathlib
3 import re
4 import sys
5
6 from setuptools import Extension, setup
7
8 if sys.version_info < (3, 7):
9 raise RuntimeError("aiohttp 4.x requires Python 3.7+")
10
11
12 NO_EXTENSIONS = bool(os.environ.get("AIOHTTP_NO_EXTENSIONS")) # type: bool
13 HERE = pathlib.Path(__file__).parent
14 IS_GIT_REPO = (HERE / ".git").exists()
15
16
17 if sys.implementation.name != "cpython":
18 NO_EXTENSIONS = True
19
20
21 if IS_GIT_REPO and not (HERE / "vendor/http-parser/README.md").exists():
22 print("Install submodules when building from git clone", file=sys.stderr)
23 print("Hint:", file=sys.stderr)
24 print(" git submodule update --init", file=sys.stderr)
25 sys.exit(2)
26
27
28 # NOTE: makefile cythonizes all Cython modules
29
30 extensions = [
31 Extension("aiohttp._websocket", ["aiohttp/_websocket.c"]),
32 Extension(
33 "aiohttp._http_parser",
34 [
35 "aiohttp/_http_parser.c",
36 "vendor/http-parser/http_parser.c",
37 "aiohttp/_find_header.c",
38 ],
39 define_macros=[("HTTP_PARSER_STRICT", 0)],
40 ),
41 Extension("aiohttp._helpers", ["aiohttp/_helpers.c"]),
42 Extension("aiohttp._http_writer", ["aiohttp/_http_writer.c"]),
43 ]
44
45
46 txt = (HERE / "aiohttp" / "__init__.py").read_text("utf-8")
47 try:
48 version = re.findall(r'^__version__ = "([^"]+)"\r?$', txt, re.M)[0]
49 except IndexError:
50 raise RuntimeError("Unable to determine version.")
51
52 install_requires = [
53 "charset-normalizer>=2.0,<3.0",
54 "multidict>=4.5,<7.0",
55 "async_timeout>=4.0a2,<5.0",
56 'asynctest==0.13.0; python_version<"3.8"',
57 "yarl>=1.0,<2.0",
58 "typing_extensions>=3.7.4",
59 "frozenlist>=1.1.1",
60 "aiosignal>=1.1.2",
61 ]
62
63
64 def read(f):
65 return (HERE / f).read_text("utf-8").strip()
66
67
68 args = dict(
69 name="aiohttp",
70 version=version,
71 description="Async http client/server framework (asyncio)",
72 long_description="\n\n".join((read("README.rst"), read("CHANGES.rst"))),
73 long_description_content_type="text/x-rst",
74 classifiers=[
75 "License :: OSI Approved :: Apache Software License",
76 "Intended Audience :: Developers",
77 "Programming Language :: Python",
78 "Programming Language :: Python :: 3",
79 "Programming Language :: Python :: 3.7",
80 "Programming Language :: Python :: 3.8",
81 "Programming Language :: Python :: 3.9",
82 "Programming Language :: Python :: 3.10",
83 "Development Status :: 5 - Production/Stable",
84 "Operating System :: POSIX",
85 "Operating System :: MacOS :: MacOS X",
86 "Operating System :: Microsoft :: Windows",
87 "Topic :: Internet :: WWW/HTTP",
88 "Framework :: AsyncIO",
89 ],
90 author="Nikolay Kim",
91 author_email="[email protected]",
92 maintainer=", ".join(
93 (
94 "Nikolay Kim <[email protected]>",
95 "Andrew Svetlov <[email protected]>",
96 )
97 ),
98 maintainer_email="[email protected]",
99 url="https://github.com/aio-libs/aiohttp",
100 project_urls={
101 "Chat: Gitter": "https://gitter.im/aio-libs/Lobby",
102 "CI: GitHub Actions": "https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI", # noqa
103 "Coverage: codecov": "https://codecov.io/github/aio-libs/aiohttp",
104 "Docs: Changelog": "https://docs.aiohttp.org/en/stable/changes.html",
105 "Docs: RTD": "https://docs.aiohttp.org",
106 "GitHub: issues": "https://github.com/aio-libs/aiohttp/issues",
107 "GitHub: repo": "https://github.com/aio-libs/aiohttp",
108 },
109 license="Apache 2",
110 packages=["aiohttp"],
111 python_requires=">=3.7",
112 install_requires=install_requires,
113 extras_require={
114 "speedups": [
115 "aiodns>=1.1",
116 "Brotli",
117 "cchardet",
118 ],
119 },
120 include_package_data=True,
121 )
122
123 if not NO_EXTENSIONS:
124 print("*********************")
125 print("* Accelerated build *")
126 print("*********************")
127 setup(ext_modules=extensions, **args)
128 else:
129 print("*********************")
130 print("* Pure Python build *")
131 print("*********************")
132 setup(**args)
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
Golden diff:

```diff
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,7 @@
     NO_EXTENSIONS = True
 
 
-if IS_GIT_REPO and not (HERE / "vendor/http-parser/README.md").exists():
+if IS_GIT_REPO and not (HERE / "vendor/llhttp/README.md").exists():
     print("Install submodules when building from git clone", file=sys.stderr)
     print("Hint:", file=sys.stderr)
     print("  git submodule update --init", file=sys.stderr)
@@ -33,10 +33,13 @@
         "aiohttp._http_parser",
         [
             "aiohttp/_http_parser.c",
-            "vendor/http-parser/http_parser.c",
             "aiohttp/_find_header.c",
+            "vendor/llhttp/build/c/llhttp.c",
+            "vendor/llhttp/src/native/api.c",
+            "vendor/llhttp/src/native/http.c",
         ],
-        define_macros=[("HTTP_PARSER_STRICT", 0)],
+        define_macros=[("LLHTTP_STRICT_MODE", 0)],
+        include_dirs=["vendor/llhttp/build"],
     ),
     Extension("aiohttp._helpers", ["aiohttp/_helpers.c"]),
     Extension("aiohttp._http_writer", ["aiohttp/_http_writer.c"]),
```
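A plausible smoke check after rebuilding with the vendored llhttp sources (`git submodule update --init`, then the accelerated build): aiohttp silently falls back to its pure-Python parser when the C extension is missing, so the module of the parser class shows which one is active. This assumes aiohttp's usual try/except import layout in `aiohttp/http_parser.py`:

```python
from aiohttp import http_parser

# 'aiohttp._http_parser' -> llhttp-backed C extension compiled and loaded;
# 'aiohttp.http_parser'  -> pure-Python fallback in use.
print(http_parser.HttpRequestParser.__module__)
```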